Metaphorical Drift is the hypothesized gradual attenuation of experiential richness in cultural metaphors caused by the recursive feedback loop between human-generated and AI-generated language. Humans produce language saturated with conceptual metaphors grounded in embodied experience. That language becomes training data. AI systems extract statistical patterns and produce new language replicating those patterns. The new language is read by humans who absorb its metaphorical structure. The absorbed metaphors shape subsequent human language production. The new human language, now partly shaped by AI-generated patterns, becomes training data for the next generation of AI systems. Each turn of the loop is a potential site of drift — a gradual shift in conceptual structures away from embodied grounding and toward statistical regularity. The drift may be undetectable in any single iteration but cumulative across thousands.
The proposed mechanism can be stated precisely. A human speaker choosing among "She grasped the concept," "She caught the concept," and "She seized the concept" is making a selection influenced, below conscious awareness, by the distinct motor programs associated with grasping, catching, and seizing: different hand configurations, different force dynamics, different relationships between agent and object. The selection carries information about the speaker's embodied understanding of the cognitive event. An AI system making the same selection is operating on distributional statistics: which variant appears most frequently in similar contexts in the training data. The selection may correlate with embodied distinctions because the training data was produced by embodied speakers whose selections were influenced by those distinctions. But the correlation is indirect, mediated by statistics rather than by experience.
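The contrast can be made concrete with a toy model. The sketch below is purely illustrative; every feature, weight, and count is invented, and no real speaker or language model works this simply. It contrasts a selector that scores verb choices against embodied motor features with one that picks the most frequent variant in context. The two can agree on an output while arriving at it by entirely different routes.

```python
# Toy contrast: embodied vs. distributional selection of a metaphorical verb.
# All features, weights, and counts are invented for illustration.

# An embodied speaker's choice is influenced by motor-program features:
# hand configuration (closure), force dynamics, suddenness of the event.
MOTOR_FEATURES = {
    "grasped": {"closure": 0.9, "force": 0.5, "suddenness": 0.3},
    "caught":  {"closure": 0.7, "force": 0.4, "suddenness": 0.9},
    "seized":  {"closure": 0.8, "force": 0.9, "suddenness": 0.7},
}

# A language model's choice reflects corpus frequency in similar contexts.
CONTEXT_COUNTS = {"grasped": 1200, "caught": 310, "seized": 95}

def embodied_select(intended: dict[str, float]) -> str:
    """Pick the verb whose motor profile best matches the felt event."""
    def distance(verb: str) -> float:
        feats = MOTOR_FEATURES[verb]
        return sum((feats[k] - intended[k]) ** 2 for k in intended)
    return min(MOTOR_FEATURES, key=distance)

def distributional_select() -> str:
    """Pick the verb most frequent in this context window."""
    return max(CONTEXT_COUNTS, key=CONTEXT_COUNTS.get)

# A sudden, moderately forceful mental "catching" of an idea:
print(embodied_select({"closure": 0.7, "force": 0.4, "suddenness": 0.9}))  # caught
print(distributional_select())  # grasped: frequency alone, no motor signal
```

The two selectors can coincide whenever frequency happens to track the embodied distinction; the point of the sketch is that nothing in the second selector forces them to.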
As AI-generated text becomes an increasing proportion of the text circulating in the world — and therefore an increasing proportion of future training data — the correlation attenuates. Each generation of the loop moves the selection further from its embodied ground. The concern is not that AI will produce wrong metaphors; it is that AI will produce metaphors that are statistically correct but experientially thin — metaphors that follow the distributional patterns of human language without carrying the full experiential weight the patterns originally encoded. The surface form is preserved. The cognitive depth is diminished. And the diminishment is invisible to casual inspection.
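One hypothetical way to formalize the attenuation claim (the notation is introduced here for illustration, not drawn from any literature): let $g_t \in [0, 1]$ be the average embodied grounding of the metaphors in circulation at generation $t$, let $m$ be the fraction of circulating text that has passed through a model, and let $r < 1$ be the fraction of grounding that survives one statistical pass. Then

$$
g_{t+1} = (1 - m)\,g_t + m\,r\,g_t = \bigl(1 - m(1 - r)\bigr)\,g_t,
\qquad
g_t = \bigl(1 - m(1 - r)\bigr)^{t}\,g_0 .
$$

For small $m(1 - r)$ each step is imperceptible, yet $g_t$ decays geometrically toward zero: precisely the profile of a drift that is undetectable in any single iteration but cumulative across thousands.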
If conceptual metaphor theory is correct that metaphors do not merely express thoughts but constitute them, then the drift has implications for the quality of thought itself. If metaphors circulating through a culture become experientially thinner — if the connection between linguistic form and the bodily experience grounding it weakens — then the thoughts those metaphors structure become correspondingly thinner. Concepts lose their experiential richness. Reasoning loses its embodied grounding. The culture's cognitive architecture shifts, subtly and incrementally, from a foundation built on felt physical experience to a foundation built on statistical patterns that approximate felt experience without possessing it.
The drift is speculative in its magnitude and timeline but not in its mechanism. The mechanism follows directly from three well-established observations: that human language encodes embodied conceptual metaphors, that AI systems learn to replicate the patterns without the embodied grounding, and that AI-generated text is entering the language pool humans absorb and future AI systems train on. The optimistic reading is that embodied human evaluation — the ground check — will filter out metaphors that are statistically fluent but experientially empty. The pessimistic reading is that the sheer volume of AI-generated text overwhelms evaluative capacity, allowing experientially thin metaphors to circulate unchecked. The realistic reading lies between and depends on institutional design: educational practices that develop embodied evaluative capacity, cultural norms that apply evaluative friction to AI-generated content, and institutional mechanisms that preserve significant proportions of human-generated, embodied-experience-grounded text in training pipelines.
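The three readings can be distinguished within the same toy recurrence by adding a re-grounding term: each generation, embodied evaluation and fresh human writing restore some fraction of grounding. A minimal sketch, with all parameter values invented for illustration:

```python
# Toy model of the three readings. Parameters are illustrative only.
# g: grounding coefficient of circulating metaphors (1.0 = fully embodied)
# m: share of circulating text that has passed through a model
# r: embodied signal surviving one statistical pass (r < 1)
# f: re-grounding per generation from embodied evaluation and fresh human text

def simulate(m: float, r: float, f: float, generations: int = 1000) -> float:
    g = 1.0
    for _ in range(generations):
        g = (1 - m) * g + m * r * g   # attenuation through the loop
        g = g + f * (1.0 - g)         # evaluative friction restores grounding
    return g

# Pessimistic: heavy AI share, negligible filtering; grounding collapses.
print(f"pessimistic: {simulate(m=0.8, r=0.9, f=0.0005):.3f}")   # ~0.006
# Optimistic: smaller AI share, strong evaluation; grounding stays high.
print(f"optimistic:  {simulate(m=0.3, r=0.98, f=0.05):.3f}")    # ~0.898
# Realistic: an intermediate equilibrium set by institutional design.
print(f"realistic:   {simulate(m=0.6, r=0.95, f=0.01):.3f}")    # ~0.252
```

In this model, grounding does not vanish whenever $f > 0$: it settles at the fixed point $g^* = f / (1 - a(1 - f))$ with $a = 1 - m(1 - r)$, so the long-run outcome is a function of exactly the parameters, evaluative capacity and training-data composition, that the paragraph above names as institutional choices.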
The concept of metaphorical drift emerges from the intersection of conceptual metaphor theory with the observable feedback loop between human-generated and AI-generated language since approximately 2022. Analogous concerns about model collapse, the degradation of AI systems trained on their own outputs, have been formalized in the machine learning literature. Metaphorical drift extends the concern to the cultural and cognitive level, asking what happens to human thought when the linguistic ecology within which it develops is increasingly shaped by disembodied processing. The hypothesis rests on five linked claims:
Recursive feedback loop. Human language trains AI; AI produces language; humans absorb AI language; absorbed patterns shape subsequent human language; the cycle repeats.
Statistical correlation vs. embodied grounding. AI selection correlates with embodied distinctions indirectly through statistics rather than directly through experience.
Cumulative attenuation. Each iteration potentially moves the linguistic surface further from the embodied substrate that originally produced it.
Invisibility of drift. The surface forms persist; the diminishment of experiential content is undetectable by casual inspection.
Institutional dependence. Whether drift is bounded or accelerating depends on educational, cultural, and institutional structures that determine evaluative capacity and training-data composition.
Whether metaphorical drift is a genuine phenomenon or a speculative concern is itself debated. Supporters point to analogous phenomena in machine learning (model collapse) and in historical linguistics (semantic bleaching of frequently used metaphors). Skeptics argue that human embodied evaluation is robust enough to resist cultural drift driven by AI-generated content, and that the metaphorical inventory of languages has always evolved through processes not obviously different from what AI introduces.