The distinction between frozen and flexible language maps onto the deeper distinction between knowing and understanding. A practitioner who knows a domain can deploy its vocabulary, cite its concepts, produce its expected outputs. A practitioner who understands the domain can adapt the vocabulary to new contexts, recognize when concepts fail to apply, revise outputs when situations demand revision. Under normal conditions, the difference is invisible. The frozen language performs as well as the flexible language because the conditions do not stress the difference. Under abnormal conditions — novel problems, unusual constraints, situations that fall outside the training distribution — the frozen language breaks because it was never flexible enough to bend.
The Deleuze error Segal describes in You On AI is Gee's paradigmatic case. Claude produced a passage connecting Csikszentmihalyi's flow concept to Deleuze's concept of smooth space. The passage was fluent, elegant, argumentatively coherent — and philosophically wrong. Deleuze's smooth space has almost nothing to do with flow in Csikszentmihalyi's sense. The passage worked rhetorically because the training data contained enough surface patterns to generate text that looked like a competent philosophical connection. The result was frozen language deployed at exactly the kind of task — drawing original connections between thinkers from different traditions — where genuine understanding is most required and was most absent.
Frozen language is not unique to AI. Students have produced it for centuries — essays that deploy the teacher's vocabulary without having genuinely engaged with the material. Corporate communications generate it by the cubic meter. What makes AI distinctive is the scale and fluency at which frozen language is produced. The human student's frozen language is usually detectable to an attentive reader — the vocabulary is not quite right, the transitions are awkward, the argument does not quite fit together. AI's frozen language is harder to detect. The vocabulary is precise. The transitions are smooth. The argument fits together on the surface. Detection requires situated understanding deep enough to register the absence beneath the surface — the kind of situated understanding that only sustained practice in the relevant Discourse produces.
The practical consequence is that evaluating AI-generated output requires depth the output itself does not evidence. The practitioner who can reliably distinguish flexible from frozen language is the practitioner who possesses the situated meaning that flexible language expresses. Practitioners who lack that depth cannot reliably evaluate AI output at the level of meaning — they can check surface correctness (does the code compile? does the brief cite correctly?) but not deeper adequacy (does the code actually work under realistic conditions? does the brief actually address the strongest counterargument?). This is why AI use is most reliably productive in the hands of senior practitioners and most prone to invisible failure in the hands of juniors who have not yet developed the depth AI evaluation requires.
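The gap between surface correctness and deeper adequacy can be made concrete with a small, hypothetical example (the function names and scenario are invented for illustration): code that passes the obvious check a junior reviewer would run, yet fails under a realistic condition that only domain experience would flag.

```python
def average_rating(ratings):
    """Return the mean of a list of numeric ratings."""
    # Fluent, idiomatic, and passes the happy-path test below.
    return sum(ratings) / len(ratings)

# Surface check: the code runs and the obvious test passes.
assert average_rating([4, 5, 3]) == 4.0

# Deeper adequacy: in a real system, new users have no ratings yet.
# The fluent version crashes on that case; seeing the gap requires
# knowing the domain, not just reading the code.
try:
    average_rating([])
except ZeroDivisionError:
    pass  # the failure a surface-correct implementation hides

# A revision that survives the realistic case:
def average_rating_safe(ratings, default=None):
    """Mean of ratings, or `default` when there are none yet."""
    return sum(ratings) / len(ratings) if ratings else default

assert average_rating_safe([]) is None
assert average_rating_safe([2, 4]) == 3.0
```

The point is not the bug itself but who can see it: the happy-path assertion is the kind of check anyone can run, while anticipating the empty-ratings case is the kind of situated judgment the surrounding text argues only sustained practice produces.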
Gee developed the concept in the context of his work on cybersapien literacy (2024) and articulated it most directly in his 2025 RELC Journal interview. The term draws on a long tradition in cognitive science and linguistics of distinguishing inert, encapsulated knowledge from generative, adaptive competence — a tradition that includes Whitehead's inert ideas (1929), Polanyi's tacit knowledge, and contemporary work on transfer in learning science.
Surface correctness without depth. Frozen language passes superficial tests while lacking the generative understanding that produces flexible language.
Breakage under novelty. Frozen language fails when conditions shift because it was never flexible enough to adapt.
Invisible under normal conditions. The distinction is only visible under stress that tests the difference.
AI as scale producer. Machine-generated text produces frozen language at unprecedented scale and with unprecedented surface fluency.
Depth required to detect. Only practitioners with situated meaning in the relevant Discourse can reliably recognize when fluent output is frozen.
Whether frozen language can be prevented or merely detected is a live question in educational and organizational practice. Prevention would require tools that support the learning processes producing flexible language rather than generating output that substitutes for them. Detection requires cultivated depth in evaluators. Both strategies face the structural challenge that the market rewards output fluency, which is precisely what frozen language provides, and penalizes the slower, more difficult work of building the situated understanding from which flexible language grows.