The decontextualization machine is On AI's name for the fundamental operation of large language models. The training process extracts statistical patterns from trillions of tokens of human-produced text. Every text in the training corpus was produced in a specific context: by a specific person, for a specific audience, in response to a specific situation, drawing on knowledge the producer selected because of specific contextual factors. The model retains none of this context. It retains the text, the propositional residue of the situated practice that produced it. The patterns the model extracts capture what was said without preserving why it was said, by whom, or in response to what situation. Decontextualization is not a failure of the technology. It is the technology's design.
The phrase names what Lave's framework identifies as the central epistemic operation of contemporary AI systems: the extraction of general patterns from specific contexts, producing outputs that are plausible across contexts without being grounded in any particular one. This operation is the source of the models' extraordinary utility; they can generate competent output on topics their users have no expertise in, across domains no human practitioner could master individually, at scales no situated community could match. It is also the source of their characteristic failure mode: the production of outputs that are contextually rootless and that require human practitioners to supply the context the model has stripped away.
The capacity to supply that context is itself a product of situated engagement. The lawyer who can evaluate an AI-generated brief has that evaluative capacity because she spent years reading cases, arguing before judges, watching how specific arguments land in specific courtrooms. The developer who can evaluate AI-generated code has that capacity because she spent years building systems, watching them break, learning through situated encounters what the code does not say about itself. If the next generation develops its evaluative capacity through AI-assisted practice rather than situated engagement, the capacity will be thinner — less equipped to detect the gap between what the model produced and what the situation requires.
The framing connects Lave's anthropological work to Lucy Suchman's critique of computational models of action in Plans and Situated Actions (1987). Suchman showed that human action is fundamentally situated, improvised in response to specific circumstances, and cannot be reduced to the execution of pre-specified plans. Large language models are vastly more sophisticated than the expert systems Suchman critiqued, but the structural issue persists. The model operates on decontextualized representations. The human operates in a specific, situated context. The gap is bridged — when it is bridged — by the human's capacity to supply what the model lacks.
The solution On AI proposes is not rejection of decontextualized information but recontextualization — the deliberate, institutional, sustained effort to embed AI-mediated information within the situated practices through which practitioners develop the judgment to use it wisely. The decontextualization machine is here to stay. The question is whether the institutions that surround it will preserve the conditions under which practitioners develop the thick understanding that makes decontextualized output useful rather than merely abundant.
The phrase appears in Chapter 8 of On AI, which applies Lave's framework of situated cognition to a structural characterization of contemporary AI systems. It extends the analytical tradition of Suchman's Plans and Situated Actions (1987), which Lave herself publicly endorsed, into the era of large language models.
Decontextualization is the tool's design, not its flaw. Language models exist to extract general patterns from specific contexts. That is what they do well.
The outputs arrive without context. What the tool produces is plausible across contexts but situated in none, requiring human practitioners to supply the contextual judgment the tool cannot provide.
Supplying context requires situated experience. The capacity to evaluate AI output is itself a product of the situated engagement that AI tools make optional.
Recontextualization is the institutional response. Not refusing the tools but embedding them within practices that preserve situated engagement — code reviews as learning events, mentorship as collaborative practice, assessment that captures understanding rather than output.
A minority position holds that the decontextualization-recontextualization framing overstates the problem: human practitioners have always worked with decontextualized information (textbooks, documentation, reference manuals) and have always had to supply context. The Lavean response is that the scale and accessibility of AI-produced decontextualized information are categorically different from prior forms, and that the economic incentives now push against the situated engagement that previously supplied context as a by-product of daily work.