Meadows's name for the system trap in which standards decline through a sequence of individually acceptable reductions, each of which becomes the reference point for the next comparison. The mechanism is cumulative and invisible: no single step triggers alarm, but the steps compound into substantial decline. In the AI ecosystem, the drift operates on the quality of human cognitive engagement. When AI tools produce output that is good enough — competent, plausible, structurally sound — the standard for what counts as acceptable work gradually adjusts downward to match the tool's output.
The first AI-generated draft is compared to a skilled human's draft and found to be slightly less nuanced but dramatically faster. The comparison is favorable on balance. The AI draft is accepted. The standard shifts: acceptable work now includes output produced without the deep engagement that characterized the previous standard. The next comparison is made against the new standard. The AI draft meets it. The standard shifts again.
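The loop described above can be sketched as a small simulation. The function name, the 2% per-draft shortfall, and the adaptation rate are illustrative assumptions, not figures from Meadows; the point is only the shape of the dynamic: each output falls just short of the current standard, and the standard then adjusts toward that output.

```python
def drift(initial_standard=100.0, shortfall=0.98, adaptation=0.5, steps=60):
    """Sketch of the eroding-goals loop: each draft falls slightly short
    of the current standard, and the standard adjusts toward recent
    output rather than toward any fixed external reference."""
    standard = initial_standard
    history = [standard]
    for _ in range(steps):
        output = standard * shortfall                  # "slightly less nuanced" each time
        standard += adaptation * (output - standard)   # standard drifts toward the output
        history.append(standard)
    return history

h = drift()
step_drop = (h[0] - h[1]) / h[0]     # a single comparison: ~1%, imperceptible
total_drop = (h[0] - h[-1]) / h[0]   # sixty comparisons: roughly 45% decline
```

Under these assumed parameters, any one step moves the standard by about one percent, which is exactly the "favorable on balance" comparison in the text, while the accumulated trajectory loses nearly half the original standard.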
Each shift is imperceptible. The aggregate trajectory, over months and years of accumulated shifts, is a substantial reduction in the depth, originality, and hard-won specificity of the work the system produces. Edo Segal identifies this trap when he describes the Deleuze failure — the moment Claude produced a passage of philosophical elegance that was philosophically wrong, and the recognition that fluent prose had nearly passed for genuine thought.
Meadows's escape: anchor standards to an external reference that does not drift. In manufacturing, this means testing against absolute specifications rather than relative comparison to recent output. In the cognitive domain, it means standards calibrated to the process — whether the person underwent the cognitive engagement necessary to develop genuine understanding — rather than the product, whose surface quality AI has made unreliable as an indicator of depth.
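The difference between the self-referential standard and Meadows's external anchor can be made concrete in the same toy model. The `spec` value, the shortfall, and the adaptation rate are illustrative assumptions; the contrast, not the numbers, is the point: when every comparison is made against a fixed specification, the standard has nothing to drift toward.

```python
def final_standard(steps=60, shortfall=0.98, adaptation=0.5, anchored=False):
    """Compare a self-referential standard (adjusts toward recent output)
    with one anchored to a fixed external specification."""
    spec = 100.0        # the external, non-drifting reference
    standard = spec
    for _ in range(steps):
        output = standard * shortfall
        if anchored:
            standard = spec                               # test against the spec
        else:
            standard += adaptation * (output - standard)  # test against recent output
    return standard

drifted = final_standard(anchored=False)  # self-referential: decays toward ~54.7
held = final_standard(anchored=True)      # anchored: stays at 100.0
```

The cognitive-domain analogue of `spec` is the process anchor the text describes: a criterion that does not regenerate itself from whatever the system most recently produced.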
Meadows derived the drift pattern from observations of quality degradation in organizations that measured only current performance against recent history. The logic applies wherever the reference point for 'acceptable' is generated by the system itself rather than by external criteria — a structural feature now universal in AI-augmented knowledge work.
Cumulative invisibility. No single step triggers alarm; the trajectory is apparent only in comparison to distant history.
Self-referential standard. When 'acceptable' is defined relative to recent output, standards drift with the output.
Surface indistinguishability. AI produces work that looks like the product of deep engagement, defeating surface-level quality control.
Process anchors. Standards tied to the cognitive engagement that produced the work resist drift better than output-quality standards.
Connection to smoothness. The drift is invisible precisely because smooth output looks like the product of depth.