CONCEPT

Drift (System Behavior)

The gradual divergence between a system's intended behavior and its actual operation — often attributed to complexity but substantially driven by accumulated micro-errors from degraded judgment.

Drift describes the slow, persistent divergence between what a system was designed to do and what it actually does over time. In organizational contexts, drift manifests as the gap between strategic intent and operational reality, between documented processes and actual practices, between the product roadmap and the shipped product. The phenomenon is familiar and typically attributed to complexity, changing requirements, or inevitable entropy. Leroy's attention residue framework suggests an underdiagnosed contributing factor: the systematic accumulation of small errors introduced by evaluations performed under cognitive load. Each residue-impaired judgment approves an output that is adequate but not quite right — a subtle architectural compromise, a strategic decision that serves short-term pressures better than long-term goals, a feature that works but doesn't align with the product's deeper purpose. Individually, these deviations are negligible; collectively, they bend the system's trajectory.

In the AI Story


Drift is structurally invisible because it operates below the threshold of any single decision's observability. No alarm sounds when a builder approves output that is good rather than excellent. No metric declines when an architectural choice is competent rather than inspired. The deviation enters the system unmarked, becomes a foundation for subsequent decisions, and propagates through dependency chains. By the time drift is recognized — when the gap between intention and reality is large enough to force organizational attention — months or years of small deviations have accumulated, and the cost of correction is substantial. The organization typically attributes the drift to poor execution or changing conditions; Leroy's framework suggests that impaired direction — residue-degraded judgment at key evaluative nodes — is at least as consequential.

The compounding mechanism is multiplicative. Each layer of development builds on previous layers, inheriting their decisions as constraints. If Layer 1 contains a subtle architectural flaw introduced by a residue-impaired evaluation, Layer 2 builds on the flawed foundation and — if its evaluation is also residue-impaired — may introduce a second flaw that the first one made more likely. Layer 3 inherits both flaws, and by Layer 10, the system's behavior reflects the compounded consequences of decisions that individually appeared sound but collectively produced a structure that no single evaluator intended. The drift is emergent, authored by no one, and correctable only through expensive refactoring that the organization's quarterly pressures rarely permit.
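
How quickly micro-errors compound can be made concrete with a toy simulation. The sketch below is illustrative only: the per-layer flaw probability, the inheritance penalty, and the layer count are assumptions chosen for readability, not figures drawn from the drift literature.

    import random

    def simulate_drift(layers=10, base_flaw_rate=0.05, inherited_penalty=0.03,
                       trials=10_000, seed=0):
        """Toy model of compounding drift: each layer inherits earlier flaws,
        and every inherited flaw slightly raises the odds of a new one."""
        rng = random.Random(seed)
        total_flaws = 0
        drifted = 0  # runs that end with at least one flaw
        for _ in range(trials):
            flaws = 0
            for _ in range(layers):
                p = base_flaw_rate + inherited_penalty * flaws  # flaws beget flaws
                if rng.random() < p:
                    flaws += 1
            total_flaws += flaws
            drifted += flaws > 0
        return total_flaws / trials, drifted / trials

    if __name__ == "__main__":
        mean_flaws, drifted_share = simulate_drift()
        print(f"mean flaws after 10 layers: {mean_flaws:.2f}")
        print(f"runs with at least one flaw: {drifted_share:.1%}")

Even with a per-layer error rate of a few percent, roughly two in five simulated runs carry at least one unintended deviation by the tenth layer: drift that no single evaluation introduced deliberately.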

AI tools accelerate drift through two pathways. First, they increase the rate of decision-making: more outputs per day means more evaluative judgments per day, each an opportunity for subtle error under residue. Second, they compress feedback cycles that previously provided correction opportunities. In pre-AI development, the manual implementation of a flawed design often surfaced the flaw before it propagated far — the developer building the feature would encounter the architectural problem and raise it. When AI implements rapidly and the builder monitors from a distance, this embodied error-detection is lost. Flaws that manual implementation would have caught propagate into production, where they are harder to detect and more expensive to fix.
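
A back-of-the-envelope calculation shows how the two pathways multiply rather than add. The numbers below are assumptions chosen for illustration, not measurements: they compare a slower workflow with high built-in error detection against a faster one where detection happens at a distance.

    def undetected_errors_per_week(decisions_per_day, error_rate,
                                   detection_rate, days_per_week=5):
        """Expected evaluative errors that slip past review in a week."""
        errors = decisions_per_day * error_rate * days_per_week
        return errors * (1 - detection_rate)

    # Assumed figures: slower manual work makes fewer evaluative calls per day
    # but surfaces more flaws during implementation; AI-assisted work makes
    # more calls and catches fewer flaws before they propagate.
    manual = undetected_errors_per_week(decisions_per_day=10,
                                        error_rate=0.05, detection_rate=0.6)
    assisted = undetected_errors_per_week(decisions_per_day=40,
                                          error_rate=0.05, detection_rate=0.3)

    print(f"manual workflow:      {manual:.1f} undetected errors per week")
    print(f"AI-assisted workflow: {assisted:.1f} undetected errors per week")

Under these assumptions the per-decision error rate never changes, yet the weekly flow of undetected errors grows sevenfold, because the higher volume of judgments and the loss of embodied error-detection compound.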

Organizational responses to drift typically involve process reforms: more documentation, more review layers, stricter approval workflows. Leroy's framework suggests these responses miss the mechanism. Adding review layers increases the number of evaluators but doesn't address the cognitive state of those evaluators — if each reviewer is carrying residue from multiple context switches, adding more residue-impaired reviews doesn't improve quality. The effective intervention is reducing the residue load at each evaluative node: assigning reviewers to fewer simultaneous projects, protecting their cognitive resources through workflow design, and measuring the quality of their judgments rather than the quantity of their approvals.
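
The same argument can be stated probabilistically. In the sketch below the detection rates are assumptions for illustration: a residue-impaired reviewer is taken to catch 20% of subtle flaws and a reviewer with protected attention 60%.

    def catch_probability(detection_rates):
        """Probability that at least one reviewer in a chain catches a flaw,
        assuming reviewers miss flaws independently."""
        miss = 1.0
        for p in detection_rates:
            miss *= 1 - p
        return 1 - miss

    # Assumed detection rates, for illustration only.
    three_impaired = catch_probability([0.2, 0.2, 0.2])  # about 49%
    one_protected = catch_probability([0.6])              # 60%

    print(f"three residue-impaired reviewers catch {three_impaired:.0%} of flaws")
    print(f"one protected reviewer catches {one_protected:.0%} of flaws")

Under these assumptions a single protected reviewer outperforms three impaired ones, and the independence assumption is generous to the stacked reviews: reviewers impaired in the same way plausibly miss the same flaws, which would widen the gap further.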

Origin

Drift as a concept has multiple origins across domains. In navigation, drift is wind or current pushing a vessel off course. In organizational sociology, it appears in Scott Snook's 'practical drift' and Diane Vaughan's 'normalization of deviance' — the gradual acceptance of deviations from standards. In systems engineering, it's the progressive divergence between specification and implementation. The application to AI-augmented work as a consequence of residue-impaired judgment is novel to the Leroy simulation, synthesizing her attention research with organizational drift literature to identify a specific, measurable, and previously underdiagnosed mechanism producing the divergence that practitioners recognize but struggle to explain.

Key Ideas

Micro-errors compound. Individually negligible deviations from optimal decisions accumulate through dependency chains, producing emergent system behavior that no single evaluator intended or approved.

Invisible at decision-time. The moment of drift introduction — a residue-impaired evaluation approving adequate rather than excellent output — looks indistinguishable from competent performance and leaves no diagnostic markers.

Attributed incorrectly. Organizations blame drift on complexity, changing requirements, or execution failures, missing the role of impaired direction from evaluators whose judgment was degraded by accumulated context-switching.

Accelerated by AI. Faster production and compressed feedback cycles increase both the rate of decision-making (more opportunities for error) and the probability that errors propagate undetected into production.


Further reading

  1. Diane Vaughan, The Challenger Launch Decision (1996)
  2. Scott A. Snook, Friendly Fire (2000)
  3. Charles Perrow, Normal Accidents (1984)
  4. Sophie Leroy, 'Why Is It So Hard to Do My Work?' (2009)
  5. Karl E. Weick and Kathleen M. Sutcliffe, Managing the Unexpected (2007)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.