Quality debt is the organizational analogue of technical debt, but structurally more dangerous because it is invisible to the people who generate it. Where technical debt arises from knowing the right solution and choosing an expedient shortcut, quality debt arises from residue-impaired judgment that approves outputs that appear adequate but contain subtle flaws. The builder carrying attention residue from multiple context switches evaluates AI-generated code, text, or design with depleted cognitive resources. Her evaluation passes outputs that fully resourced judgment might have refined, redirected, or rejected. The approved outputs propagate through dependency chains, become foundations for subsequent work, and bend the system's trajectory away from its intended course. Unlike technical debt, quality debt leaves no markers — no TODO comments, no obvious code smells — only a gradual drift between what the system should do and what it does.
The invisibility is structural. Technical debt is, in principle, knowable: the engineer who wrote expedient code knows she cut a corner and often documents it. Quality debt has no such transparency. The builder who approved a subtly flawed output under residue influence didn't know her judgment was impaired — the impairment operates below subjective awareness. She felt competent, the output looked right, and she moved to the next evaluation. The flaw embedded in her approval remains unmarked, discovered only when its consequences manifest downstream: a scalability problem six months later, a strategic misalignment revealed by changing market conditions, an architectural decision that makes future modifications prohibitively expensive.
Quality debt compounds through organizational networks in ways technical debt does not. A residue-impaired evaluation by Builder A produces output with a subtle inadequacy. Builder B receives A's output as input, and if B is also carrying residue, her capacity to detect A's inadequacy is diminished. The flaw propagates to Builder C, whose own residue-impaired evaluation fails to catch it. At each stage, the probability that the evaluation is sound is less than one, and the probabilities multiply: if each evaluation has a 90% chance of being sound, a three-stage chain is flaw-free only 0.9³ ≈ 73% of the time — a 27% chance that at least one subtle flaw survives the chain. In AI-augmented organizations where handoffs occur hourly rather than weekly, and where each builder performs dozens of evaluations daily under accumulated residue, the compounding is severe.
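The arithmetic above can be sketched directly. The 90% reliability figure and the three-stage chain are illustrative assumptions from the text, not measured values; the 70% impaired rate below is a further hypothetical to show how quickly the chain degrades.

```python
# Sketch: how per-evaluation reliability compounds across a serial
# handoff chain, assuming each evaluation is sound independently.
# All rates here are illustrative assumptions, not measurements.

def chain_flaw_free_probability(per_eval_reliability: float, stages: int) -> float:
    """Probability that every evaluation in a serial chain is sound."""
    return per_eval_reliability ** stages

# Fully resourced judgment: 90% sound per evaluation, three stages.
p_clean = chain_flaw_free_probability(0.90, 3)
print(f"clean chain: {p_clean:.1%}, at least one flaw: {1 - p_clean:.1%}")
# → clean chain: 72.9%, at least one flaw: 27.1%

# Hypothetical residue-impaired judgment: 70% sound per evaluation.
p_clean_impaired = chain_flaw_free_probability(0.70, 3)
print(f"impaired chain flaw-free: {p_clean_impaired:.1%}")
# → impaired chain flaw-free: 34.3%
```

The multiplicative structure is the point: a modest per-evaluation degradation (90% to 70%) cuts the chance of a clean three-stage chain roughly in half, and longer or faster handoff chains amplify the effect further.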
The manifestation is drift — the gradual divergence between intended and actual system behavior that organizations attribute to complexity, changing requirements, or inevitable entropy. Leroy's framework suggests an additional factor: systematic evaluation degradation producing a steady stream of subtle errors that individually are insignificant but collectively bend trajectories. The drift is slow enough that no single quarter reveals it, yet fast enough that multi-year retrospectives show clear divergence between the strategic direction articulated at the start and the operational reality that emerged. The divergence is typically blamed on execution failures; the monitoring tax framework suggests that impaired direction — residue-degraded judgment at evaluative nodes — is at least as consequential.
Measuring quality debt requires new instrumentation. Conventional metrics track output volume (features shipped, documents produced) and obvious failures (bugs reported, customer complaints). They don't track the gap between what shipped and what could have shipped if evaluations had been performed with fully resourced judgment. Proxies exist: defect rates per thousand lines of AI-generated code correlate with the number of context switches the approving engineer performed that day; strategic pivot frequency correlates with the residue load of the executives who made the original strategic choices. These correlations are suggestive, not dispositive, but they point toward the construction of metrics that make quality debt visible before its consequences propagate beyond organizational capacity to address them.
The concept adapts technical debt — coined by Ward Cunningham in 1992 to describe the cumulative cost of expedient code — to the cognitive domain. Quality debt as a distinct category emerged from discussions among AI-governance researchers and organizational psychologists examining why AI productivity gains were accompanied by reports of increasing technical debt, strategic confusion, and the specific exhaustion that suggests something is being depleted faster than it's being replenished. The term crystallized around the recognition that residue-impaired judgment produces a form of debt that is harder to detect, harder to quantify, and harder to repay than traditional technical debt, because the moment of debt incursion — the evaluation performed under cognitive load — looks indistinguishable from competent performance.
Invisibly incurred. Unlike technical debt, which the builder often knows she is creating, quality debt arises from impaired judgment the builder cannot detect — her evaluation feels competent while being measurably degraded.
Propagates through networks. Subtle flaws approved by residue-impaired evaluators become inputs for downstream work, where further residue-impaired evaluations fail to catch them, producing compounding error through dependency chains.
Manifests as drift. The organizational symptom is gradual divergence between intended and actual behavior — attributed to complexity but substantially driven by accumulated micro-errors from degraded evaluative judgment.
Unmeasured by standard metrics. Productivity dashboards capture output volume and obvious failures but not the gap between adequate outputs that shipped and excellent outputs that fully resourced judgment would have demanded.
Repayment is expensive. Addressing quality debt requires not just fixing individual flaws but rebuilding the institutional conditions — workflow design, recovery protection, reduced switching — that prevent its further accumulation.