CONCEPT

Latent Failures (Vaughan Reading)

Errors, weaknesses, and vulnerabilities embedded in a system but not yet manifested as visible problems. They lie dormant until the specific conditions that trigger them arise, and in AI-augmented workflows they accumulate invisibly because reduced review and the comprehension gap prevent their detection.

The concept of latent failures, developed most fully by James Reason and extended by Vaughan's framework, names the class of errors that exist within a system before any visible failure occurs. The errors are real; the system functions adequately despite them; the conditions that would reveal them have not yet been encountered. In AI-augmented work, latent failures accumulate through the specific mechanisms Vaughan's framework identifies: reduced review that fails to detect surface anomalies, opacity that hides reasoning-level errors, comprehension gaps that leave practitioners unable to evaluate substance, and production pressure that rewards speed over thoroughness.

In the AI Story


Latent failures are the structural counterpart of normalized deviance at the technical level. Where normalized deviance describes how standards drift, latent failures describe what accumulates in the gap between drifted standards and the conditions they were designed to meet. The O-ring erosion on flights two through twenty-four was a latent failure — real damage to the sealing mechanism, visible in the engineering reports, but not producing a catastrophic outcome under the conditions those flights encountered.

In AI-augmented systems, latent failures take forms that include: subtle logical errors in AI-generated code that functional tests do not catch but that would manifest under load conditions the tests do not simulate; dependency relationships the deploying engineer never understood and cannot diagnose; security vulnerabilities introduced through patterns the AI learned from training data that contained similar vulnerabilities; and correlation errors in AI-generated financial models that produce negligible distortion under normal market conditions and amplify catastrophically under stress.
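A minimal sketch of the first of these forms, in Python and entirely hypothetical: a small helper of the kind an AI assistant might plausibly produce, whose functional test passes but whose quadratic cost is a latent failure that only production-scale load would reveal.

    # Hypothetical sketch: a latent failure that functional tests do not reveal.
    # The function is correct for the small inputs the test exercises, but its
    # O(n^2) membership check only manifests as a failure under production load.

    def dedupe_events(events):
        """Return events with duplicates removed, preserving order."""
        seen = []                      # list membership is O(n) per lookup
        unique = []
        for event in events:
            if event not in seen:      # latent: quadratic cost, invisible at test scale
                seen.append(event)
                unique.append(event)
        return unique

    def test_dedupe_events():
        # The functional test passes: correctness is verified, cost is not.
        assert dedupe_events(["a", "b", "a", "c"]) == ["a", "b", "c"]

    # Under millions of events the same code times out or starves the service,
    # a triggering condition the test suite never simulated.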

The detection of latent failures requires precisely the capabilities that AI-era normalization has eroded: deep review, structural comprehension, independent evaluation, and the accumulated tacit knowledge that allows experienced practitioners to sense that something is wrong even when they cannot articulate what. The erosion of these capabilities does not increase the rate at which latent failures are introduced; it decreases the rate at which they are detected, producing a widening gap between the failures present in the system and the failures the system's oversight has identified.

Vaughan's framework adds to the latent-failure concept the structural observation that the accumulation is invisible to the metrics institutions typically track. Organizations measure outputs, not the distance between outputs and the boundary of failure. The latent failures accumulate in that invisible distance, and the organization's confidence in its systems is calibrated to the frequency of manifested failures (zero, or negligible) rather than to the margin between current operation and the conditions that would reveal what has accumulated.
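A small stock-and-flow sketch, offered as an illustration rather than anything drawn from Reason's or Vaughan's texts, makes the arithmetic of the previous two paragraphs concrete: if latent failures are introduced at a roughly constant rate while the detection rate erodes, the undetected stock grows in exactly the margin the dashboards do not measure.

    # Illustrative sketch (assumed, not from the sources): constant introduction
    # of latent failures, eroding detection. Dashboards see only the "detected"
    # column; the latent stock grows in the invisible margin.

    def simulate(introduced_per_period=10, detection_rate=0.8,
                 erosion_per_period=0.05, periods=12):
        undetected = 0
        history = []
        for t in range(periods):
            undetected += introduced_per_period
            detected = int(undetected * max(detection_rate - erosion_per_period * t, 0.0))
            undetected -= detected
            history.append((t, detected, undetected))
        return history

    for period, detected, undetected in simulate():
        print(f"period {period:2d}: detected={detected:3d}  latent stock={undetected:3d}")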

Origin

The concept was developed by James Reason in Human Error (1990) and extended through his work on the Swiss cheese model of accident causation. Vaughan's framework contributed the specific organizational mechanisms — normalized deviance, structural secrecy — by which latent failures accumulate and remain undetected within otherwise well-functioning institutions.

Key Ideas

Real but dormant. Latent failures are actual errors that have not yet manifested as visible problems because the triggering conditions have not arrived.

Detection versus introduction. AI-era erosion of oversight does not primarily increase the rate at which latent failures are introduced; it decreases the rate at which existing ones are detected.

Invisible to standard metrics. Organizations measure manifested failures, not the accumulating margin between operation and the boundary of failure.

Proportional to the gap. When the trigger arrives, the scale of the failure is proportional to what has accumulated: to the distance between current practice and the standards that would have caught it.

Reason's Swiss cheese model. Latent failures are holes in successive layers of defense; the catastrophic event occurs when the holes align and a hazard's trajectory passes through every layer (a toy simulation below illustrates the alignment).
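A toy simulation of the Swiss cheese model, an illustrative sketch rather than anything from Reason's text: each defensive layer has some probability of a hole for a given hazard, an accident requires holes to line up across every layer, and enlarging the holes in the review and comprehension layers raises the accident rate sharply even when nothing else changes.

    import random

    # Toy Swiss cheese model (assumed parameters, not drawn from the sources).
    # A hazard becomes an accident only if it finds a hole in every layer.

    def hazard_penetrates(hole_probabilities, rng):
        """Return True if the hazard passes through a hole in every layer."""
        return all(rng.random() < p for p in hole_probabilities)

    def accident_rate(hole_probabilities, trials=100_000, seed=0):
        rng = random.Random(seed)
        hits = sum(hazard_penetrates(hole_probabilities, rng) for _ in range(trials))
        return hits / trials

    # Baseline: deep review, comprehension, and testing as three modest layers.
    baseline = [0.05, 0.10, 0.08]
    # Eroded oversight: the same hazards arrive, but the detection layers
    # have much larger holes.
    eroded   = [0.40, 0.50, 0.08]

    print(f"baseline accident rate: {accident_rate(baseline):.5f}")
    print(f"eroded accident rate:   {accident_rate(eroded):.5f}")

The independence of the layers is the sketch's strongest assumption; the organizational mechanisms the article describes, normalized deviance and structural secrecy, tend to correlate the holes across layers.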


Further reading

  1. James Reason, Human Error (Cambridge University Press, 1990)
  2. James Reason, Managing the Risks of Organizational Accidents (Ashgate, 1997)
  3. Diane Vaughan, The Challenger Launch Decision (University of Chicago Press, 1996)