The concept, developed in Diane Vaughan's The Challenger Launch Decision (1996), names the structural mechanism through which engineering institutions slide toward catastrophic failure not through dramatic bad decisions but through the accumulation of small accommodations, each of which, taken alone, appeared reasonable. O-ring erosion on the Space Shuttle began as an unexpected anomaly. Each subsequent launch in which erosion occurred without catastrophic failure constituted, in institutional terms, evidence that erosion was acceptable. Each acceptance narrowed the margin. By January 1986, the margin was zero, and the institution had arrived there through a sequence of locally rational decisions that, aggregated, produced an irrational outcome. Petroski drew on Vaughan's framework to illustrate the dynamics of the complacency cycle at finer temporal resolution: the cycle operates not only across generations but within the career of any given engineer, and normalization of deviance is the within-career mechanism by which margin erodes.
Vaughan's framework emerged from six years of archival research into the Challenger disaster. Her central finding — that the managers and engineers who approved the January 28 launch were operating within institutional norms that had gradually shifted to accommodate anomalies previously considered unacceptable — displaced the earlier narrative of moral failure or willful negligence. The people involved were not unusually careless. They were operating inside a system whose standards had drifted, through a sequence of incremental accommodations, to a point where what would have been unacceptable a decade earlier had become routine.
The framework has specific relevance to AI-augmented engineering because AI accelerates every mechanism Vaughan identified. Observation of anomalies: AI systems, processing vastly more data than human engineers can, surface many more anomalies per unit time. Assessment of anomalies against standards: AI systems evaluate anomalies against codified standards at machine speed. Normalization: when AI systems consistently rate anomalies as acceptable because they fall within codified ranges, the institutional perception that those anomalies are acceptable is reinforced faster than human judgment can intervene. Baseline shift: the accumulated record of AI-assessed acceptable anomalies shifts the institutional baseline for what counts as normal, faster than institutional review cycles can track.
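The dynamic can be made concrete with a toy model. The sketch below is a minimal illustration, not anything drawn from Vaughan or Petroski, and every parameter in it is an assumption chosen for clarity: anomalies are assessed against the institution's current working norm rather than the original specification, and an exceedance that causes no failure is reclassified as precedent.

```python
import random

# A minimal simulation sketch (illustrative assumptions throughout) of the
# four mechanisms: observation, assessment against a standard, normalization,
# and baseline shift.

random.seed(42)

ORIGINAL_SPEC = 1.0   # assumed design limit for anomaly severity
norm = ORIGINAL_SPEC  # the institution's working acceptance threshold

for launch in range(100):
    # Observation: anomaly severity clusters around current practice,
    # occasionally exceeding the working norm by a small amount.
    severity = random.uniform(0.0, 1.1 * norm)

    # Assessment: the anomaly is judged against the drifted norm,
    # not against the original spec.
    if severity > norm:
        # Normalization: the exceedance caused no failure, so it becomes
        # precedent. Baseline shift: the new worst case becomes the norm.
        norm = severity

print(f"original spec: {ORIGINAL_SPEC:.2f}")
print(f"working norm after 100 launches: {norm:.2f}")
# Typically the working norm ends noticeably above the original spec,
# though no single step moved it by more than about 10 percent.
```

The assessment step is what drives the drift: because each anomaly is judged against the drifted norm rather than the original specification, every individual accommodation looks small while the aggregate is unbounded.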
The critical point in Vaughan's framework, often missed in popular summaries, is that normalization of deviance is not a failure of individual judgment but a structural feature of complex institutions operating under uncertainty. No single decision in the Challenger sequence would have appeared obviously wrong to a reasonable observer. Each decision was grounded in evidence, precedent, and institutional process. The catastrophe emerged from the aggregation of locally rational decisions. This structural feature means the defense against normalization of deviance must itself be structural: institutional practices that force periodic re-examination of accumulated accommodations, external reviewers who bring perspectives outside the institutional frame, and cultural norms that preserve space for the kind of felt judgment that cannot be quantified.
Petroski's framework adds a specific diagnosis of how AI affects these structural defenses. External reviewers require frameworks distinct from the ones the institution is using; when AI tools are increasingly shared across institutions, the external reviewer's frame may not be genuinely external. Cultural norms preserving space for felt judgment require institutional recognition that judgment matters; when AI outputs are uniformly confident and quantitative, the institutional preference for quantitative evidence is reinforced and felt judgment is further marginalized. The Vaughan framework and the Petroski framework together suggest that the AI era creates conditions under which normalization of deviance can accelerate while the institutional defenses against it are simultaneously weakened.
The concept was developed in Diane Vaughan's The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (1996), University of Chicago Press. Vaughan's six-year archival study produced one of the most influential sociological analyses of institutional failure of the late twentieth century. The framework has been extended by subsequent scholars — Scott Snook's Friendly Fire (2000) applied it to military organizations; various analyses have applied it to healthcare, financial institutions, and engineering firms. Petroski incorporated the framework into his later work, treating it as the fine-grained mechanism through which the complacency cycle operates within institutions.
Normalization is incremental and invisible. The process occurs through small accommodations, each of which appears reasonable in its context. No single decision is recognizably wrong. The wrongness emerges from aggregation.
Institutional norms drift through accommodation. What is unacceptable at one time becomes acceptable later through sequences of small accommodations, each of which shifts the baseline against which the next accommodation is evaluated.
AI accelerates every phase of the process. Observation, assessment, normalization, and baseline shift all operate faster when AI mediates the engineering workflow. The institutional review cycles that might catch the drift operate at human speed.
The defense is structural, not individual. Because normalization operates through locally rational decisions, the defense cannot depend on individual engineers recognizing bad decisions. It must depend on institutional practices that force periodic re-examination, preserve outside perspectives, and maintain cultural space for judgment.
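A toy extension of the earlier sketch illustrates both of these points. The parameters are assumptions, not measurements: the number of human review cycles is held constant while the number of AI-mediated assessments between reviews grows, and the review itself is modeled as re-anchoring the working norm to the original specification rather than to accumulated precedent.

```python
import random

# A minimal sketch (illustrative assumptions throughout) of structural
# defense against drift: periodic re-examination that evaluates the working
# norm against the ORIGINAL spec, not against accumulated accommodations.

random.seed(7)

ORIGINAL_SPEC = 1.0

def run(assessments_per_review: int, reviews: int) -> float:
    """Return the worst drift factor reached between re-examinations.

    assessments_per_review models workflow speed: an AI-mediated pipeline
    performs many more assessments per human review cycle.
    """
    norm = ORIGINAL_SPEC
    worst_drift = 1.0
    for _ in range(reviews):
        for _ in range(assessments_per_review):
            severity = random.uniform(0.0, 1.1 * norm)
            if severity > norm:
                norm = severity  # normalization + baseline shift
        worst_drift = max(worst_drift, norm / ORIGINAL_SPEC)
        # Structural defense: re-anchor to the original spec, refusing to
        # treat accumulated accommodations as the baseline.
        norm = ORIGINAL_SPEC
    return worst_drift

print(f"human-speed workflow (10 assessments/review): "
      f"peak drift {run(10, 20):.2f}x spec")
print(f"AI-accelerated workflow (200 assessments/review): "
      f"peak drift {run(200, 20):.2f}x spec")
```

The re-anchoring line is what makes the defense structural rather than individual, and the output illustrates the asymmetry: with the same number of human reviews, the faster assessment loop drifts much further between them.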
The framework has been criticized as excessively structural, underweighting the role of individual moral agency in institutional failure. Defenders argue that the criticism misreads the framework: Vaughan did not deny that individuals could have intervened in the Challenger sequence, only that the institutional conditions made intervention extraordinarily difficult for people operating within those conditions. The contemporary debate extends to AI: whether the normalization of AI-generated outputs as acceptable can be prevented by better individual training of reviewers, or whether it requires structural changes in how AI systems are integrated into engineering workflows. The combined Petroski-Vaughan framework suggests the latter: individual training is necessary but not sufficient, because the dynamics of normalization will overwhelm individual vigilance unless the structural conditions are redesigned.