The equation follows directly from Vaughan's four-phase mechanism. Each phase widens the gap between the formal standard and the practiced standard by a small, rational increment. The gap is invisible to the metrics institutions track because outputs remain competent under normal conditions. The accumulated width determines the severity of the eventual failure, because failure occurs at the moment an extraordinary condition exceeds the range the practiced standard can accommodate.
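One way to render that proportionality claim symbolically is sketched below. The notation is introduced here for illustration and is not Vaughan's; her argument is qualitative.

```latex
% A minimal symbolic sketch; delta_i, G, S, c, and T are labels we
% introduce for illustration, not notation from Vaughan's work.
% Each of the four phases widens the gap by an increment delta_i:
G = \sum_{i=1}^{4} \delta_i
% Failure occurs when an extraordinary condition c exceeds the
% tolerance T of the practiced standard, and severity then tracks
% the accumulated width rather than the size of the trigger:
S \propto G \quad \text{whenever } c > T_{\text{practiced}}
```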
The AI transition's specific contribution to the equation is acceleration. The Challenger drift unfolded over twenty-four flights spanning five years. The AI-augmented organization's drift can unfold over weeks, because the production pressure that compresses review depth operates without pause and the tool's consistent competence supplies constant reinforcement. Within months, the gap in an AI-augmented environment may reach a width that took NASA years to accumulate.
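To make the acceleration claim concrete, treat the gap as accumulating at some average rate. The symbols below are illustrative assumptions of this sketch, not quantities drawn from the source.

```latex
% Illustrative only: r, t, G^*, and k are labels introduced here;
% none of this notation comes from Vaughan or the sources cited.
% Assume the gap accumulates at an average rate r, so G(t) = r t,
% and failure requires reaching some critical width G^*:
t_{\text{fail}} = \frac{G^{*}}{r}
% If AI-augmented drift runs k times faster, r_{\text{AI}} = k \, r_{\text{legacy}},
% the same critical width is reached in a k-th of the time:
t_{\text{fail}}^{\text{AI}} = \frac{t_{\text{fail}}^{\text{legacy}}}{k}
```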
The four structural conditions that coexist in AI-augmented organizations (comprehension gap, review deficit, redundancy gap, opacity barrier) together produce the prerequisites for a Vaughan-type failure. The specific trigger cannot be predicted; the structural vulnerability can. Whether the trigger arrives as a cybersecurity incident, a medical event, a financial cascade, or something else matters less than the prior question of whether the structures designed to detect and absorb it will still be in place when it arrives.
The equation offers no comfort in the observation that most AI-augmented operations succeed. Twenty-four successful flights were not evidence of safety; they were evidence that failure conditions had not yet been encountered. The twenty-fifth encountered them. The observation that most AI deployments succeed is, by the same logic, not evidence that accumulated deviance is benign; it is evidence that the trigger event has not yet arrived.
The equation is implicit throughout Vaughan's Challenger research and was formalized through her theoretical extensions in Dead Reckoning and her work on organizational deviance. Its application to AI draws on Charles Perrow's normal accident theory, James Reason's latent failure framework, and contemporary cybersecurity research on AI deployment.
Proportionality to the gap. Failure severity tracks the width of the gap between formal and practiced standards, not the magnitude of the triggering event.
Acceleration in AI. The tool's speed and competence compress the normalization timeline from years to weeks.
Four conditions coexist. Comprehension gap, review deficit, redundancy gap, and opacity barrier together produce structural vulnerability.
Trigger-agnostic. The equation does not predict the specific trigger; it predicts that the severity of the outcome will be proportional to the accumulated gap when any adequate trigger arrives.
Negative evidence mistaken for safety. Successful operation under normal conditions is not evidence of safety; it is evidence that the failure conditions have not yet been encountered.