Vaughan's framework identifies the reasonable exception as the mechanism by which normalized deviance is actually constructed at the level of individual decisions. Each exception is bounded, justified by specific circumstances, and supported by a growing evidence base. The person making the exception can point to data, to precedent, to a rational cost-benefit analysis. The exception is not a failure of judgment; it is an exercise of judgment within an institutional environment where the burden of proof falls asymmetrically on those who wish to stop rather than those who wish to proceed. The AI transition has generated reasonable exceptions at unprecedented pace and scale, each defensible in isolation, cumulatively reshaping what institutions consider adequate oversight.
Much of the concept's power lies in explaining why the pattern resists conventional prevention. Telling a competent professional that her reasonable judgment is wrong, when the accumulated evidence supports her position and the immediate cost of stopping is visible while the risk of proceeding remains speculative, is an act of organizational heroism that institutional structures rarely reward.
In the AI context, the reasonable exception appears in several distinct categories. The review exception emerges when AI-generated output is produced faster than it can be reviewed at the original depth, creating structural incentives to reduce review scope. The comprehension exception arises when practitioners evaluate AI output functionally rather than structurally, because evaluating the logic requires expertise they do not possess. The team-size exception follows when productivity multipliers shift the economic calculus of team composition. The expertise exception enables practitioners to operate in adjacent domains with AI assistance, trading domain judgment for tool competence.
Each exception rests on the previous ones. The manager who reduces team size has already inherited the reduced review depth and the comprehension gap. The smaller team gives shallower review to code it already understands less deeply. The exceptions compound, and their accumulated weight produces a system functioning normally by standards the system has taught itself to apply.
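The compounding dynamic can be sketched as a toy simulation. Everything here is an illustrative assumption, not Vaughan's data: the function name, the 10% per-exception reduction in review depth, and the precedent counter are all invented to show the shape of the chain, in which each accepted exception both shrinks the operative standard and adds a successful precedent that lowers resistance to the next.

```python
# Toy model of the reasonable-exception chain. All names and numbers are
# illustrative assumptions; nothing here comes from Vaughan's case material.

def run_exception_chain(iterations, cut=0.9):
    """Simulate a chain of accepted exceptions.

    Each iteration applies one bounded, individually defensible exception:
    review depth shrinks by the factor `cut` (here, a modest 10% reduction),
    and the absence of an observed failure is recorded as one more precedent
    supporting the next exception.
    """
    depth = 1.0       # current review depth as a fraction of the original
    precedents = 0    # track record of exceptions that "worked"
    history = []
    for _ in range(iterations):
        depth *= cut      # the exception: a small, locally reasonable cut
        precedents += 1   # no failure observed, so it counts as evidence
        history.append((precedents, round(depth, 3)))
    return history

# Ten individually modest exceptions leave review depth at roughly a third
# of the original baseline:
print(run_exception_chain(10)[-1])  # → (10, 0.349)
```

The point of the sketch is that no single step looks alarming: each 10% cut is defensible against the then-current baseline, yet the chain ends far from where it began, and the growing precedent count is exactly the evidence base that makes reversing any one step harder.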
The concept connects directly to production pressure: the reasonable exception is rational precisely because the institutional environment rewards proceeding and penalizes stopping, and the rationality is structural rather than personal. The individual making the exception is not miscalibrating; the institution has calibrated her.
The concept is implicit throughout Vaughan's Challenger research and was formalized in her subsequent theoretical work on organizational deviance. The precision of the concept — the exception as atomic unit, the chain as accumulated structure — derives from Vaughan's reconstruction of the twenty-four pre-Challenger flights and the specific engineering decisions made at each.
Defensibility as the danger. The reasonable exception is dangerous precisely because it is defensible; exceptions that are indefensible can be addressed through conventional accountability mechanisms.
Asymmetric burden of proof. The party that wishes to proceed needs only to point to precedent; the party that wishes to stop must produce novel justifying evidence.
Compounding structure. Each exception rests on previous exceptions, producing a chain that no single correction can reverse.
Four categories in AI work. Review, comprehension, team size, and expertise — each generating exceptions that compound with the others.
Evidence grows with each iteration. The track record of successful exceptions provides empirical support for the next exception, making resistance progressively harder.