Designing for the exception inverts the dominant approach to automation. Most systems are optimized for the common case: the routine flight, the standard transaction, the typical query. The human is designed into the system as a backstop for whatever the automation cannot handle. Bainbridge showed that this architecture guarantees the backstop will fail, because the conditions under which it is asked to operate — degraded skill, surprise, time pressure, poor situation awareness — are exactly those in which human performance is worst. The alternative is to design the system so that the human's role in the exceptional case is actively protected: engagement is maintained during normal operation, skills are exercised rather than allowed to decay, situation awareness is preserved through active participation, and the transition from monitoring to acting is supported rather than punished. This is the principle behind high-reliability organizations, and it is the principle AI deployment systematically violates.
The contrast with prevailing AI deployment is stark. Most AI tools are designed to maximize output quality on the common case — the typical prompt, the standard code request, the ordinary question. The human's role in the rare case — when the AI hallucinates, when the context exceeds the model's training distribution, when the stakes require judgment the model cannot provide — is not designed at all. It is left to the user's discretion, exercised under whatever cognitive conditions the normal interface has produced: typically fluency-induced trust and monitoring-induced attention degradation.
Bainbridge's principle has four components. First, preserve active engagement: design the system so that the human is a participant in normal operation, not a monitor. Second, exercise skills deliberately: build routine tasks that maintain the manual and cognitive capabilities the human will need in exceptional cases. Third, support the transition: design the interface so that the human can rebuild situation awareness quickly when control returns, rather than being dropped into a novel situation cold. Fourth, accept the limits: recognize that some exceptions will still exceed human capacity, and design the rest of the system to contain the consequences.
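A minimal sketch may help fix the first two components. The policy object below is hypothetical (every name and threshold is invented for illustration, not drawn from any deployed system), but it shows what it means for engagement and skill exercise to be properties the system enforces rather than virtues the operator is assumed to have.

```python
from dataclasses import dataclass

@dataclass
class EngagementPolicy:
    """Hypothetical policy for Bainbridge's first two components:
    keep the operator an active participant, and exercise the skills
    needed for exceptions on a schedule rather than assuming they persist."""
    min_manual_fraction: float = 0.2     # assumed floor on hands-on operation
    max_hours_since_manual: float = 8.0  # assumed ceiling on pure monitoring

    def needs_manual_session(self, manual_hours: float, total_hours: float,
                             hours_since_manual: float) -> bool:
        # Force a hands-on session when either the overall share of manual
        # work or the recency of the last manual session falls out of bounds.
        if total_hours == 0:
            return True
        too_passive = (manual_hours / total_hours) < self.min_manual_fraction
        too_stale = hours_since_manual > self.max_hours_since_manual
        return too_passive or too_stale
```

Usage is the obvious check before the automation accepts another routine task: `EngagementPolicy().needs_manual_session(manual_hours=1, total_hours=10, hours_since_manual=12)` returns True, and the system would route the next task to the human instead. The deliberate inefficiency is the point: the routine task costs more now so that the exceptional one is survivable later.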
The principle has been partially adopted in aviation — mandatory hand-flying hours, periodic simulator drills, checklist discipline — and in high-reliability organizations such as nuclear plants and aircraft carrier flight decks. It has been almost entirely ignored in AI deployment, where the economic logic pushes in the opposite direction: maximize AI output, minimize human friction, accept the occasional catastrophic failure as a cost of operation. The Bainbridge framework suggests this is a false economy, paid for in the currency of eventual failures whose magnitude compounds as skill decay progresses.
The AI-era application of the principle is concrete. Joel Spolsky's concept of controlled friction — mandatory AI-free work hours to maintain diagnostic capability — is designing for the exception. AI practice frameworks that preserve mentoring, code review with explicit requirement to read rather than skim, and organizational norms that reward depth over speed are all instantiations. None of them are required by the tools themselves; all of them require institutional commitment to build.
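What "read rather than skim" looks like when it is enforced by tooling rather than stated as a norm can be sketched in a few lines. The signal and threshold below are assumptions chosen for illustration; a real system would need richer evidence of reading, such as scroll coverage or inline comments.

```python
MIN_SECONDS_PER_LINE = 2.0  # assumed floor for a genuine read, not a real standard

def review_meets_depth_norm(seconds_spent: float, lines_changed: int) -> bool:
    """Crude pre-merge check: reject reviews whose pace implies skimming.

    A hypothetical merge gate would call this with the reviewer's recorded
    time on the diff and the diff's size, and block approval when it fails.
    """
    if lines_changed == 0:
        return True  # nothing to read
    return (seconds_spent / lines_changed) >= MIN_SECONDS_PER_LINE
```

The proxy is weak on purpose. The design point is not that seconds per line measures understanding, but that the norm of depth over speed is checked by the system instead of depending on whatever attention the fluent interface has left intact.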
Bainbridge articulated the principle most explicitly in her 1987 follow-up work and in a series of human-factors conference papers through the 1990s. The principle has become canonical in safety-critical domains but has not penetrated the design of AI tools for cognitive work — an asymmetry this volume treats as the central governance failure of the AI transition.
Normal operation must preserve exception capability. The conditions required for successful human intervention in rare cases are the primary constraint on how the system should operate in common cases — not the other way around.
Skills must be exercised, not assumed. Maintaining operator capability requires designing routine tasks that exercise the skills needed for exceptions, even when those tasks are less efficient than fully automated alternatives.
The transition is the design challenge. Most system failures happen during the moment of manual reversion; good design supports this transition rather than leaving it to heroic individual effort, as the sketch following these principles illustrates.
Economic logic resists the principle. Designing for the exception costs efficiency during the normal case, and organizations under cost pressure systematically undermine the principle until the first catastrophic failure forces them to rebuild it.
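The transition support named above can also be sketched. The interface below is entirely hypothetical: it assumes an automation component that can describe its own state, recent actions, and reason for disengaging, which is itself a design requirement most current systems do not meet.

```python
from dataclasses import dataclass

@dataclass
class HandoffBriefing:
    """What the operator sees at the moment of manual reversion."""
    current_state: str            # what the automation believes is happening
    recent_actions: list[str]     # what it did in the recent past
    reason_for_handoff: str       # why it is giving up control
    immediate_options: list[str]  # actions judged safe to take right now

def hand_off(automation) -> HandoffBriefing:
    # Assemble situation awareness *before* transferring control, so the
    # human starts from the automation's picture rather than from zero.
    # All four methods are invented names for capabilities the automation
    # would have to be built to provide.
    return HandoffBriefing(
        current_state=automation.describe_state(),
        recent_actions=automation.action_log(minutes=10),
        reason_for_handoff=automation.disengage_reason(),
        immediate_options=automation.safe_actions(),
    )
```

The briefing does not make the exception easy; it makes the first seconds of manual control start from the automation's picture of the situation rather than from nothing, which is where the heroic-effort failures concentrate.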
Critics argue that designing for the exception is expensive and that most systems do not require it — the cost of rare failures is lower than the cost of always preserving exception capability. Defenders respond that this calculation systematically underestimates tail risk and externalizes the cost of failure onto users, workers, or the public.
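The disagreement is partly arithmetic, and a toy calculation shows where it hinges. Every number below is invented; the only point is that the critics' comparison turns on the tail-probability estimate, which is precisely the quantity hardest to know.

```python
# Toy expected-cost comparison. All figures are invented for illustration.
upkeep_per_year = 1_000_000   # annual cost of preserving exception capability
failure_cost = 200_000_000    # cost of one catastrophic failure

for p_failure in (1e-4, 1e-3, 1e-2):  # assumed annual failure probabilities
    expected_loss = p_failure * failure_cost
    verdict = "skip upkeep" if expected_loss < upkeep_per_year else "pay upkeep"
    print(f"p={p_failure:.0e}: expected annual loss {expected_loss:>11,.0f} -> {verdict}")
```

An order-of-magnitude error in the failure probability flips the verdict, and the externalized costs the defenders point to never enter the calculation at all.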