In a well-automated system, exceptions are rare. This is the system's design goal. But the rarity produces a specific pathology: the operator encounters anomalies so infrequently that she develops no embodied sense of how they feel, no pattern library of failure modes, no practiced response to surprise. When the rare event finally arrives — and in any system operating long enough, it eventually does — the operator must respond to a situation she has never personally encountered, using skills she has rarely exercised, under time pressure that forecloses careful deliberation. The rare event problem is not solved by making events rarer. Making events rarer makes the problem worse.
The problem is epistemological as well as practical. Expertise in complex systems is built through encounter with the full distribution of situations the system produces — including the rare, the anomalous, the edge cases that reveal how the system actually works under stress. Automation truncates this distribution. The operator sees only the normal cases, because the abnormal ones trigger automation's exception-handling routines. Over time, her mental model of the system becomes a model of its normal operation, increasingly poorly matched to its actual behavior under failure.
Bainbridge saw this play out in chemical process control, where senior operators could recognize a developing fault from subtle pressure oscillations that junior operators missed entirely. The senior operators had built their pattern library during an earlier era when they had manually controlled the process and encountered every type of disturbance. Junior operators, trained in fully automated systems, had never seen the disturbances directly — only the alarms that announced them after the fact. When a fault occurred outside the alarm system's coverage, the juniors were flying blind.
The AI version of the problem is sharper still. The apprenticeship problem in contemporary software development is the rare event problem in its most visible form: junior developers using AI coding assistants encounter almost no debugging, no wrestling with unfamiliar error messages, no slow accumulation of the pattern recognition that senior engineers built through thousands of hours of struggle. When the AI fails — and its failures are increasingly subtle — the next generation of developers will have built no library of how failure feels. Joel Spolsky's law of leaky abstractions describes the same dynamic from the builder's perspective: every non-trivial abstraction leaks, so the day eventually comes when the developer needs precisely the low-level understanding the abstraction let her skip acquiring.
The rare event problem interacts dangerously with what Gary Klein calls recognition-primed decision-making. Experts do not deliberate through options under time pressure; they recognize situations and retrieve responses that worked in similar situations before. When the situation has never been encountered, recognition fails, and the expert is left with no expertise to deploy. The rare event problem is, in this sense, the structural cause of what Perrow called the operator's dilemma.
Bainbridge derived the concept from her observations of nuclear and chemical plant incidents in the 1970s, where post-hoc analysis consistently revealed that operators had been asked to respond to situations outside their direct experience. The formalization in her 1983 paper gave the phenomenon a name that has since been adopted across safety science, aviation human factors, and — now — AI safety research.
Rarity is the cost of reliability. The more reliable the system, the rarer the exceptions — and the less opportunity the operator has to build the skills required to handle them.
Expertise depends on encounter. The pattern library on which expert response depends is built through direct contact with the full distribution of situations, including the anomalous ones automation shields operators from seeing.
Simulation is insufficient. Rare events cannot be fully rehearsed in training, because the most dangerous among them are precisely the ones no one anticipated — the operator must respond to a situation that was not in the simulation library.
The next generation loses most. Senior operators built expertise before automation; their juniors never get the chance, and the erosion of collective expertise is silent until it matters.
Advocates of high-reliability organizations argue that structured anomaly-exposure programs — deliberate introduction of controlled failures, cross-training, mandatory manual-mode practice — can partially substitute for naturalistic encounter with rare events. Critics observe that such programs require institutional commitment that economic pressure systematically erodes.