The Recognition-Primed Decision model is Gary Klein's foundational contribution to cognitive science, developed through thousands of critical incident interviews with firefighters, nurses, and military commanders. The model describes a two-phase process: first, rapid pattern recognition in which current conditions activate stored cases from the expert's long-term memory, along with an associated action script; second, mental simulation in which the expert runs the recognized action forward in time, watching for moments when the projected scenario breaks down. Experts go with their first recognized option more than eighty percent of the time, modifying or cycling to new patterns only when simulation reveals misfit. The model overturned decades of classical decision theory by demonstrating that expert cognition operates through satisficing within recognition rather than optimization across alternatives — and that this compressed analysis is both faster and more reliable than formal comparison under field conditions.
There is a parallel reading that begins not with the elegance of expert cognition but with its material dependencies. Klein's firefighters didn't simply accumulate pattern libraries through experience — they operated within specific institutional architectures that enabled decades of stable practice. The fire department provided continuous employment, structured training, predictable equipment, and crucially, the economic security to remain within a single domain long enough to develop expertise. This infrastructure is precisely what is being dismantled in the transition to AI-mediated work. The gig economy, constant reskilling demands, and algorithmic management create conditions where RPD cannot develop because workers never stay in one context long enough to build the requisite pattern library.
The model's emphasis on embodied experience also conceals a deeper problem: expertise formation requires not just time but particular kinds of time — protected apprenticeships, gradual assumption of responsibility, permission to learn from non-catastrophic failure. These are luxuries of stable institutional contexts that assumed human expertise would remain valuable. As AI systems handle routine pattern-matching, humans are pushed toward edge cases and novel situations — precisely where RPD fails because no pattern library exists. The firefighter who recognizes building collapse patterns developed that recognition through hundreds of similar buildings; the nurse who spots sepsis early saw dozens of cases progress. But when AI handles the routine cases, humans encounter only the anomalies, creating a perverse dynamic where we need RPD most exactly where its preconditions cannot be met. The model thus becomes not a description of enduring human advantage but a monument to a vanishing form of work organization.
The RPD model emerged from Klein's 1984 interviews with fire commanders in Cleveland, Ohio, who could not explain how they decided what to do at a burning building. The textbook answer — generate options, compare them, select the optimum — did not match what the commanders actually did. They arrived, read the situation, and knew. Klein's four-decade research program built a science around that knowing, demonstrating that it was neither mystical nor irrational but the compressed output of extensive experience operating on rich environmental cues.
The model rests on two components that must be analyzed together. Pattern recognition provides the match between current conditions and stored cases. Mental simulation provides the evaluative rehearsal in which the recognized action is projected forward to check for breakdowns. Neither component alone is sufficient: recognition without simulation produces thoughtless automation; simulation without recognition produces deliberation too slow for the field conditions experts face.
Klein insists that RPD is not the opposite of analysis but analysis internalized through practice until it operates faster than conscious thought — rapid, parallel pattern-matching drawing on a library of thousands of cases. This distinction matters enormously for the AI era. Large language models approximate the pattern-matching component at a different scale and through a different mechanism, but without the simulation-based evaluation or the anomaly detection that makes human expertise reliable in situations outside the training distribution.
The model's enduring influence extends far beyond firefighting. It shapes military decision-making training, medical emergency protocols, and, increasingly, the design of human-AI interaction systems. Klein's work with DARPA's Explainable AI program applied RPD's cognitive architecture to the question of what users need in order to oversee AI systems effectively — a question that formal decision theory had been systematically unable to address.
Klein developed the model through the Critical Decision Method, a structured interview technique in which experienced practitioners were walked backward through specific challenging incidents to surface the cues they attended to, the patterns they recognized, and the simulations they ran. The method revealed a cognitive architecture that laboratory studies had missed because they had eliminated the conditions — time pressure, ambiguity, high stakes — under which expertise actually operates.
The 1998 publication of Sources of Power brought RPD to wider audiences, and Klein's 2009 adversarial collaboration with Daniel Kahneman established the conditions under which expert intuition can be trusted — a synthesis that anchored the model in the broader behavioral science literature while preserving its distinct claims about field cognition.
Recognition over comparison. Experts do not evaluate multiple options; they recognize the situation and implement the first workable response.
Two-phase architecture. Pattern recognition provides the candidate action; mental simulation provides the evaluative rehearsal before commitment.
Satisficing within recognition. The goal is finding an option that works, not searching for the option that is best.
Experience-dependent. The model only operates reliably when the practitioner has accumulated a rich pattern library through direct engagement with the domain.
Anomaly-triggered deliberation. When simulation reveals misfit, the expert shifts to active sensemaking rather than continuing with the recognized pattern.
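The control flow these principles describe can be made concrete in a short sketch. This is purely illustrative: RPD is a cognitive description, not an algorithm, and every name here (`Pattern`, `matches`, `simulate`, `rpd_decide`) is a hypothetical stand-in. The sketch only shows the shape of the cycle: recognize first, rehearse the recognized action forward, commit to the first workable option, and fall back to deliberate sensemaking when nothing fits.

```python
# Illustrative sketch only — all names are hypothetical, not Klein's terminology.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class Pattern:
    """A stored case in the expert's pattern library."""
    name: str
    matches: Callable[[dict], bool]   # do current cues activate this case?
    action: str                       # the associated action script
    simulate: Callable[[dict], bool]  # mental rehearsal: does the action hold up?


def rpd_decide(cues: dict, library: Iterable[Pattern]) -> Optional[str]:
    """Return the first recognized action that survives mental simulation."""
    for pattern in library:               # recognition, not option comparison
        if pattern.matches(cues):
            if pattern.simulate(cues):    # project forward; watch for breakdown
                return pattern.action     # satisfice: first workable option wins
            # simulation revealed misfit: cycle to the next recognized pattern
    return None                           # no pattern fits: deliberate sensemaking


# Toy usage: a commander's library with a single stored case.
library = [
    Pattern(
        name="pressurized-smoke fire",
        matches=lambda c: c.get("smoke") == "dark, pressurized",
        action="ventilate roof before interior attack",
        # rehearsal breaks down if the roof is unsound
        simulate=lambda c: c.get("roof_integrity", True),
    )
]

print(rpd_decide({"smoke": "dark, pressurized", "roof_integrity": True}, library))
# prints: ventilate roof before interior attack
```

Note that the expert never ranks alternatives: the loop returns as soon as one recognized action survives rehearsal, and deliberation (`None`) is reached only when recognition itself fails.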
Classical decision theorists initially resisted the RPD model as abandoning the normative standards of rational choice. Klein's response — that the normative standards were derived from laboratory conditions that do not obtain in the field — became the founding argument of the naturalistic decision-making movement. The more recent debate concerns whether AI systems that perform pattern-matching at scale are implementing a version of RPD or merely its surface features. Klein's position is that the simulation and anomaly-detection phases, which require embodied engagement with the domain, cannot be replicated by systems trained on statistical regularities alone.
The right frame here depends entirely on which temporal horizon we're examining. For understanding how current experts operate — Klein's firefighters, surgeons, pilots still flying — the RPD model captures something fundamental. These practitioners do recognize situations holistically, do run mental simulations, do satisfice rather than optimize. The contrarian's point about institutional infrastructure doesn't invalidate the cognitive architecture; it simply notes that this architecture requires specific conditions to develop.
Where the material critique gains force is in projecting forward. If expertise development requires thousands of hours of relatively stable practice, and if AI increasingly handles routine cases while humans manage exceptions, then RPD becomes less a model of human cognition than a historical artifact of a particular era of work organization. The question isn't whether Klein correctly described expert decision-making but whether the conditions for such expertise can persist. Here the contrarian reading seems prescient — the gig economy and algorithmic management do fragment the continuity required for pattern library formation.
The synthesis requires recognizing RPD as both cognitively accurate and institutionally contingent. Perhaps the model's real value in the AI era isn't as a description of how all experts decide, but as a specification of what we lose when we eliminate the conditions for expertise development. The two-phase architecture — recognition plus simulation — might better be understood as a three-phase system where the third, typically invisible phase is the institutional substrate that enables the first two. This reframing suggests that preserving human decision capability requires not just understanding cognition but actively maintaining the organizational forms that allow pattern libraries to develop.