The RPD model emerged from Klein's 1984 interviews with fire commanders in Cleveland, Ohio, who could not explain how they decided what to do at a burning building. The textbook answer — generate options, compare them, select the optimum — did not match what the commanders actually did. They arrived, read the situation, and knew. Klein's four-decade research program built a science around that knowing, demonstrating that it was neither mystical nor irrational but the compressed output of extensive experience operating on rich environmental cues.
The model rests on two components that must be analyzed together. Pattern recognition provides the match between current conditions and stored cases. Mental simulation provides the evaluative rehearsal in which the recognized action is projected forward to check for breakdowns. Neither component alone is sufficient: recognition without simulation produces thoughtless automation; simulation without recognition produces deliberation too slow for the field conditions experts face.
Klein insists that RPD is not the opposite of analysis but analysis internalized through practice until it operates faster than conscious thought — rapid, parallel pattern-matching drawing on a library of thousands of cases. This distinction matters enormously for the AI era. Large language models approximate the pattern-matching component at a different scale and through a different mechanism, but without the simulation-based evaluation or the anomaly detection that makes human expertise reliable in situations outside the training distribution.
The model's enduring influence extends far beyond firefighting. It shapes military decision-making training, medical emergency protocols, and, increasingly, the design of human-AI interaction systems. Klein's work with DARPA's Explainable AI program applied RPD's cognitive architecture to the question of what users need in order to oversee AI systems effectively — a question that formal decision theory had proved unable to address.
Klein developed the model through the Critical Decision Method, a structured interview technique in which experienced practitioners were walked backward through specific challenging incidents to surface the cues they attended to, the patterns they recognized, and the simulations they ran. The method revealed a cognitive architecture that laboratory studies had missed because they had eliminated the conditions — time pressure, ambiguity, high stakes — under which expertise actually operates.
The 1998 publication of Sources of Power brought RPD to wider audiences, and Klein's 2009 adversarial collaboration with Daniel Kahneman established the conditions under which expert intuition can be trusted — a synthesis that anchored the model in the broader behavioral science literature while preserving its distinct claims about field cognition.
Recognition over comparison. Experts do not evaluate multiple options; they recognize the situation and implement the first workable response.
Two-phase architecture. Pattern recognition provides the candidate action; mental simulation provides the evaluative rehearsal before commitment.
Satisficing within recognition. The goal is finding an option that works, not searching for the option that is best.
Experience-dependent. The model operates reliably only when the practitioner has accumulated a rich pattern library through direct engagement with the domain.
Anomaly-triggered deliberation. When simulation reveals misfit, the expert shifts to active sensemaking rather than continuing with the recognized pattern.
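The control flow these principles describe can be sketched in code. The following is an illustrative Python sketch, not Klein's formalism: the `Case` library, the cue sets, and the contradiction table standing in for mental simulation are all invented for the example. What it preserves is the architecture — recognition proposes a candidate, simulation vets it, the first workable option is taken (satisficing), and a simulation failure triggers a fallback rather than blind execution.

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Optional

@dataclass
class Case:
    """A stored pattern: the cues it matches and its typical response."""
    cues: FrozenSet[str]
    action: str

# Stand-in for mental simulation: an action "works" unless the current
# situation contains a cue known to contradict it. (Hypothetical data.)
CONTRADICTIONS = {("ventilate roof", "unstable roof")}

def mental_simulation(action: str, situation: FrozenSet[str]) -> bool:
    return not any((action, cue) in CONTRADICTIONS for cue in situation)

def rpd_decide(situation: FrozenSet[str], library: List[Case]) -> Optional[str]:
    """Return the first recognized action that survives simulation."""
    # Phase 1: recognition — try the most specific matches first
    # (largest overlap between stored cues and current situation).
    candidates = sorted(library, key=lambda c: len(c.cues & situation), reverse=True)
    for case in candidates:
        if case.cues <= situation:  # pattern fits the current cues
            # Phase 2: simulation — rehearse the action before committing.
            if mental_simulation(case.action, situation):
                return case.action  # satisficing: first workable option wins
            # Anomaly: simulation revealed a misfit; fall through to the
            # next recognized pattern instead of executing blindly.
    return None  # nothing recognized — deliberate analysis would be needed

library = [
    Case(frozenset({"smoke from eaves", "attic fire"}), "ventilate roof"),
    Case(frozenset({"attic fire"}), "interior attack"),
]
```

With this toy library, `rpd_decide(frozenset({"smoke from eaves", "attic fire"}), library)` returns `"ventilate roof"`; adding the cue `"unstable roof"` makes simulation reject that action and the fallback `"interior attack"` is returned instead — the anomaly-triggered shift in miniature. Note what the sketch cannot capture: in the human case, both the pattern library and the simulation are built from embodied experience, which is precisely the component Klein argues statistical systems lack.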
Classical decision theorists initially resisted the RPD model as abandoning the normative standards of rational choice. Klein's response — that the normative standards were derived from laboratory conditions that do not obtain in the field — became the founding argument of the naturalistic decision-making movement. The more recent debate concerns whether AI systems that perform pattern-matching at scale are implementing a version of RPD or merely its surface features. Klein's position is that the simulation and anomaly-detection phases, which require embodied engagement with the domain, cannot be replicated by systems trained on statistical regularities alone.