The Critical Decision Method (CDM) is a cognitive task analysis technique developed by Klein and colleagues in the 1980s that became the methodological engine of the Naturalistic Decision Making movement. The method involves walking experienced practitioners backward through specific non-routine incidents, using probe questions to surface the cues they attended to, the patterns they recognized, the options they considered, the mental simulations they ran, and the decisions they made. CDM is iterative — multiple passes through the same incident, each focused on different aspects — and it relies on specific probes designed to elicit tacit knowledge practitioners cannot articulate spontaneously. The method produced the empirical foundation for Klein's RPD model and has been adopted across dozens of research programs studying expertise in high-stakes domains.
The method emerged from Klein's early fire commander interviews, where he discovered that standard interview approaches produced thin descriptions of how experts decided. Practitioners would give abstract accounts that corresponded poorly to the rich cognitive processes the incidents actually involved. CDM's structured iteration, with its specific focus on non-routine incidents where decision-making was most cognitively demanding, surfaced the patterns, simulations, and anomaly-detection processes that thinner interview approaches had missed.
The method's relevance to AI design is direct. If AI systems are to support rather than supplant expert cognition, they must be designed on the basis of accurate understanding of how expert cognition operates. CDM provides the empirical technique for developing this understanding in specific domains, and Klein has used it extensively in his work on human-AI interaction, including his research with DARPA's Explainable AI program.
CDM's structure embodies an epistemological commitment that is unusual in cognitive science: the assumption that practitioners have privileged access to their own decision-making processes that can be surfaced through skilled interviewing, even when they cannot articulate the processes spontaneously. The assumption runs against behaviorist and computationalist traditions that treat internal cognitive processes as inaccessible or irrelevant. CDM's empirical productivity across dozens of domains has been one of the strongest arguments for taking expert self-reports seriously when they are elicited through appropriate techniques.
The method has been extended and refined over four decades, most notably in Crandall, Klein, and Hoffman's 2006 Working Minds, which provides the most complete practitioner's guide to CDM and related cognitive task analysis methods.
Klein developed CDM in the mid-1980s through iterative refinement of interview techniques with fire commanders. Early interviews using standard approaches produced descriptions that did not match the cognitive complexity the incidents obviously involved. Klein's innovation was to structure the interviews around specific non-routine incidents, probe iteratively with questions targeting different cognitive dimensions, and press for concrete details rather than accepting abstract summaries.
The method was formalized through Klein's work at Klein Associates and the development of what became the Naturalistic Decision Making movement. It was further refined through applications across domains including neonatal intensive care, military command, aviation, and, increasingly, AI system design.
Non-routine focus. The method targets specific incidents where routine decision-making broke down, surfacing cognitive processes that normally remain invisible.
Iterative structure. Multiple passes through the same incident, each focused on different cognitive dimensions.
Probe-driven elicitation. Specific question types designed to surface tacit knowledge practitioners cannot articulate spontaneously.
Empirical foundation. The method produced the evidence base for Klein's RPD model and much of the naturalistic decision-making literature.
AI design application. CDM provides the cognitive task analysis foundation for designing AI systems that support rather than supplant expert cognition.
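The iterative, probe-driven structure described above can be sketched as data. The following is a minimal illustration in Python, assuming the common four-sweep description of a CDM session (incident selection, timeline verification, deepening, and what-if queries) associated with Working Minds; the sweep goals and probe wordings here are illustrative paraphrases, not quotations from any published CDM protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Sweep:
    """One pass through the incident, focused on a single cognitive dimension."""
    name: str
    goal: str
    probes: list[str] = field(default_factory=list)

# Hypothetical interview plan; probe wordings are illustrative only.
cdm_session = [
    Sweep("incident selection",
          "choose a specific non-routine incident the practitioner handled",
          ["Can you recall a case where your expertise made the difference?"]),
    Sweep("timeline verification",
          "reconstruct the sequence of events and decision points",
          ["What happened next?",
           "When did you first notice something was off?"]),
    Sweep("deepening",
          "probe each decision point for cues, expectations, and options",
          ["What were you seeing at that moment?",
           "What did you expect to happen next?",
           "Were other options open to you?"]),
    Sweep("what-if queries",
          "test the boundaries of expertise by varying the incident",
          ["What would a less experienced person have done here?",
           "If that cue had been absent, would your assessment have changed?"]),
]

def interview_guide(session: list[Sweep]) -> list[tuple[str, str]]:
    """Flatten the sweep plan into an ordered list of (sweep, probe) pairs."""
    return [(s.name, p) for s in session for p in s.probes]

for sweep_name, probe in interview_guide(cdm_session):
    print(f"[{sweep_name}] {probe}")
```

The point of the sketch is structural: the same incident is traversed several times, and each sweep carries its own probe set, which is what lets later passes surface tacit knowledge that the first, open-ended pass does not.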
CDM's reliance on retrospective self-report has drawn skepticism from researchers who favor process-tracing or experimental methods. Klein's defense, grounded in four decades of empirical results, is that CDM's structured iteration surfaces cognitive processes that other methods cannot reach, and that the results are validated through their capacity to predict expert performance in novel situations.