The Pre-Mortem Technique — Orange Pill Wiki
CONCEPT

The Pre-Mortem Technique

Klein's project-planning method in which a team imagines the project has already failed and works backward to identify the causes — and the social process AI cannot reproduce.

The pre-mortem is Klein's most widely adopted practical invention: a structured technique for surfacing project risks by asking a team to imagine the project has already failed and to work backward to explain why. Developed in the 1990s, the method exploits a known feature of human cognition: people evaluating their own plans are subject to confirmation bias, interpreting information in ways that confirm the plan's viability. The pre-mortem inverts the frame (failure is given; the task is to explain it), which creates psychological permission to identify problems the normal planning process would suppress. Widely adopted in military planning, corporate strategy, medical safety, and AI risk assessment, the technique has a social architecture that Klein's 2025 analysis argued AI cannot reproduce: beyond generating risk lists, the in-person pre-mortem builds team calibration, creates psychological safety for dissent, and generates shared mental models that enable coordination when the anticipated risks materialize.

In the AI Story


Klein's April 2025 essay 'Can AI do pre-mortems for us?' became a focal point of his engagement with the AI transition. He acknowledged that large language models produce 'surprisingly good' pre-mortem outputs — coherent, comprehensive, plausible risk lists generated in minutes rather than the hour or more an in-person pre-mortem requires. By metrics of speed, breadth, and cost, the AI version was superior.

But Klein then identified the social functions AI cannot replicate. The first is psychological safety: the pre-mortem's hypothetical framing creates a protected space in which team members can voice concerns they would not raise directly. A junior member who has noticed a flaw in a senior colleague's pet project faces a social cost for direct criticism. The pre-mortem removes that cost: she is not criticizing the plan, she is explaining why the project failed. The AI pre-mortem bypasses this function entirely, because the LLM faces no social pressure and needs no protective framing.

The second function is team calibration. Experienced leaders use the pre-mortem to assess their teams: which members identify which risks reveals the team's distribution of expertise, attention, and concern. A leader who notices no one identified regulatory risk learns about a blind spot the risk list alone would not reveal. The AI pre-mortem produces the risk list without the calibration. The leader knows what risks the AI identified but not what her team would have identified.

The third function is trust-building through shared vulnerability. Teams that pre-mortem together develop shared mental models — aligned understandings that enable coordination when difficulties materialize. The team that received an AI-generated list has the information but not the shared experience. Klein's conclusion was characteristically precise: 'AI can devolve social tasks and coordination into data tabulations — and we may be poorer for it.'

The pre-mortem case is a microcosm of a broader pattern the Klein volume illuminates across the AI transition. In domain after domain, AI systems produce the informational output of complex human processes while being structurally incapable of reproducing the social processes that gave the output its meaning and value.

Origin

Klein developed the pre-mortem in the mid-1990s while consulting with organizations on project risk management. He was drawing on earlier work in prospective hindsight — research showing that people generate more specific risk predictions when asked to imagine a future outcome as already determined. Klein's innovation was the team-based structured application of this cognitive technique.

The method spread rapidly through business consulting in the 2000s and became particularly prominent in AI development and deployment contexts in the 2020s, where teams used it to surface risks that conventional risk assessment had missed.

Key Ideas

Frame inversion. The technique asks 'why did this fail?' rather than 'what could go wrong?' — exploiting the cognitive asymmetry between prospective and retrospective analysis.

Psychological safety. The hypothetical framing creates protected space for dissent that normal planning meetings suppress.

Team calibration. The leader learns about the team by observing which risks which members identify.

Shared mental models. The collective process builds aligned understanding that supports coordination under pressure.

AI's partial replication. Language models produce good risk lists but cannot reproduce the social functions that make the list valuable in context.

Debates & Critiques

Klein's critique of AI pre-mortems drew responses from AI researchers arguing that hybrid approaches, in which AI-generated risk lists are reviewed and extended by teams, could capture the benefits of both. Klein's position is that hybrid approaches are valuable but must preserve the in-person team process rather than substitute AI-generated content for it, because the social functions depend on the active construction of the analysis, not its passive review.

Appears in the Orange Pill Cycle

Further reading

  1. Klein, G. (2007). Performing a project premortem. Harvard Business Review, 85(9), 18–19.
  2. Klein, G. (2025). Can AI do pre-mortems for us? Psychology Today, April.
  3. Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). Back to the future: Temporal perspective in the explanation of events. Journal of Behavioral Decision Making, 2(1), 25–38.
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.