Prospect theory is the formal architecture of human decision-making under risk, documented through decades of experiments showing that people violate the axioms of expected utility theory in systematic, predictable, mathematically tractable ways. The theory replaces the assumption of absolute-value evaluation with reference-point-dependent evaluation; replaces linear probability weighting with an inverse-S-shaped weighting function that overweights small probabilities and underweights large ones; and replaces symmetric utility with a value function that is concave for gains, convex for losses, and steeper for losses than for gains. Each component has direct implications for how people process the AI transition: the certainty of losses is overweighted relative to the probability of gains, diminishing sensitivity produces habituation to both the excitement and the dread, and catastrophic AI scenarios receive attention wildly disproportionate to their probability because small probabilities are inflated in the weighting function.
Prospect theory emerged from Tversky and Kahneman's systematic dismantling of expected utility theory through a series of experiments demonstrating preference reversals, framing effects, and the certainty effect. Subjects consistently violated the independence axiom — the foundational assumption of rational choice — in patterns too regular to dismiss as noise. The 1979 paper organized these violations into a coherent alternative model, and the 1992 extension (Advances in Prospect Theory: Cumulative Representation of Uncertainty) generalized it to prospects with any number of outcomes and to uncertainty as well as risk.
The value function's S-shape makes specific predictions for the AI transition that are often missed. The concavity on the gain side predicts diminishing sensitivity to positive AI experiences: the first encounter with Claude's capabilities produces disproportionate excitement because it occupies the steep portion of the curve. Subsequent experiences, even when objectively more impressive, produce diminishing emotional impact. This maps precisely onto the trajectory that The Orange Pill describes between initial vertigo and later grinding productive compulsion.
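The diminishing-sensitivity claim can be sketched numerically. The function below is the Tversky-Kahneman (1992) value function with their median parameter estimates (α = β = 0.88, λ = 2.25); the specific increments compared are illustrative, not from the source.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Reference-point-dependent value: concave for gains,
    convex and steeper for losses (Tversky & Kahneman 1992)."""
    if x >= 0:
        return x ** alpha           # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)    # losses loom larger than gains

# Diminishing sensitivity: the first unit of gain adds more felt
# value than the tenth, though the objective increments are identical.
first_increment = value(1) - value(0)
tenth_increment = value(10) - value(9)
print(first_increment > tenth_increment)  # True
```

The same curvature on the loss side predicts habituation to dread: the tenth unsettling capability demo lands with less force than the first.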
The probability weighting function produces two systematic distortions in the AI discourse. Small probabilities — of catastrophic outcomes, civilizational collapse, mass unemployment — are overweighted, producing fear disproportionate to actual probability. Large probabilities — of moderate, uneven, messy outcomes — are underweighted, producing attentional neglect of the most likely scenarios. The availability heuristic amplifies this distortion by making catastrophic scenarios more mentally accessible than mundane ones.
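Both distortions fall out of the cumulative prospect theory weighting function; a minimal sketch, using Tversky and Kahneman's 1992 median estimate for gains (γ = 0.61):

```python
def weight(p, gamma=0.61):
    """Inverse-S decision weight: overweights small probabilities,
    underweights large ones (Tversky & Kahneman 1992)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A 1% catastrophe is felt as roughly a 5.5% one...
print(round(weight(0.01), 3))
# ...while a 90%-likely mundane outcome is felt as roughly 71%-likely.
print(round(weight(0.90), 3))
```

The numbers are illustrative, but the direction is the theory's core prediction: the tails of the AI outcome distribution are amplified and its bulk is muted.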
The certainty effect adds a further layer: outcomes perceived as certain are weighted disproportionately relative to merely probable ones — the drop from certainty to near-certainty looms far larger than an equal drop elsewhere in the probability scale. In the AI transition, the certain loss of implementation skill weighs more than the probable gain of amplified judgment, producing preference for the status quo independent of actual expected values. This is not irrationality. It is the predictable output of a cognitive architecture optimized for a different environment than the one it now faces.
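This status-quo preference can be reproduced with the 1992 functions and stylized numbers (the magnitudes below are hypothetical, chosen only to make the expected value favor the transition):

```python
def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Stylized transition: a certain loss of 10 units of existing skill,
# against an 80% chance of gaining 20 units of amplified capability.
certain_loss, probable_gain, p_gain = -10, 20, 0.8

expected_value = p_gain * probable_gain + certain_loss  # +6 on paper
# The certain loss gets full weight (w(1) = 1) and the loss multiplier;
# the probable gain is discounted by the weighting function.
prospect_value = weight(p_gain) * value(probable_gain) + value(certain_loss)

print(expected_value > 0)   # True  — the gamble is worth taking
print(prospect_value < 0)   # True  — but it feels like a net loss
```

Expected value says take the deal; the prospect-theoretic evaluation says refuse it. The gap between the two is the status-quo preference the paragraph describes.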
Prospect theory was developed in Jerusalem and at the Oregon Research Institute in the early 1970s, as Tversky and Kahneman worked through the mathematical implications of the preference-reversal experiments that had convinced them expected utility theory was empirically false. The formal paper was written in 1977 and published in 1979 after extensive revision.
The theory's impact on economics was gradual and initially resisted, but by the 1990s behavioral economics had emerged as a coherent field organized around prospect theory's insights. Kahneman's 2002 Nobel Prize in Economics recognized the work that Tversky, who died in 1996, could not share.
Reference-point dependence. Outcomes are evaluated relative to a reference point, not in absolute terms — and the reference point is manipulable by framing, anchoring, and narrative.
S-shaped value function. Concave for gains (diminishing sensitivity to positive outcomes), convex for losses (diminishing sensitivity to negative ones), with a steeper slope for losses.
Nonlinear probability weighting. Small probabilities are overweighted (producing fear of low-probability catastrophe) and large probabilities are underweighted (producing neglect of high-probability mundane outcomes).
The certainty effect. Certain outcomes receive disproportionate weight relative to merely probable ones, explaining why the certain loss of existing skill is weighted more heavily than the probable gain of amplified capability.
Diminishing sensitivity. Both gains and losses produce less psychological impact as they accumulate — which predicts habituation to AI's capabilities on both the thrilling and the threatening dimensions.
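The first component in the list — reference-point dependence — is the one most easily shown directly: the same objective outcome reads as a gain or a loss depending on the frame. A minimal sketch with the 1992 value function and hypothetical numbers:

```python
def value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function, median parameters."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

outcome = 90  # the same objective endpoint in both frames

felt_from_zero = value(outcome - 0)      # framed against nothing: a gain
felt_from_hundred = value(outcome - 100) # framed against an expected 100: a loss

print(felt_from_zero > 0)     # True
print(felt_from_hundred < 0)  # True
```

Ending with 90 after expecting nothing feels like winning; ending with 90 after expecting 100 feels like losing — and because losses are weighted at roughly 2.25 times gains, the second frame stings more than the first pleases. Narratives about the AI transition set this reference point.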
Whether prospect theory's parameters are stable across cultures, contexts, and stakes remains contested. High-stakes field evidence sometimes deviates from laboratory estimates. Recent work on large language model behavior has shown that the models themselves exhibit prospect-theoretic patterns — raising the question of whether this reflects genuine bias absorption or merely statistical regularity in the training distribution.