Belief updating, in Tetlock's framework, is the disciplined revision of probability estimates in response to new evidence. The revision should be proportional: weak evidence produces small updates, strong evidence produces large updates, and both the direction and the size of the update should track the diagnosticity of the evidence. Superforecasters in the Good Judgment Project updated their forecasts frequently — often multiple times per week as news arrived — but each update was measured rather than reactive. They practiced Bayesian reasoning intuitively, adjusting priors based on likelihood ratios without necessarily performing formal calculations. The discipline is difficult to maintain because the mind prefers coherent narratives to probability distributions, and updating threatens narrative coherence. AI tools further threaten the discipline by providing comprehensive-seeming answers that discourage the iterative refinement that belief updating requires.
The mathematics of belief updating is straightforward Bayesian inference: P(H|E) = P(E|H) × P(H) / P(E). The posterior probability of a hypothesis given evidence equals the likelihood of the evidence if the hypothesis were true, times the prior probability of the hypothesis, divided by the overall probability of observing that evidence. In practice, superforecasters did not perform these calculations explicitly. They developed an intuitive sense of how much weight different kinds of evidence should carry, and they adjusted their estimates accordingly. The intuition was built through practice: making forecasts, observing outcomes, noticing which kinds of evidence were diagnostic and which were not, and calibrating the internal sense of 'how much should this move me?' against the empirical track record.
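The arithmetic behind that intuition can be made concrete. The sketch below (an illustration, not anything from the Good Judgment Project; the function name `posterior` and the numbers are invented for the example) computes P(H|E) by expanding P(E) over the two cases H and not-H:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
    with P(E) expanded as P(E|H)P(H) + P(E|~H)P(~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Strongly diagnostic evidence (4x more likely under H) moves a
# 30% prior substantially:
strong = posterior(0.3, 0.8, 0.2)   # about 0.63

# Weakly diagnostic evidence (barely more likely under H) moves
# the same prior only slightly:
weak = posterior(0.3, 0.55, 0.45)   # about 0.34

# Non-diagnostic evidence (equally likely either way) moves it
# not at all:
flat = posterior(0.3, 0.5, 0.5)     # 0.30
```

The three calls illustrate proportionality directly: the same prior moves a lot, a little, or not at all depending only on how diagnostic the evidence is.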
The AI environment threatens belief updating through premature closure. The user asks a question; the AI provides a comprehensive answer; the question feels answered. The iterative process of asking, receiving partial information, updating the estimate, asking a refined question, updating again — the process through which superforecasters navigated complex predictions — is bypassed. The AI's answer is not partial; it is complete, or appears so. The completeness discourages further inquiry, which means the updating that would occur through continued engagement with evolving information does not occur. The user receives a snapshot answer to a question that should have been approached as a moving target.
Segal's description of his collaboration with Claude includes moments of genuine belief updating: Claude offers a connection he had not seen, the connection shifts his understanding, the argument adjusts. These are the productive instances. But Segal also describes moments where the smooth output produced premature certainty — where he almost kept a passage because it sounded good, catching himself only later when the nagging returned. The difference between these two outcomes is the presence or absence of the skeptical pause — the moment when the professional treats the AI's output as a hypothesis to be tested rather than a conclusion to be accepted. The pause is where updating lives, and the pause is what the AI's comprehensive fluency makes it easiest to skip.
The formal Bayesian framework for belief revision dates to Thomas Bayes' eighteenth-century theorem, but its application to human judgment is a twentieth-century development. Ward Edwards, in the 1960s, documented that people update their beliefs too conservatively — they adjust probabilities in the right direction but by amounts smaller than Bayes' theorem prescribes. Kahneman and Tversky's heuristics-and-biases program in the 1970s documented the opposite error in some contexts: overreaction to vivid, available information. Tetlock's contribution was to identify the specific updating discipline that superforecasters practiced: frequent small adjustments proportional to evidence quality, avoiding both the conservatism of ignoring information and the overreaction of chasing every headline. The discipline could be taught, and the teaching produced measurable improvement in both calibration and resolution.
Proportional adjustment. The magnitude of the update should match the diagnosticity of the evidence — weak signals move the estimate slightly, strong signals move it substantially.
Frequency matters. Updating frequently as evidence arrives maintains calibration better than waiting for major events, because small adjustments are easier to make and less threatening to narrative coherence.
Directionality from likelihood ratios. Evidence that is more probable if the hypothesis is true than if it is false should increase the probability assigned to the hypothesis — the Bayesian discipline the unaided mind imperfectly approximates.
Avoid anchoring. Initial estimates should be treated as provisional starting points, not commitments to defend — the willingness to move substantially from the anchor distinguishes good forecasters from poor ones.
AI discourages iteration. Comprehensive-seeming answers produce premature closure, eliminating the iterative questioning and partial-information processing through which belief updating occurs.
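The first four principles can be sketched together in odds form, where sequential updating is just multiplication by likelihood ratios. This is a minimal illustration with invented numbers (the function name `update` and the evidence sequence are assumptions for the example, not data from any forecasting study):

```python
def update(prob, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds equal
    prior odds times the likelihood ratio P(E|H) / P(E|~H)."""
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Start from an even prior and update as each piece of evidence
# arrives. LR > 1 raises the estimate, LR < 1 lowers it
# (directionality); LR near 1 moves it only slightly
# (proportionality); updating after every item keeps each
# adjustment small (frequency).
p = 0.5
for lr in [1.2, 1.1, 0.9, 2.0]:
    p = update(p, lr)
# p ends near 0.70 -- the product of the ratios, not the anchor,
# determines where the estimate lands.
```

Because odds multiply, the order of the evidence does not matter and the initial estimate carries no special weight once enough likelihood ratios have accumulated, which is the formal counterpart of treating the anchor as provisional.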