Superforecasting, published by Crown in 2015 and co-authored with journalist Dan Gardner, translated the Good Judgment Project's findings into operational methodology. The book argued that forecasting excellence is a skill, not a talent, and enumerated the specific cognitive habits that distinguish superforecasters: thinking in probabilities, breaking complex questions into components, updating beliefs based on evidence, seeking disconfirmation, and avoiding identity-protective reasoning. The book became a New York Times bestseller and required reading in intelligence agencies, corporations, and policy schools worldwide. Its central claim — that structured training produces measurable, durable improvement in judgment — represented a paradigm shift from viewing forecasting as an art practiced by credentialed experts to viewing it as a discipline accessible to anyone willing to practice.
The Good Judgment Project (2011–2015) was Tetlock's entry in an IARPA-funded tournament that pitted five research teams against one another in a multi-year forecasting competition. Tetlock's team recruited ordinary citizens (not intelligence professionals, not domain experts, just people who volunteered) and provided a one-hour training module in probabilistic reasoning. The trained forecasters outperformed intelligence analysts with access to classified information by roughly thirty percent. The margin was so large that IARPA declared Tetlock's team the winner and shut down the tournament two years early. The mechanism of victory was replicable: the superforecasters practiced a disciplined method that could be taught, learned, and maintained through continued application.
The book's ten commandments for superforecasters became the operational core of the framework: triage (focus effort where it matters), break problems into components, balance inside and outside views, update incrementally, distinguish degrees of confidence, balance overreaction and underreaction, look for clashing causal forces, compare forecasts with baselines and benchmarks, synthesize multiple perspectives, and cultivate a growth mindset about forecasting ability. Each commandment was grounded in specific empirical findings from the tournament data and illustrated with cases where applying the principle improved accuracy. Together they formed not a loose collection of tips but a deliberate practice regimen whose components reinforced each other.
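To make "update incrementally" concrete, here is a minimal sketch of Bayesian updating in odds form: start from an outside-view base rate, then fold in inside-view evidence one piece at a time. The base rate, the evidence items, and their likelihood ratios are invented for illustration and do not come from the tournament data.

```python
def update(prob: float, likelihood_ratio: float) -> float:
    """Apply one piece of evidence to a probability via Bayes' rule.

    likelihood_ratio = P(evidence | event) / P(evidence | no event).
    Working in odds makes each update a single multiplication.
    """
    odds = prob / (1.0 - prob)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

# Outside view: start from a base rate, not a gut feeling (20% here, invented).
forecast = 0.20

# Inside view: fold in case-specific evidence as it arrives.
# Each likelihood ratio below is hypothetical.
for lr in [2.0,   # moderately supportive news item
           1.2,   # weak signal, small nudge
           0.8]:  # mildly disconfirming report
    forecast = update(forecast, lr)
    print(f"updated forecast: {forecast:.2f}")
```

Run as written, the forecast moves 0.20 → 0.33 → 0.29 → 0.24: many small adjustments rather than one dramatic swing, which is the behavior the commandment prescribes.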
The book engaged the AI question obliquely. In 2015, large language models had not yet demonstrated their current capabilities, and Tetlock expressed difficulty imagining existing AI doing what superforecasters collectively accomplished. The human contribution — synthesizing across domains, detecting subtle patterns in qualitative information, updating based on weak signals — seemed irreducibly human. Within a decade, this assessment would require substantial revision. By 2024, Tetlock's research demonstrated that LLM ensemble predictions rivaled human crowd accuracy, and by 2025 he was predicting that within three years, unassisted human forecasting would become obsolete in serious policy contexts. Superforecasting captured the moment before AI crossed the capability threshold, providing a baseline against which the subsequent transformation could be measured.
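Both the human-crowd and LLM-ensemble comparisons above rest on the same mechanical step: aggregating many probability forecasts into one. One common recipe in the forecast-aggregation literature associated with the project is to average forecasts in log-odds and then extremize the result. The sketch below is an illustration under assumptions, not the book's method: the five forecasts are invented and the extremizing exponent of 2.5 is a plausible illustrative value.

```python
import math

def pool_logodds(probs: list[float], a: float = 2.5) -> float:
    """Aggregate probability forecasts: average in log-odds, then extremize.

    Averaging log-odds pools the forecasts; raising the pooled odds to a
    power a > 1 pushes the result away from 0.5, compensating for the fact
    that each forecaster sees only part of the available evidence.
    """
    mean_logodds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1.0 / (1.0 + math.exp(-a * mean_logodds))

# Five hypothetical forecasts of the same event; whether they come from
# humans or models, the aggregation step is identical.
crowd = [0.55, 0.60, 0.70, 0.65, 0.58]
print(f"simple mean: {sum(crowd) / len(crowd):.2f}")   # 0.62
print(f"extremized:  {pool_logodds(crowd):.2f}")       # 0.77
```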
The book emerged from the Good Judgment Project's unexpected success. When Tetlock's team won the IARPA tournament decisively, the intelligence community and corporate risk managers wanted to know: what did the superforecasters do that everyone else did not? Dan Gardner, a Canadian journalist who had written extensively on prediction and risk, approached Tetlock about translating the research into accessible form. The collaboration paired Tetlock's empirical rigor with Gardner's narrative clarity, producing a book that could be read by practitioners who would never engage with the statistical appendices of Expert Political Judgment. The book's commercial and institutional success demonstrated that the appetite for better judgment extended well beyond the academic audience, and that the fox's method could be packaged in ways the broader culture could absorb.
Forecasting as trainable skill. An hour of training in probabilistic reasoning produces measurable improvement in prediction accuracy — forecasting is not a gift but a practice.
Granular probabilities matter. The difference between 'likely' and 'sixty-seven percent probable' is the difference between a verbal gesture and a commitment that can be scored and improved (see the Brier-score sketch after this list).
Frequent small updates. Superforecasters adjusted their estimates continuously as evidence accumulated — neither ignoring new information nor overreacting to it.
Disconfirmation search. Actively seeking evidence that would prove one's forecast wrong is the single most reliable habit distinguishing superior from average forecasters.
Growth mindset about judgment. Believing that forecasting ability can improve through practice is a prerequisite for the deliberate practice that produces improvement.
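To show what "scored" means in practice, here is a minimal Brier-score sketch. The Brier score, the mean squared error between probability forecasts and 0/1 outcomes, is the accuracy measure the tournament used; the forecasts and question resolutions below are invented.

```python
def brier(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is perfect, 0.25 is what a constant 50% forecast earns,
    and lower is better.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical: two forecasters, same four questions, same resolutions.
outcomes = [1, 0, 1, 1]
vague    = [0.6, 0.4, 0.6, 0.6]      # 'likely' / 'unlikely' as rounded buckets
granular = [0.67, 0.15, 0.82, 0.71]  # committed, scoreable numbers

print(f"vague:    {brier(vague, outcomes):.3f}")     # 0.160
print(f"granular: {brier(granular, outcomes):.3f}")  # 0.062
```

The point is not that 0.062 beats 0.160 on these made-up numbers, but that only numeric forecasts can be scored at all, and only what can be scored can be systematically improved.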