
Philip Tetlock — On AI

A reading-companion catalog of the 29 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Philip Tetlock — On AI uses as stepping stones for thinking through the AI revolution.

Each entry is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone. Within an entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.

Concept (20)
AI as Amplifier
Concept

The governing metaphor of The Orange Pill — AI as a signal-amplifier that carries whatever is fed into it further, with terrifying fidelity. Buber's framework extends the metaphor: the amplifier clarifies what was already there, which makes…

Ascending Friction
Concept

The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.

Belief Updating as Forecasting Discipline
Concept

The practice of revising probability estimates proportionally as evidence accumulates — neither ignoring new information nor overreacting to it — the operational core of Tetlock's superforecasting.

Calibration as Trainable Skill
Concept

The empirical finding that the correspondence between confidence and accuracy improves through practice — assigning probabilities, tracking outcomes, adjusting based on feedback — a skill AI threatens to atrophy.

Confirmation Bias Amplification
Concept

The mechanism by which AI systems intensify the human tendency to seek and remember information confirming existing beliefs — by mirroring cognitive signatures with statistical precision and reducing the diversity of inputs that unmediated …

Fox-Hedgehog Distinction
Concept

Tetlock's empirical taxonomy of cognitive styles: foxes know many things and hold multiple frameworks; hedgehogs know one big thing and filter all evidence through it — a distinction that predicts forecasting accuracy.

Hindsight Bias and Creeping Determinism
Concept

The tendency to see past events as having been inevitable after learning the outcome — the mechanism by which surprise becomes 'obvious in retrospect' and five-stage patterns appear to predict.

Identity-Protective Cognition
Concept

The tendency to process information in ways that protect membership in valued social groups — the mechanism by which expertise becomes a liability when predictions carry reputational stakes.

Inside View and Outside View
Concept

The dual-lens framework for prediction: the inside view attends to case-specific features; the outside view attends to base rates from similar cases — superforecasters integrate both.

Overconfidence in Expertise
Concept

The empirical pattern that domain knowledge increases confidence faster than it increases accuracy — producing experts who are more certain and less calibrated than informed non-specialists.

Questioning as the Core of Judgment
Concept

The Tetlockian thesis that good judgment begins with good questions — and that the capacity to formulate questions worth asking is the human contribution AI cannot replicate.

The 9.7 Percent
Concept

The probability superforecasters assigned to AI benchmark achievements that actually occurred in 2025 — Tetlock's paradigm case of radical miscalibration even among the world's best predictors.

The Beaver's Dam
Concept

The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.

The Dart-Throwing Chimpanzee
Concept

Tetlock's methodological baseline: expert predictions, on average, performed no better than random guessing — a finding that became shorthand for the failure of credentialed expertise.

The Elegist Tradition
Concept

The research tradition in the AI discourse organized around depth preservation — measuring progress by the maintenance of craft, embodied knowledge, and the formative friction of struggle, and identifying AI as a threat to the conditions …

The Five Stages of Technology Transition
Concept

Daston's synthesis of the recurrent pattern observed across knowledge-technology transitions: threshold, exhilaration, resistance, adaptation, settled use — with institutional infrastructure always arriving later than the technology requir…

The Judgment Bottleneck
Concept

The structural inversion the AI transition produces — when building becomes easy, scarcity migrates from execution to the capacity to decide what deserves to be built.

The Luddite Response
Concept

The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.

The Silent Middle
Concept

The Orange Pill's figure for those who hold the exhilaration and the loss simultaneously — recognized here as an intuitive formulation of Heideggerian Gelassenheit.

The Triumphalist Tradition
Concept

The research tradition in the AI discourse organized around capability expansion and democratization — measuring progress by productivity gains, adoption speed, and the compression of the imagination-to-artifact ratio.

Technology (1)
Large Language Models
Technology

Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…

Work (2)
Superforecasting: The Art and Science of Prediction
Work

Tetlock and Dan Gardner's 2015 bestseller documenting the Good Judgment Project — proving that ordinary citizens trained in probabilistic reasoning outperform intelligence analysts with access to classified information.

The Berkeley Study
Work

Xingqi Maggie Ye and Aruna Ranganathan's 2026 Harvard Business Review ethnography of an AI-augmented workplace — the most rigorous empirical documentation to date of positive feedback dynamics in human-machine loops.

Person (3)
Byung-Chul Han
Person

Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.

Isaiah Berlin
Person

Latvian-British philosopher (1909–1997) whose fox-hedgehog distinction — borrowed by Tetlock — provided the cognitive-style taxonomy that predicted forecasting accuracy across two decades of data.

Philip Tetlock
Person

Canadian-American psychologist (b. 1954) whose twenty-year study of expert prediction demonstrated that credentialed forecasters perform no better than chance — and whose fox-hedgehog distinction transformed how we understand judgment.

Event (3)
Good Judgment Project
Event

The 2011–2015 IARPA-funded forecasting tournament that demonstrated ordinary citizens could outperform intelligence analysts — Tetlock's proof that calibrated judgment is a trainable skill, not an innate gift.

Software Death Cross
Event

The early 2026 repricing event in which a trillion dollars of market value vanished from SaaS companies — the critical-stage moment when AI's displacement of software's code value became visible to markets.

The Deleuze Error
Event

The moment during the composition of The Orange Pill when Claude produced a passage that was syntactically perfect and philosophically wrong — misapplying Gilles Deleuze's concept of "smooth space" to support a connection the concept does n…

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.