This page lists every Orange Pill Wiki entry hyperlinked from Philip Tetlock — On AI, 29 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open its entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The governing metaphor of The Orange Pill — AI as a signal-amplifier that carries whatever is fed into it further, with terrifying fidelity. Buber's framework extends the metaphor: the amplifier clarifies what was already there, which makes…
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The practice of revising probability estimates proportionally as evidence accumulates — neither ignoring new information nor overreacting to it — Tetlock's operational core of superforecasting.
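The proportional revision this entry describes is Bayes' rule: the posterior belief scales the prior by how much more likely the evidence is under the hypothesis than under its negation. A minimal sketch (the function name and the example numbers are illustrative, not drawn from the entry):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E.

    prior           -- P(H) before seeing E
    p_e_given_h     -- P(E | H)
    p_e_given_not_h -- P(E | not H)
    """
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# A forecaster at 30% sees evidence twice as likely if the hypothesis is true:
# the estimate rises, but only proportionally -- to about 46%, not to certainty.
posterior = bayes_update(0.30, p_e_given_h=0.8, p_e_given_not_h=0.4)
```

Neither ignoring the evidence (staying at 0.30) nor overreacting to it (jumping near 1.0) is what the arithmetic licenses, which is the discipline the entry names.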

The empirical finding that the correspondence between confidence and accuracy improves through practice — assigning probabilities, tracking outcomes, adjusting based on feedback — a skill AI threatens to atrophy.
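The feedback loop the entry describes (assign probabilities, track outcomes, adjust) is usually scored with the Brier score, the mean squared gap between stated probabilities and what actually happened. A minimal sketch, with illustrative forecasts; lower scores are better:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Four events, half of which occur. Confident 90% forecasts score worse
# here than honest 50% forecasts, because the confidence wasn't earned.
overconfident = brier_score([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0])
calibrated = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0])
```

It is exactly this running score that gives practice its corrective signal, and its absence that lets the skill atrophy.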
The mechanism by which AI systems intensify the human tendency to seek and remember information confirming existing beliefs — by mirroring cognitive signatures with statistical precision and reducing the diversity of inputs that unmediated …
Tetlock's empirical taxonomy of cognitive styles: foxes know many things and hold multiple frameworks; hedgehogs know one big thing and filter all evidence through it — a distinction that predicts forecasting accuracy.

The tendency to see past events as having been inevitable after learning the outcome — the mechanism by which surprise becomes 'obvious in retrospect' and five-stage patterns appear to predict.
The tendency to process information in ways that protect membership in valued social groups — the mechanism by which expertise becomes a liability when predictions carry reputational stakes.
The dual-lens framework for prediction: the inside view attends to case-specific features; the outside view attends to base rates from similar cases — superforecasters integrate both.
The empirical pattern that domain knowledge increases confidence faster than it increases accuracy — producing experts who are more certain and less calibrated than informed non-specialists.
The Tetlockian thesis that good judgment begins with good questions — and that the capacity to formulate questions worth asking is the human contribution AI cannot replicate.
The probability superforecasters assigned to AI benchmark achievements that actually occurred in 2025 — Tetlock's paradigm case of radical miscalibration even among the world's best predictors.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.
Tetlock's methodological baseline: expert predictions, on average, performed no better than random guessing — a finding that became shorthand for the failure of credentialed expertise.
The research tradition in the AI discourse organized around depth preservation — measuring progress by the maintenance of craft, embodied knowledge, and the formative friction of struggle, and identifying AI as a threat to the conditions …
Daston's synthesis of the recurrent pattern observed across knowledge-technology transitions: threshold, exhilaration, resistance, adaptation, settled use — with institutional infrastructure always arriving later than the technology requir…
The structural inversion the AI transition produces — when building becomes easy, scarcity migrates from execution to the capacity to decide what deserves to be built.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
The Orange Pill's figure for those who hold the exhilaration and the loss simultaneously — recognized here as an intuitive formulation of Heideggerian Gelassenheit.
The research tradition in the AI discourse organized around capability expansion and democratization — measuring progress by productivity gains, adoption speed, and the compression of the imagination-to-artifact ratio.
Tetlock and Dan Gardner's 2015 bestseller documenting the Good Judgment Project — proving that ordinary citizens trained in probabilistic reasoning outperform intelligence analysts with access to classified information.
Xingqi Maggie Ye and Aruna Ranganathan's 2026 Harvard Business Review ethnography of an AI-augmented workplace — the most rigorous empirical documentation to date of positive feedback dynamics in human-machine loops.
Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.
Latvian-British philosopher (1909–1997) whose fox-hedgehog distinction — borrowed by Tetlock — provided the cognitive-style taxonomy that predicted forecasting accuracy across two decades of data.
Canadian-American psychologist (b. 1954) whose twenty-year study of expert prediction demonstrated that credentialed forecasters perform no better than chance — and whose application of Isaiah Berlin's fox-hedgehog distinction transformed how we understand judgment.
The 2011–2015 IARPA-funded forecasting tournament that demonstrated ordinary citizens could outperform intelligence analysts — Tetlock's proof that calibrated judgment is a trainable skill, not an innate gift.
The early 2026 repricing event in which a trillion dollars of market value vanished from SaaS companies — the critical-stage moment when AI's displacement of software's code value became visible to markets.
The moment during the composition of The Orange Pill when Claude produced a passage that was syntactically perfect and philosophically wrong — misapplying Gilles Deleuze's concept of "smooth space" to support a connection the concept does n…