This page lists every Orange Pill Wiki entry hyperlinked from Daniel Kahneman — On AI. 27 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; in each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
Kahneman and Shane Frederick's term for System 1's practice of answering an easier question when confronted with a hard one — without the answerer noticing the substitution has occurred.
The quality of subjective experience — being aware, there being something it is like to be you — and the single deepest unanswered question in both philosophy of mind and AI.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
Kahneman and Tversky's 1979 finding that losses hurt roughly twice as much as equivalent gains feel good — the asymmetry that explains the expert's resistance to AI tools more powerfully than any rational calculus.
Kahneman, Sibony, and Sunstein's 2021 framework for the random variability in professional decisions that should be identical — the under-recognized twin of bias, and the specific failure that AI systems most reliably eliminate.
Tversky and Kahneman's finding that people assign probabilities to their judgments that systematically exceed actual accuracy — a calibration failure that AI's smooth output makes worse by decoupling surface cues from underlying accuracy.
Kahneman and Tversky's 1979 replacement for expected utility theory — a descriptive model of how people actually evaluate uncertain outcomes, with consequences for every prediction about human response to AI.
Kahneman's functional description of two modes of cognition — fast, automatic, effortless System 1 and slow, deliberate, effortful System 2 — whose asymmetric relationship structures every judgment the human mind produces.
Tversky and Kahneman's 1974 demonstration that estimates start from an initial value and adjust insufficiently — the bias that makes every pre-AI projection of what is possible systematically wrong.
Tversky and Kahneman's 1973 finding that people judge probability by the ease of recall — the cognitive shortcut that makes the AI discourse a case study in systematic distortion at civilizational scale.
Thaler's term, demonstrated experimentally with Kahneman and Knetsch, for the way possession inflates value — the bias that explains why experienced professionals overvalue the skills AI is automating.
The cognitive shortcut by which System 1 treats ease of processing as a proxy for truth, familiarity, and quality — the specific mechanism that makes AI's polished output feel reliable whether or not it is.
The cultural habit of treating fluent AI output as competent AI output — an extension of the equation between eloquence and expertise that centuries of human interaction built.
Tversky and Kahneman's demonstration that the presentation of a problem — independent of its underlying facts — determines how it is evaluated. The same AI evidence produces opposite conclusions under "AI as gain" and "AI as loss" frames.
The research tradition Tversky and Kahneman founded in the 1970s to map the systematic departures of human judgment from rational ideals — the intellectual framework this entire book applies to the AI transition.
Kahneman's metaphor for System 2's operating posture — a supervisor capable of correcting System 1's errors but disposed, by default, to endorse whatever System 1 has already decided.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
Kahneman and Tversky's 1979 term for the systematic tendency to underestimate the time, costs, and risks of planned actions while overestimating their benefits — now inverted, complicated, and repurposed by AI collaboration.
Kahneman's acronym for System 1's tendency to construct the best coherent story from available information without flagging what is missing — the bias AI's polished output amplifies beyond historical precedent.
Israeli cognitive psychologist (1937–1996), Kahneman's collaborator of two decades, whose joint work founded the heuristics-and-biases program and produced the empirical foundation Kahneman later carried to the Nobel Prize he would have shared.
Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.