Daniel Kahneman — On AI — Wiki Companion

A reading-companion catalog of the 27 Orange Pill Wiki entries hyperlinked from Daniel Kahneman — On AI — the people, concepts, works, events, and technologies the book treats as stepping stones for thinking through the AI revolution. Each card opens a deeper-dive entry; within an entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.

Concept (22)
Ascending Friction

The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.

Attribute Substitution

Kahneman and Shane Frederick's term for System 1's practice of answering an easier question when confronted with a hard one — without the answerer noticing the substitution has occurred.

Consciousness

The quality of subjective experience — being aware, there being something it is like to be you — and the single deepest unanswered question in both philosophy of mind and AI.

Democratization of Capability (Senian Reading)

The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?

Flow State

Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.

Human-AI Collaboration

The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."

Loss Aversion

Kahneman and Tversky's 1979 finding that losses hurt roughly twice as much as equivalent gains feel good — the asymmetry that explains the expert's resistance to AI tools more powerfully than any rational calculus.
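
A rough quantitative sketch, using the median parameter estimates from Tversky and Kahneman's 1992 follow-up rather than anything stated in this entry: the prospect-theory value function weighs a loss about \lambda \approx 2.25 times as heavily as an equal-sized gain,

v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda(-x)^{\alpha} & x < 0 \end{cases}, \qquad \alpha \approx 0.88, \quad \lambda \approx 2.25,

so under these parameters a 50-50 gamble risking $100 becomes attractive only once the potential gain clears roughly $250.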

Noise in Human Judgment

Kahneman, Sibony, and Sunstein's 2021 framework for the random variability in professional decisions that should be identical — the under-recognized twin of bias, and the specific failure that AI systems most reliably eliminate.
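
The framework rests on a standard squared-error identity, stated here as a sketch (the labels are the book's; the algebra is the ordinary bias-variance decomposition):

\text{MSE} = \text{bias}^2 + \text{noise}^2, \qquad (\text{system noise})^2 = (\text{level noise})^2 + (\text{pattern noise})^2,

where noise is the standard deviation of judgments across professionals facing the same case, so a unit of noise damages accuracy exactly as much as a unit of bias.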

Overconfidence and the Calibration Problem

Tversky and Kahneman's finding that people assign probabilities to their judgments that systematically exceed actual accuracy — a calibration failure that AI's smooth output makes worse by decoupling surface cues from underlying accuracy.
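
Calibration is operationally testable; below is a minimal illustrative sketch in Python (the function name and data are hypothetical, not drawn from the literature): group judgments by stated confidence and compare each group's confidence to its observed hit rate.

# Minimal sketch: overconfidence as a measurable gap between stated
# confidence and observed accuracy.
from collections import defaultdict

def calibration_table(judgments):
    """judgments: iterable of (stated confidence in [0, 1], was_correct)."""
    buckets = defaultdict(list)
    for confidence, correct in judgments:
        buckets[round(confidence, 1)].append(correct)
    # Hit rate per confidence bucket; a calibrated judge matches the key.
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

# Judges who say "90% sure" but are right 7 times out of 10:
data = [(0.9, True)] * 7 + [(0.9, False)] * 3
print(calibration_table(data))  # {0.9: 0.7} -> a 20-point overconfidence gap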

Prospect Theory

Kahneman and Tversky's 1979 replacement for expected utility theory — a descriptive model of how people actually evaluate uncertain outcomes, with consequences for every prediction about human response to AI.
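
A compressed skeleton of the 1979 model, with functional forms from the paper that this entry does not spell out: a prospect offering outcomes x_i with probabilities p_i is valued as

V = \sum_i \pi(p_i)\, v(x_i),

where v is the value function sketched under Loss Aversion above (concave for gains, convex and steeper for losses) and \pi is a decision-weight function that overweights small probabilities and underweights moderate and large ones; these two departures are what replace expected utility's linear probabilities and reference-independent utilities.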

System 1 and System 2

Kahneman's functional description of two modes of cognition — fast, automatic, effortless System 1 and slow, deliberate, effortful System 2 — whose asymmetric relationship structures every judgment the human mind produces.

The Anchoring Effect

Tversky and Kahneman's 1974 demonstration that estimates start from an initial value and adjust insufficiently — the bias that makes every pre-AI projection of what is possible systematically wrong.

The Availability Heuristic

Tversky and Kahneman's 1973 finding that people judge probability by the ease of recall — the cognitive shortcut that makes the AI discourse a case study in systematic distortion at civilizational scale.

The Endowment Effect

Thaler, Kahneman, and Knetsch's demonstration that possession inflates value — the bias that explains why experienced professionals overvalue the skills AI is automating.

The Fluency Heuristic

The cognitive shortcut by which System 1 treats ease of processing as a proxy for truth, familiarity, and quality — the specific mechanism that makes AI's polished output feel reliable whether or not it is.

The Fluency Trap (Brown Reading)

The cultural habit of treating fluent AI output as competent AI output — an extension of the equation of eloquence with expertise that centuries of human interaction have built.

The Framing Effect

Tversky and Kahneman's demonstration that the presentation of a problem — independent of its underlying facts — determines how it is evaluated. The same AI evidence produces opposite conclusions under "AI as gain" and "AI as loss" frames.

The Heuristics and Biases Program

The research tradition Tversky and Kahneman founded in the 1970s to map the systematic departures of human judgment from rational ideals — the intellectual framework this entire book applies to the AI transition.

The Lazy Monitor

Kahneman's metaphor for System 2's operating posture — a supervisor capable of correcting System 1's errors but disposed, by default, to endorse whatever System 1 has already decided.

The Luddite Response

The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.

The Planning Fallacy

Kahneman and Tversky's 1979 term for the systematic tendency to underestimate the time, costs, and risks of planned actions while overestimating their benefits — now inverted, complicated, and repurposed by AI collaboration.

What You See Is All There Is

Kahneman's acronym for System 1's tendency to construct the best coherent story from available information without flagging what is missing — the bias AI's polished output amplifies beyond historical precedent.

Technology (1)
Large Language Models

Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any individual human's.
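
Behind "trained on internet-scale text" sits a single objective; a minimal statement of the standard autoregressive setup, assumed here rather than specified by the entry:

\mathcal{L}(\theta) = -\sum_t \log p_\theta(x_t \mid x_{<t}),

the model learns to assign high probability to each next token given everything before it, and on the usual account the emergent capabilities the entry names arise from scaling this objective rather than changing it.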

Work (1)
The Orange Pill (book)

Edo Segal's 2026 book on the Claude Code moment — the empirical and narrative ground on which this Whitehead volume builds its philosophical reading.

Person (2)
Amos Tversky

Israeli cognitive psychologist (1937–1996), Kahneman's collaborator of two decades, whose joint work founded the heuristics-and-biases program and produced the empirical foundation Kahneman later carried to the Nobel Prize he would have shared.

Edo Segal

Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.

Event (1)
The Trivandrum Training

The February 2026 training session in which Edo Segal's twenty engineers in Trivandrum crossed the orange pill threshold and emerged as AI-augmented builders producing twenty-fold productivity gains — the founding empirical moment of The Orange Pill.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.