This page lists every Orange Pill Wiki entry hyperlinked from Amos Tversky — On AI. 29 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The cultural aesthetic dominant in AI-mediated production — frictionless and polished, without visible seam or accident — which in Moles's framework reveals itself as an aesthetic of maximal redundancy.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The set of strategies for reducing the influence of cognitive biases on judgment — and Tversky's lesson that none eliminates the biases, but some reduce them enough to matter when the amplifier is on.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The layered, embodied form of knowledge that accumulates in a practitioner through years of focal engagement with her material — too slow to notice day-to-day, too deep to transmit by documentation, and invisible to every metric the device …
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
The structural degradation of the shared evidentiary environment on which democratic deliberation depends — caused by the sequential failures of television, social media, and now generative AI.
Tversky and Kahneman's 1979 finding that losses hurt roughly twice as much as equivalent gains feel good — the asymmetry that explains the expert's resistance to AI tools more powerfully than any rational calculus.
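For readers who want the asymmetry in symbols, a minimal sketch of the prospect-theory value function. The 1979 paper fixes only the shape; the parameter values below are the median estimates from Tversky and Kahneman's 1992 follow-up, quoted as an illustration rather than as this entry's own numbers:

$$
v(x) = \begin{cases} x^{\alpha} & \text{if } x \ge 0 \\ -\lambda(-x)^{\beta} & \text{if } x < 0 \end{cases}
\qquad \alpha = \beta \approx 0.88, \quad \lambda \approx 2.25
$$

With $\lambda \approx 2.25$, a loss of \$100 weighs on the mind roughly as much as a gain of \$225: the "roughly twice" of the entry above.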
Tversky and Kahneman's finding that people assign probabilities to their judgments that systematically exceed their actual accuracy — a calibration failure that AI's smooth output worsens by decoupling surface cues of fluency from underlying reliability.
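One standard way to quantify the failure, offered as an illustration rather than as the entry's own definition: over $n$ judgments, overconfidence is the gap between mean stated confidence and the actual hit rate,

$$
\text{overconfidence} \;=\; \frac{1}{n}\sum_{i=1}^{n} c_i \;-\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\!\left[\text{judgment } i \text{ is correct}\right]
$$

A well-calibrated judge scores near zero; the finding is that most judges score reliably positive.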
The AI-augmented pathology of compulsive engagement with tools that generate real value — the collapse of the passions-interests distinction that the Hirschmanian reading identifies as structural, not personal.
Tversky and Kahneman's 1979 replacement for expected utility theory — a descriptive model of how people actually evaluate uncertain outcomes, with consequences for every prediction about human response to AI.
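A compact statement of the model, again as a sketch rather than this entry's own notation: a prospect paying outcome $x_i$ with probability $p_i$ is valued as

$$
V \;=\; \sum_i \pi(p_i)\, v(x_i)
$$

where $v$ is the loss-averse value function sketched under the loss-aversion entry above and $\pi$ is a probability-weighting function that overweights small probabilities and underweights moderate and large ones. Both departures from expected utility are needed to reproduce the choice patterns the 1979 paper documents.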
The narratively simple framing of AI that fits loss templates, activates loss aversion, and outcompetes more accurate framings in the attention economy — a case study in framing effects at civilizational scale.
Paul Slovic's formalization of the tendency to judge risks and benefits based on emotional reactions — the mechanism that explains why the AI discourse polarizes into positions of irreconcilable feeling masquerading as analysis.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
Tversky and Kahneman's 1974 demonstration that estimates start from an initial value and adjust insufficiently — the bias that makes every pre-AI projection of what is possible systematically wrong.
Tversky and Kahneman's 1973 finding that people judge probability by the ease of recall — the cognitive shortcut that makes the AI discourse a case study in systematic distortion at civilizational scale.
Kahneman, Knetsch, and Thaler's demonstration that possession inflates value — the bias that explains why experienced professionals overvalue the skills AI is automating.
Tversky and Kahneman's demonstration that the presentation of a problem — independent of its underlying facts — determines how it is evaluated. The same AI evidence produces opposite conclusions under "AI as gain" and "AI as loss" frames.
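The canonical numbers, from Tversky and Kahneman's 1981 "Asian disease" study: 600 lives at stake and two programs with identical expected value,

$$
\text{EV}(\text{sure program}) = 200 \text{ saved}, \qquad \text{EV}(\text{gamble}) = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200 \text{ saved}.
$$

Framed as lives saved, most respondents choose the sure program; framed as lives lost (400 die for certain versus a two-thirds chance that all 600 die), most choose the gamble.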
The research tradition Tversky and Kahneman founded in the 1970s to map the systematic departures of human judgment from rational ideals — the intellectual framework this entire book applies to the AI transition.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
Tversky and Kahneman's shortcut by which people judge probability through resemblance to a prototype — the mechanism that makes fluent AI output feel correct before it is verified.
The position, in the AI discourse, of holding contradictory assessments simultaneously — in Tversky's terms, the only cognitively honest response, and the most cognitively costly to maintain.
Kahneman, Sibony, and Sunstein's 2021 extension of the framework from bias to random variability — and the theoretical foundation for understanding one of AI's most underappreciated benefits.
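The book's organizing identity, reconstructed here as a sketch of the argument rather than a quotation: for a set of judgments scored against a true value, overall error decomposes into a systematic and a random component,

$$
\text{MSE} \;=\; \text{Bias}^2 + \text{Noise}^2
$$

Bias is the average error; noise is the variability of error across judges and occasions. The underappreciated benefit the entry points to: a rule applied uniformly may be biased, but it is far less noisy than a crowd of human judges.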
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.
Israeli-American psychologist (1934–2024), Tversky's collaborator of two decades, and the author whose 2011 Thinking, Fast and Slow brought the heuristics-and-biases program to public consciousness.
The 2025–2026 phase transition in which AI-assisted software production costs crossed below the costs of maintaining legacy code, triggering a trillion-dollar repricing of the SaaS industry in months.
The February 2026 training session in which Edo Segal's twenty engineers in Trivandrum crossed the orange pill threshold and emerged as AI-augmented builders producing twenty-fold productivity gains — the founding empirical moment of The Orange…