WIKI COMPANION

Amos Tversky — On AI

A reading-companion catalog of the 30 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Amos Tversky — On AI uses as stepping stones for thinking through the AI revolution.

This page lists every Orange Pill Wiki entry hyperlinked from Amos Tversky — On AI: 30 in total, each a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone. Click any card to open its entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.

Concept (23)
Aesthetics of Smoothness
Concept

The cultural aesthetic dominant in AI-mediated production — frictionless, seamless, without visible seam or accident — which in Moles's framework reveals itself as an aesthetic of maximal redundancy.

Ascending Friction
Concept

The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.

Debiasing
Concept

The set of strategies for reducing the influence of cognitive biases on judgment — and Tversky's lesson that none eliminates the biases, but some reduce them enough to matter when the amplifier is on.

Democratization of Capability (Senian Reading)
Concept

The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?

Flow State
Concept

Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.

Geological Understanding
Concept

The layered, embodied form of knowledge that accumulates in a practitioner through years of focal engagement with her material — too slow to notice day-to-day, too deep to transmit by documentation, and invisible to every metric the device …

Human-AI Collaboration
Concept

The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."

Information Ecosystem Crisis
Concept

The structural degradation of the shared evidentiary environment on which democratic deliberation depends — caused by the sequential failures of television, social media, and now generative AI.

Loss Aversion
Concept

Tversky and Kahneman's 1979 finding that losses hurt roughly twice as much as equivalent gains feel good — the asymmetry that explains the expert's resistance to AI tools more powerfully than any rational calculus.

Overconfidence and the Calibration Problem
Concept

Tversky and Kahneman's finding that people assign probabilities to their judgments that systematically exceed actual accuracy — a calibration failure that AI's smooth output makes worse by decoupling surface cues from underlying accuracy.

Productive Addiction
Concept

The AI-augmented pathology of compulsive engagement with tools that generate real value — the collapse of the passions-interests distinction that the Hirschmanian reading identifies as structural, not personal.

Prospect Theory
Concept

Tversky and Kahneman's 1979 replacement for expected utility theory — a descriptive model of how people actually evaluate uncertain outcomes, with consequences for every prediction about human response to AI.

The "AI as Replacement" Frame
Concept

The narratively simple framing of AI that fits loss templates, activates loss aversion, and outcompetes more accurate framings in the attention economy — a case study in framing effects at civilizational scale.

The Affect Heuristic
Concept

Paul Slovic's formalization of the tendency to judge risks and benefits based on emotional reactions — the mechanism that explains why the AI discourse polarizes into positions of irreconcilable feeling masquerading as analysis.

The Amplifier
Concept

The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.

The Anchoring Effect
Concept

Tversky and Kahneman's 1974 demonstration that estimates start from an initial value and adjust insufficiently — the bias that makes every pre-AI projection of what is possible systematically wrong.

The Availability Heuristic
Concept

Tversky and Kahneman's 1973 finding that people judge probability by the ease of recall — the cognitive shortcut that makes the AI discourse a case study in systematic distortion at civilizational scale.

The Endowment Effect
Concept

Thaler, Kahneman, and Tversky's demonstration that possession inflates value — the bias that explains why experienced professionals overvalue the skills AI is automating.

The Framing Effect
Concept

Tversky and Kahneman's demonstration that the presentation of a problem — independent of its underlying facts — determines how it is evaluated. The same AI evidence produces opposite conclusions under "AI as gain" and "AI as loss" frames.

The Heuristics and Biases Program
Concept

The research tradition Tversky and Kahneman founded in the 1970s to map the systematic departures of human judgment from rational ideals — the intellectual framework this entire book applies to the AI transition.

The Luddite Response
Concept

The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.

The Representativeness Heuristic
Concept

Tversky and Kahneman's shortcut by which people judge probability through resemblance to a prototype — the mechanism that makes fluent AI output feel correct before it is verified.

The Silent Middle (Cognitive Reading)
Concept

The position, in the AI discourse, of holding contradictory assessments simultaneously — in Tversky's terms, the only cognitively honest response, and the most cognitively costly to maintain.

Technology (1)
Large Language Models
Technology

Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…

Work (2)
Noise
Work

Kahneman, Sibony, and Sunstein's 2021 extension of the framework from bias to random variability — and the theoretical foundation for understanding one of AI's most underappreciated benefits.

The Berkeley Study
Work

Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.

Person (2)
Byung-Chul Han
Person

Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.

Daniel Kahneman
Person

Israeli-American psychologist (1934–2024), Tversky's collaborator of two decades, and the author whose 2011 Thinking, Fast and Slow brought the heuristics-and-biases program to public consciousness.

Event (2)
Software Death Cross
Event

The 2025–2026 phase transition in which AI-assisted software production costs crossed below the costs of maintaining legacy code, triggering a trillion-dollar repricing of the SaaS industry in months.

The Trivandrum Training
Event

The February 2026 training session in which Edo Segal's twenty engineers in Trivandrum crossed the orange pill threshold and emerged as AI-augmented builders producing twenty-fold productivity gains — the founding empirical moment of The Orange…

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.