This page lists every Orange Pill Wiki entry hyperlinked from Cass Sunstein — On AI. There are 30 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The regulatory and institutional frameworks adequate to govern a technology that evolves faster than legislative processes and operates across every national boundary simultaneously.
The class of software produced when a developer describes intent in natural language and a language model returns working implementation across the full technology stack — the most powerful abstraction ever built, and the one whose structur…
The empirically documented tendency to trust human judgment over algorithmic judgment even when the algorithm demonstrably outperforms the human — a bias whose welfare cost scales with the stakes of the domain.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
A self-reinforcing process in which a belief becomes widely held not because of evidence but because of its salience — vividness compounds repetition, repetition compounds cognitive availability, availability is mistaken for truth.
The structured environment in which decisions are made — never neutral, always shaping behavior through defaults, friction, salience, and social signals.
An AI-powered system that helps people make better decisions — as judged by their own values — by overcoming informational deficits and cognitive biases that predictably distort human judgment.
The most powerful architectural intervention available to any designer — the setting that governs everyone who does nothing, a group that in every studied domain is the overwhelming majority.
James Fishkin's methodology for producing informed public judgment: randomly selected citizens engage with balanced briefing materials, hear expert testimony from multiple perspectives, and deliberate in facilitated small groups — producing…
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The mandated transparency about a technology's choice architecture — optimization targets, default configurations, engagement mechanisms, data practices — that serves as the informational foundation for every subsequent regulatory intervention.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The empirically robust finding that like-minded people, after deliberating among themselves, converge on a more extreme version of the position they already held — not the group's average.
The position — developed by Sunstein and Thaler — that choice architectures can legitimately steer people toward better decisions while fully preserving the right to opt out.
Any aspect of choice architecture that alters behavior in a predictable way without forbidding any options or significantly changing economic incentives — guidance that preserves freedom.
The analytical distinction between friction that serves no beneficial purpose for the person experiencing it (sludge) and friction that builds understanding, enables reconsideration, or protects the decision-maker (protective friction).
The regulatory design mechanism that requires affirmative renewal rather than passive continuation — built-in expiration forcing periodic reassessment against evidence rather than bureaucratic inertia.
The emerging body of 2023–2025 empirical research documenting measurable degradation of professional capability among practitioners who rely heavily on AI tools, precisely as Ericsson's framework predicts.
Tversky and Kahneman's 1973 finding that people judge probability by the ease of recall — the cognitive shortcut that makes the AI discourse a case study in systematic distortion at civilizational scale.
The population mourning what the AI transition eliminates — senior practitioners whose recognition demand is systematically truncated: their diagnosis acknowledged, their claim to institutional response denied.
Segal's term for the population holding contradictory truths about AI in paralyzed equilibrium — reread through Mouffe's framework as the characteristic subject-position of the post-political condition.
The self-reinforcing mechanism — first theorized by Elisabeth Noelle-Neumann — by which perceived minority views are progressively silenced, producing apparent consensus that masks actual opinion distribution.
The thought collective in the AI discourse whose thought style foregrounds capability expansion and backgrounds cost — producing genuine perception of real features of the transition, and genuine blindness to others.
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Hilary Gridley's January 2026 Substack essay — 'Help! My Husband is Addicted to Claude Code' — that went viral because it functioned less as an essay than as a mirror held up to thousands of households.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
The early 2026 repricing event in which a trillion dollars of market value vanished from SaaS companies — the critical-stage moment when AI's displacement of software's code value became visible to markets.
The 2025–2026 repricing of the software industry — when AI market capitalization overtook SaaS capitalization — which Abbott's framework reveals as a jurisdictional collapse at industry scale rather than merely a market event.
The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill…