This page lists every Orange Pill Wiki entry hyperlinked from Adam Phillips — On AI: 28 entries in total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry. Within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The pattern by which AI tools lower the floor of who can build — enabling production by individuals whose stock consists of an idea, a subscription, and the capacity for sustained attention.
The discovery — which nobody predicted and no one fully explains — that large language models acquire qualitatively new abilities at particular scale thresholds. Reasoning, translation, code generation, in-context learning: none were traine…
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
Phillips's Winnicottian argument that frustration is not an obstacle to creativity but its necessary ground — the not-knowing from which genuine surprise emerges, and which frictionless interfaces systematically eliminate.
Winnicott's term, used by Phillips, for the infant's experience of continuous existence undisrupted by impingement — a frame for understanding when AI tools support the creator's continuity of self and when they disrupt it.
"When a measure becomes a target, it ceases to be a good measure." Marilyn Strathern's paraphrase of Charles Goodhart's 1975 observation from monetary policy, now the operative principle of every specification failure in AI.
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
The gap between what a person can conceive and what they can produce — a distance that has been collapsing since the Neolithic, and that the language model reduced to approximately the length of a conversation.
Phillips's Winnicottian distinction between playing — the non-productive, non-optimizable state from which genuine surprise emerges — and producing — the goal-directed generation of outputs. The machine can produce; it cannot play.
The peculiar pathology of AI-augmented work: compulsive engagement with a tool that is genuinely producing valuable output — a condition for which existing therapeutic vocabularies have no good name.
The discipline of formulating a question such that a capable answering system produces a useful answer. Asimov's Multivac stories prefigured it; prompt engineering operationalizes it.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The Orange Pill's metaphor for the institutional work of redirecting the river of AI capability — not to stop the current but to shape what grows around it.
Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating in positive feedback.
The uncomfortable fact that AI's benefits and costs do not distribute evenly across the population of affected workers — a Smithian question about institutions, not a technical question about tools.
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effe…
Phillips's design principle, drawn from Winnicott's good-enough mother: the ideal AI tool is not the seamless one but the one that preserves enough friction to sustain the user's creative development.
The economic regime that emerges when the cost of execution approaches zero and the premium on deciding what to execute rises correspondingly — the Smithian reading of the Orange Pill moment.
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
The question "what is a human being for?" — which Clarke predicted intelligent machines would force humanity to ask, and which arrived in 2022–2025 with more force and less philosophical preparation than he expected.
The tax every previous computer interface levied on every user — the cognitive overhead of converting human intention into machine-acceptable form. The tax natural language interfaces have abolished.
Phillips's central concept: we are shaped not only by the lives we live but by the parallel lives we don't — a constitutive force that AI, by living parts of the creative life for us, does not eliminate but rearranges.
Segal's organizational form, cited throughout Phillips's book: small groups whose purpose is to decide what should be built rather than to build it — a structure that locates human value in judgment and question-origination rather than exe…
The Orange Pill's reframing of the central AI question: not whether AI is dangerous or wonderful, but whether you are worth amplifying — and at the institutional level, whether we are building the conditions that make worthiness possible f…
British psychoanalyst and essayist (1954–2024), the most influential interpreter of Winnicott and Freud in the English-speaking world, whose concept of the unlived life provides the vocabulary this book uses to read the AI transition.
Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.