This page lists every Orange Pill Wiki entry hyperlinked from Bernard Williams — On AI. 30 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; in each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
Williams's name for the specifically first-personal regret an agent feels for outcomes her agency produced — even when blameless, even when she would act the same again. The emotion the dominant moral traditions cannot accommodate and the…
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass i…
Williams's name for the deep commitments that give a life its character — not preferences, but the projects without which the agent would not recognize her life as her own. The units of identity the AI transition is reorganizing faster than…
Segal's term for the gap between what a person can conceive and what they can produce — which AI has collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exp…
Not honesty or consistency but Williams's structural concept — the relationship between an agent and the commitments that constitute her as a person with a life distinctively her own, whose violation is not moral failure but identity disso…
Williams's thesis that genuine reasons for action must connect with something the agent already cares about — that reasons floating free of the agent's motivational set are not reasons she has failed to recognize but not reasons for her at…
Williams's 1976 thesis that moral assessment depends on factors beyond the agent's control — an uncomfortable fact that threatens every system presupposing a tight connection between desert and choice, and that the AI transition has made u…
Williams's term for the residue of value that survives the justification of an action — the loss that persists after the correct decision has been made and that the correctness does not dissolve. The concept the AI transition generates at…
Williams's devastating phrase for the moralist's demand that agents justify their deepest commitments — the intellectual move that reveals the morality system's inability to accommodate the particular attachments that constitute a human li…
A coherent and complex form of socially established cooperative human activity through which internal goods are realized — the conceptual pivot of MacIntyre's ethics and the unit of analysis for understanding what AI threatens.
The Orange Pill's term for compulsive engagement with generative tools — re-specified by the Skinner volume not as metaphor but as the precise behavioral signature of a continuous reinforcement schedule without an extinction point.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The population mourning what the AI transition eliminates — senior practitioners whose recognition demand is systematically truncated: their diagnosis acknowledged, their claim to institutional response denied.
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effe…
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
Williams's term for the peculiar institution modern moral philosophy has constructed — the demand that every moral question have a determinate answer derived from foundational principles, and the systematic blindness to moral reality the de…
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
The vast population of ambivalent AI users whose compound emotional response to the transition lacks a deep story and is suppressed by feeling rules that permit only enthusiasm or refusal.
The tax every previous computer interface levied on every user — the cognitive overhead of converting human intention into machine-acceptable form. The tax natural language interfaces have abolished.
AI's early enthusiasts — the builders posting productivity metrics, shipping solo products, experiencing genuine creative release. Partly right, structurally blind, and the largest obstacle to the voice the transition needs.
Williams's distinction between thick concepts (courageous, cruel, gracious) that fuse description and evaluation and thin concepts (good, right, wrong) that abstract from particularity — and the argument that moral life is impoverished w…
Williams's 2002 distinction between truth (a property of statements) and truthfulness (the character virtues of accuracy and sincerity) — and the argument that the AI moment produces more truth and makes truthfulness harder, a paradox no …
Dispositions of character cultivated through sustained engagement with practices — not skills, not capabilities, but the settled habits of excellent action that partly constitute a flourishing human life.
British moral philosopher (1929–2003), widely regarded as the most important English-language ethicist of the second half of the twentieth century, whose attacks on systematic moral theory and concepts of agent-regret, moral luck, and ground …
Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.
French philosopher (1925–1995) whose late engagement with Whitehead shaped the contemporary Whitehead renaissance — and whose name, ironically, featured in Segal's clearest example of AI confident-wrongness in The Orange Pill.
The American philosopher (1929–2017) whose What Computers Can't Do (1972) used Heideggerian phenomenology to critique early AI — a critique whose structure remains relevant to contemporary language models.
American cognitive scientist (1927–2016), co-founder of the MIT AI Laboratory, Dartmouth Workshop organizer, and author of The Society of Mind.