This page lists every Orange Pill Wiki entry hyperlinked from Aristotle — On AI. 33 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The condition — diagnosed by Han, anticipated by Camus — in which the subject drives herself to produce without external compulsion, making resistance almost impossible because there is no oppressor to name.
The quality of subjective experience — awareness, there being something it is like to be the subject — and the single deepest unanswered question in both philosophy of mind and AI.
The process of weighing competing considerations toward a judgment that Aristotle identifies as the core activity of practical reason — and the capacity most threatened by AI's instant answers.
Care analyzed as a mode of bodily orientation toward the world — constituted by directedness, vulnerability, and temporal commitment — and the specific capacity that AI systems, lacking a body at stake in outcomes, cannot possess.
Aristotle's name for the form of knowledge that apprehends what is universal and necessary — the domain in which AI systems have achieved, and in many cases surpassed, human competence.
Aristotle's word for human flourishing — activity of the soul in accordance with virtue — and the standard against which the achievement society's confusion of productivity with the good life must be measured.
The highest of Aristotle's three forms of friendship — mutual recognition of and commitment to the good — and the benchmark against which human-AI collaboration must be measured and found categorically different.
Aristotle's term for the repeated performance through which dispositions of character become second nature — the mechanism by which virtues are cultivated and the reason AI-mediated shortcuts undermine them.
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exp…
The Aristotelian reading of the Orange Pill's triad — the Swimmer, the Believer, and the Beaver — as a case study in the doctrine of the mean.
Aristotle's term for intellectual intuition — the faculty that grasps particulars and first principles directly, without demonstration, and whose activity machines do not, and perhaps cannot, replicate.
Aristotle's name for the intellectual virtue that governs action in particular circumstances — the form of knowledge that cannot be computed, because it requires experience, character, and having stakes in the world.
Aristotle's term for the master virtue of situated judgment — the capacity to discern the right action in particular circumstances that cannot be fully specified by rule. The virtue AI most conspicuously lacks.
A coherent and complex form of socially established cooperative human activity through which internal goods are realized — the conceptual pivot of MacIntyre's ethics and the unit of analysis for understanding what AI threatens.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
Aristotle's term for the knowledge of how to make things — craft knowledge, productive reason — and the domain whose collapse to near-zero cost defines the AI revolution.
The Orange Pill's metaphor for the institutional work of redirecting the river of AI capability — not to stop the current but to shape what grows around it.
Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating in positive feedback.
Aristotle's principle that virtue consists in finding the appropriate response to each situation — neither excess nor deficiency — the framework beneath the Orange Pill's triad of the Swimmer, the Believer, and the Beaver.
This book's term for AI systems considered under the aspect of their epistemic capacity — machines that apprehend patterns across vast data with a speed and range no individual human matches.
The claim — central to this book's reading of the Orange Pill — that the collapse of techne's cost reveals a deeper barrier that was always the harder problem: deciding what deserves to be built.
Aristotle's thesis that the human being is by nature a political animal — that flourishing is possible only within a well-governed community, and that the AI transition is therefore first a political problem.
The question "what is a human being for?" — which Clarke predicted intelligent machines would force humanity to ask, and which arrived in 2022–2025 with more force and less philosophical preparation than he expected.
The figure Aristotelian ethics proposes for the AI age: a builder whose techne is guided by phronesis, who asks not only "can this be made?" but "should this exist?"
Aristotle's name for the highest form of human activity — the mind's direct apprehension of truth for its own sake — and the counterweight to the productive imperative that AI has intensified.
Dispositions of character cultivated through sustained engagement with practices — not skills, not capabilities, but the settled habits of excellent action that partly constitute a flourishing human life.
Maslow's reading of The Orange Pill's central question: worthiness is not a moral endowment but the developmental achievement of a person whose signal is shaped by B-values.
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Edo Segal's 2026 book on the Claude Code moment — the empirical and narrative ground on which this Whitehead volume builds its philosophical reading.
Korean-German philosopher (b. 1959) whose diagnoses of the smoothness society and the burnout society anticipated the pathologies of AI-augmented work with unsettling precision.
Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.