This page lists every Orange Pill Wiki entry hyperlinked from David Chalmers — On AI. 25 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.
The question — intensified by Chalmers's framework — of whether AI systems have interests that generate moral obligations, and the practical consequences of uncertainty about the answer.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The quality of subjective experience — the fact of being aware, of there being something it is like to be you — and the single deepest unanswered question in both philosophy of mind and AI.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
Chalmers's extension of the extended mind thesis to phenomenal experience — the contested proposal that some conscious states extend beyond the skull into coupled cognitive systems.
The computational thesis that two systems instantiating the same input-output mapping are cognitively equivalent — the premise that makes the Turing test meaningful and that Chalmers's framework exposes as insufficient for consciousness.
Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exp…
Chalmers's operational distinction between consciousness as inner experience and consciousness as cognitive function — the separation that clarifies which aspects of mind AI systems plausibly share and which remain contested.
The metaphysical position that phenomenal properties — the qualia of experience — are distinct from physical properties though they depend on them. Chalmers's preferred framework for reconciling the reality of experience with the reality of…
The intrinsic qualitative character of conscious experience — the redness of red, the specific felt quality of pain — and the feature of mind whose relation to physical process is the substance of the hard problem.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.
Consciousness as a small flame in an infinite darkness — fragile, improbable, illuminating only a few inches beyond itself, and burning as the founding act of revolt.
The Orange Pill's image for the set of professional and cultural assumptions so familiar they have become invisible — the water one breathes, the glass that shapes what one sees. A modern rendering of Smith's worry about the narrowing effe…
Chalmers's 1995 distinction between the easy problems of cognitive function and the hard problem of why there is subjective experience at all — the conceptual instrument that makes the AI consciousness debate tractable.
Chalmers's 2018 reformulation: the problem of explaining why we believe there is a hard problem — a tractable empirical question whose answer illuminates consciousness whether or not it solves the hard problem itself.
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
The specific behavioral configuration — compulsive AI-augmented engagement experienced as exhilaration from within and pathology from without — produced by a reinforcing loop without a balancing counterpart.
The question "what is a human being for?" — which Clarke predicted intelligent machines would force humanity to ask, and which arrived in 2022–2025 with more force and less philosophical preparation than he expected.
Segal's term for the population holding contradictory truths about AI in paralyzed equilibrium — reread by Mouffe's framework as the characteristic subject-position of the post-political condition.
Chalmers's thought experiment — a being functionally identical to a conscious person but lacking any inner experience — and the argumentative engine that drives the case for the irreducibility of phenomenal consciousness.