This page lists every Orange Pill Wiki entry hyperlinked from George Lakoff — On AI. 27 entries total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.
The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
The conceptual frame that positions AI as a partner contributing what the human cannot produce alone — generating questions about what emerges from the joint process and how the partnership reshapes both participants.
The conceptual frame that treats AI systems as agents with goals, understanding, and potential consciousness — generating the questions that dominate existential-risk and alignment discourse.
The dominant public metaphor for artificial intelligence, generating a specific set of entailments — passivity, control, instrumentality — that determine which questions the discourse can ask and which it cannot.
AGI: a hypothetical system with human-level cognitive ability across essentially every domain. The transition point that AI-safety thinking orients around, even when no one agrees on what it is.
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
Lakoff and Johnson's 1980 thesis that abstract thought is systematically structured by mappings from concrete, bodily experience — the discovery that metaphor is not decoration but constitution.
The quality of subjective experience — being aware, being something it is like to be — and the single deepest unanswered question in both philosophy of mind and AI.
The emerging third frame that positions AI as a capacity to be cultivated rather than accelerated or constrained — directing capability expansion toward human flourishing through deliberate institutional design.
The interdisciplinary thesis — central to Lakoff's framework — that cognition is not separable from the body, and that abstract thought is implemented in neural circuits originally evolved for sensorimotor interaction.
Aristotle's word for human flourishing — activity of the soul in accordance with virtue — and the standard against which the achievement society's confusion of productivity with the good life must be measured.
The cognitive process by which conceptual metaphors structure reasoning before reasoning begins — determining which questions are askable, which evidence counts, and which conclusions feel natural.
The alternative frame to HUMANS ARE ARTIFACTS — reconceiving human value from function to capacity, from utility to flourishing, from competition with machines to cultivation of what machines do not possess.
The pre-conceptual patterns of recurrent bodily experience — containment, balance, path, force — from which all abstract reasoning is constructed.
The hidden conceptual metaphor that treats intelligence as a commodity existing in quantities, coming in grades, manufacturable and measurable — the foundation on which the entire AGI discourse is built.
The master frame that positions AI as the latest chapter in a history of technological advancement — generating policy positions in favor of acceleration, light regulation, and market-driven adoption.
The master frame that positions AI as an intrusion into domains previously reserved for human agency — generating policy positions in favor of precautionary regulation, preservation of human capacities, and cultural resistance to acceleration.
Lakoff's model of two competing moral worldviews — each structured by a metaphorical family — that generate coherent political positions across dozens of issues, with each making the other's positions seem incomprehensible.
The embodied evaluative capacity through which humans detect that AI output is off before they can articulate what is wrong — a pre-reflective somatic assessment available only to creatures with bodies.
The conceptual metaphor that maps comprehension onto manual manipulation — generating the common but contested claim that AI systems do not "truly" understand what they process.
Lakoff and Srini Narayanan's 2025 book arguing that the neural implementation of embodied cognition establishes a categorical difference between biological minds and deep-learning AI — a claim Lakoff summarized bluntly: it "kills" the possi…
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
Indian-American cognitive scientist and senior research director at Google DeepMind whose collaboration with Lakoff on The Neural Mind (2025) brought embodied-cognition analysis directly inside the AI industry.