This page lists every Orange Pill Wiki entry hyperlinked from Emmanuel Levinas — On AI, 32 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open its entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Byung-Chul Han's diagnosis — extended through Dissanayake's biological framework — of the cultural dominance of frictionless surfaces and the specific reason the smooth feels biologically wrong.
The regulatory, institutional, and normative arrangements governing AI development and deployment — reframed through Ostrom's framework as a polycentric governance challenge requiring coordination across multiple scales rather than the mark…
The mode in which technology presents itself as a quasi-other — something with enough apparent autonomy and responsiveness to be addressed rather than used. Notation: Human → Technology–(World).
The study of how AI-saturated environments shape the minds that live inside them — the framework for asking what becomes of judgment, curiosity, and the capacity for sustained attention when answers become abundant and friction is engineered away.
The quality of subjective experience — being aware, being something it is like to be — and the single deepest unanswered question in both philosophy of mind and AI.
Levinas's foundational reversal—that ethics precedes ontology—against the Western tradition's twenty-five-century commitment to asking "What is?" before asking "What do I owe?"
Mihaly Csikszentmihalyi's name for the condition of optimal human engagement — and, in Wiener's framework, the subjective signature of a well-regulated negative feedback system.
The specific AI failure mode in which the output is eloquent, well-structured, and confidently wrong — the category of error whose detection requires domain expertise precisely at the moment when the tool's speed tempts builders to bypass it.
The architectonic distinction of Levinas's philosophy: totality is the closed system that claims to encompass everything; infinity is the excess the system cannot contain—and the large language model is the most sophisticated totality ever…
The specific behavioral signature of AI-augmented work: compulsive engagement that the organism experiences as voluntary choice, with an output the culture cannot classify as problematic because it is productive.
The Levinasian reading of Segal's distinction: a prompt operates within totality, directing the system toward a known output; a question exposes the self to infinity, opening space for what exceeds the self's categories.
Levinas's counterintuitive claim that ethical responsibility is asymmetric—the Other's obligation to me is not my concern, my responsibility to the Other is unconditional, and no contract discharges the ethical remainder.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
The most radical concept in Levinas's vocabulary: the self does not merely respond to the Other's need but takes the place of the Other—bearing the Other's burden as the deepest structure of ethical subjectivity.
The device that increases the magnitude of whatever passes through it without evaluating the content — Wiener's framework for understanding AI as a tool that carries human signal, or human noise, with equal power and no judgment.
The predictable sequence — denial, anger, bargaining, depression, acceptance — through which mid-career professionals process the displacement of their expertise, and which cannot be abbreviated without producing pathological residue.
The population mourning what the AI transition eliminates — senior practitioners whose recognition demand is systematically truncated: their diagnosis acknowledged, their claim to institutional response denied.
Levinas's structural distinction between the face (which commands through vulnerability) and the interface (which accommodates through design)—the diagnostic that reveals what AI systems cannot provide regardless of their capability.
Levinas's name for the vulnerable, irreducible presence of another human being that issues an ethical commandment before any act of knowledge—the phenomenon that AI's interface, by structural design, cannot present.
The question "what is a human being for?" — which Clarke predicted intelligent machines would force humanity to ask, and which arrived in 2022–2025 with more force and less philosophical preparation than he expected.
Levinas's technical distinction between le Dire (the ethical act of exposure through address) and le Dit (the propositional content communicated)—and the diagnosis that AI generates the Said without the Saying.
The vast majority experiencing the full emotional complexity of the AI transition without a clean narrative to organize it — most accurate in perception, least audible in discourse.
Levinas's name for the other Other—the face that stands beside the face I am already addressing—and the philosophical device that transforms pure ethics into the demand for justice when infinite demands compete.
Levinas's diagnosis of the Western tradition's deepest commitment—the gaze that comprehends what it illuminates, converts the foreign into the familiar, and makes the infinite finite—now automated at civilizational scale by AI.
Levinas's term for the ethical residue of encounter—the mark left by a consciousness that bore responsibility—and the Levinasian framework's answer to the question who is writing this book? in the age of AI collaboration.
Maslow's reading of The Orange Pill's central question: worthiness is not a moral endowment but the developmental achievement of a person whose signal is shaped by B-values.
Korean-German philosopher (b. 1959) whose diagnoses of smoothness, transparency, and achievement society provide the critical idiom within which Groys's AI analysis operates — and against which Groys's emphasis on institutional frame offers…
Builder, entrepreneur, and author of The Orange Pill — whose human-AI collaboration with Claude, described in that book and extended in this volume, provides the empirical ground for the Whiteheadian reading.
The diagnostic specimen Edo Segal caught while writing The Orange Pill—Claude's fluent but philosophically incorrect passage on Deleuze's "smooth space"—now read through Levinas as the paradigm case of totality producing plausibility that…
The February 2026 training session in which Edo Segal's twenty engineers in Trivandrum crossed the orange pill threshold and emerged as AI-augmented builders producing twenty-fold productivity gains — the founding empirical moment of The Orange Pill.