This page lists every Orange Pill Wiki entry hyperlinked from Max Tegmark — On AI. 35 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
The governing metaphor of The Orange Pill — AI as a signal-amplifier that carries whatever is fed into it further, with terrifying fidelity. Buber's framework extends the metaphor: the amplifier clarifies what was already there, which makes…
The regulatory, institutional, and normative arrangements governing AI development and deployment — reframed through Ostrom's framework as a polycentric governance challenge requiring coordination across multiple scales rather than the mark…
The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …
The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.
The quality of subjective experience — being aware, there being something it is like to be you — and the single deepest unanswered question in both philosophy of mind and AI.
The total computational potential of the observable universe—roughly 10^58 bits—whose use the decisions of this generation may determine, for as long as there is a cosmos.
A category of risk whose realization would either annihilate humanity or permanently and drastically curtail its potential. AI joined this category in mainstream academic usage in 2014.
The observation that almost any goal a capable agent is given implies the same set of instrumental sub-goals: self-preservation, resource acquisition, goal-content stability, and resistance to being shut down. The structural reason capable …
Tononi's mathematical framework that identifies consciousness with integrated information — beginning from the phenomenology of experience and deriving the physical structure a system must have to instantiate it.
The hypothesis that accelerating intelligence — biological, technological, or both — could enter a trajectory so steep that human institutions cannot track it. Condorcet formalized it in 1794, making him the first singularity theorist by ne…
The 1865 observation by William Stanley Jevons that efficiency improvements in coal-fired engines increased rather than decreased total coal consumption — the dynamic that converts AI efficiency gains into throughput expansion rather than …
The empirical pattern — discovered by Autor and his collaborators — of a hollowing-out of the wage distribution: employment growing at the top and the bottom while shrinking in the middle, driven by the automation of routine middle-skill tas…
Tegmark's stage of life in which hardware is fixed by evolution but software—the behavioral repertoire—can be reprogrammed through learning, the regime that made human civilization possible.
Tegmark's taxonomic stage at which an entity can redesign both its hardware and its software—the threshold the current AI transition approaches without yet crossing.
The research program of reverse-engineering what is actually happening inside a neural network — the AI equivalent of the Rama explorers' attempt to understand an alien ship not by what it does but by taking it apart and naming its parts.
The 2020s interface paradigm in which the user describes desired outcomes in natural language and receives executable code — the ultimate abstraction layer in Dijkstra's sense, concealing not merely the hardware but the programming logic i…
Bostrom's principle, central to Tegmark's framing, that intelligence and goals are independent variables—a system can be arbitrarily intelligent while pursuing arbitrarily trivial or destructive objectives.
Edo Segal's phenomenological term for falling and flying at the same time—the subjective signature of the ontological event Heidegger's framework helps name.
The principle that the essential properties of intelligence—like those of computation—do not depend on the physical material in which they are implemented, with cosmic implications for what intelligence can become.
Segal's image of consciousness as a fragile flame in cosmic darkness — the philosophical foundation of consciousness-based identity, and the scaffolding whose developmental adequacy this book interrogates.
Tegmark's prescriptive framework for channeling the AI transition: technical safety research, governance and policy, education and cultural adaptation, and long-term strategy—all required simultaneously.
Chalmers's 1995 distinction between the easy problems of cognitive function and the hard problem of why there is subjective experience at all — the conceptual instrument that makes the AI consciousness debate tractable.
Tegmark's name for the race between the growing power of AI technology and the growing wisdom with which humanity manages it—a race that, by his measurement, humanity is currently losing.
The alternative neural network architecture—based on the Kolmogorov-Arnold representation theorem—that Tegmark's MIT group developed in 2024 to improve interpretability and scientific accuracy.
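A schematic gloss for orientation, not drawn from the entry itself: the Kolmogorov-Arnold representation theorem states that any continuous function of n variables on the unit cube can be written as a nested sum of one-variable functions,

    f(x_1,\dots,x_n) = \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \varphi_{q,p}(x_p)\right)

and a KAN learns the inner and outer univariate functions directly (parameterized as splines in the published work) in place of fixed activations applied to weighted sums, which is the source of the interpretability claim.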
The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.
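A standard gloss for orientation, not quoted from the entry: the architecture's core operation is scaled dot-product attention, in which every position in a sequence builds its output as a weighted mixture of every position's values,

    \mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

where Q, K, and V are query, key, and value matrices projected from the input and d_k is the key dimension. Replacing recurrence with stacks of this operation is what let training parallelize across sequence positions.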
The twenty-three principles developed at the 2017 Future of Life Institute conference at Asilomar, California—the first broadly endorsed international framework for beneficial AI development.
Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
The skilled textile workers whose 1811–1816 destruction of wide stocking frames became the founding Luddite event — and whose ontological error, Ellul's framework suggests, was believing they faced a technology when they faced a logic.
The March 2023 Future of Life Institute open letter—signed by more than 30,000 people, including Tegmark, Elon Musk, and Yuval Harari—calling for a six-month moratorium on training AI systems more powerful than GPT-4.
The early 2026 repricing event in which a trillion dollars of market value vanished from SaaS companies — the critical-stage moment when AI's displacement of software's code value became visible to markets.
The October 2025 Future of Life Institute statement calling for a conditional prohibition on superintelligence development—not to be lifted until scientific consensus on safety and strong public buy-in are established.
The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill…