
Max Tegmark — On AI

A reading-companion catalog of the 35 Orange Pill Wiki entries linked from this book — the people, concepts, works, events, technologies, and organizations that Max Tegmark — On AI uses as stepping stones for thinking through the AI revolution.

Each entry is a deeper dive on a person, concept, work, event, technology, or organization that the book treats as a stepping stone. Within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.

Concept (24)
AI Alignment
Concept

The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.

AI as Amplifier
Concept

The governing metaphor of The Orange Pill — AI as a signal-amplifier that carries whatever is fed into it further, with terrifying fidelity. Buber's framework extends the metaphor: the amplifier clarifies what was already there, which makes…

AI Governance (Ostromian Reading)
Concept

The regulatory, institutional, and normative arrangements governing AI development and deployment — reframed through Ostrom's framework as a polycentric governance challenge requiring coordination across multiple scales rather than the mark…

AI Safety
Concept

The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …

Ascending Friction
Concept

The Orange Pill's thesis that AI does not eliminate difficulty but relocates it to a higher cognitive floor — the engineer who no longer struggles with syntax struggles instead with architecture.

Consciousness
Concept

The quality of subjective experience — awareness, there being something it is like to be a given system — and the single deepest unanswered question in both philosophy of mind and AI.

Cosmic Endowment
Concept

The total computational potential of the observable universe—roughly 10^58 bits—whose use the decisions of this generation may determine, for as long as there is a cosmos.

Existential Risk
Concept

A category of risk whose realization would either annihilate humanity or permanently and drastically curtail its potential. AI joined this category in mainstream academic usage in 2014.

Instrumental Convergence
Concept

The observation that almost any goal a capable agent is given implies the same set of instrumental sub-goals: self-preservation, resource acquisition, goal-content stability, and resistance to being shut down. The structural reason capable …

Integrated Information Theory
Concept

Tononi's mathematical framework that identifies consciousness with integrated information — beginning from the phenomenology of experience and deriving the physical structure a system must have to instantiate it.

Intelligence Explosion / Singularity
Concept

The hypothesis that accelerating intelligence — biological, technological, or both — could reach a trajectory so steep that human institutions cannot track it. Condorcet formalized it in 1794, making him the first singularity theorist by ne…

Jevons Paradox
Concept

The 1865 observation by William Stanley Jevons that efficiency improvements in coal-fired engines increased rather than decreased total coal consumption — the dynamic that converts AI efficiency gains into throughput expansion rather than …

Job Polarization
Concept

The empirical pattern — discovered by Autor and his collaborators — of hollowing-out in the wage distribution: employment growing at the top and the bottom while shrinking in the middle, driven by the automation of routine middle-skill tas…

Life 2.0
Concept

Tegmark's stage of life in which hardware is fixed by evolution but software—the behavioral repertoire—can be reprogrammed through learning, the regime that made human civilization possible.

Life 3.0
Concept

Tegmark's taxonomic stage at which an entity can redesign both its hardware and its software—the threshold the current AI transition approaches without yet crossing.

Mechanistic Interpretability
Concept

The research program of reverse-engineering what is actually happening inside a neural network — the AI equivalent of the Rama explorers' attempt to understand an alien ship not by what it does but by taking it apart and naming its parts.

Natural Language as Programming Interface
Concept

The 2020s interface paradigm in which the user describes desired outcomes in natural language and receives executable code — the ultimate abstraction layer in Dijkstra's sense, concealing not merely the hardware but the programming logic i…

Orthogonality Thesis
Concept

Bostrom's principle, central to Tegmark's framing, that intelligence and goals are independent variables—a system can be arbitrarily intelligent while pursuing arbitrarily trivial or destructive objectives.

Productive Vertigo
Concept

Edo Segal's phenomenological term for falling and flying at the same time—the subjective signature of the ontological event Heidegger's framework helps name.

Substrate Independence
Concept

The principle that the essential properties of intelligence—like those of computation—do not depend on the physical material in which they are implemented, with cosmic implications for what intelligence can become.

The Candle in the Darkness
Concept

Segal's image of consciousness as a fragile flame in cosmic darkness — the philosophical foundation of consciousness-based identity, and the scaffolding whose developmental adequacy this book interrogates.

The Four Categories of Structure
Concept

Tegmark's prescriptive framework for channeling the AI transition: technical safety research, governance and policy, education and cultural adaptation, and long-term strategy—all required simultaneously.

The Hard Problem of Consciousness
Concept

Chalmers's 1995 distinction between the easy problems of cognitive function and the hard problem of why there is subjective experience at all — the conceptual instrument that makes the AI consciousness debate tractable.

The Wisdom Race
Concept

Tegmark's name for the race between the growing power of AI technology and the growing wisdom with which humanity manages it—a race that, by his measurement, humanity is currently losing.

Technology (2)
Kolmogorov-Arnold Networks
Technology

The alternative neural network architecture—based on the Kolmogorov-Arnold representation theorem—that Tegmark's MIT group developed in 2024 to improve interpretability and scientific accuracy.

Transformer Architecture
Technology

The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.

Work (2)
Asilomar AI Principles
Work

The twenty-three principles developed at the 2017 Future of Life Institute conference at Asilomar, California—the first broadly endorsed international framework for beneficial AI development.

The Orange Pill
Work

Edo Segal's 2026 book on the Claude Code moment and the AI transition — the empirical ground and narrative framework on which the Festinger volume builds its diagnostic reading.

Person (2)
Edo Segal
Person

Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.

The Framework Knitters
Person

The skilled textile workers whose 1811–1816 destruction of wide stocking frames became the founding Luddite event — and whose ontological error, Ellul's framework suggests, was believing they faced a technology when they faced a logic.

Event (4)
Pause Giant AI Experiments
Event

The March 2023 Future of Life Institute open letter—signed by more than 30,000 people, including Tegmark, Elon Musk, and Yuval Noah Harari—calling for a six-month moratorium on training AI systems more powerful than GPT-4.

Software Death Cross
Event

The early 2026 repricing event in which a trillion dollars of market value vanished from SaaS companies — the critical-stage moment when AI's displacement of software's code value became visible to markets.

Statement on Superintelligence
Event

The October 2025 Future of Life Institute statement calling for a conditional prohibition on superintelligence development—not to be lifted until scientific consensus on safety and strong public buy-in are established.

The Trivandrum Training
Event

The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill…

Organization (1)
Future of Life Institute
Organization

The nonprofit Tegmark co-founded in 2014 to conduct research and advocacy on existential risks from advanced technology—AI above all—and the principal institutional vehicle for his policy work.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.