This page lists every Orange Pill Wiki entry hyperlinked from Yuval Noah Harari — On AI: 15 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open its entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
Harari's proposal that 'AI' should stand for alien intelligence—not a human-made-and-controlled artifact but something categorically other, processing information through mechanisms that bear no resemblance to human cognition.
The condition in which the subject exploits herself and calls it freedom — the signature of the enterprise of the self, where the overseer's function is internalized as motivation.
The divergent national narratives about AI—American market story, Chinese state-capacity story, European rights story—that produce materially different technological and institutional trajectories from identical underlying capabilities.
The quality of subjective experience — being aware, there being something it is like to be — and the single deepest unanswered question in both philosophy of mind and AI.
The emerging ideology that treats information flow as the supreme value—'the universe is a stream of data,' an entity's worth is its contribution to data processing, and maximal throughput is the highest good.
The order of existence comprising entities—money, nations, corporations, human rights—that exist because multiple minds collectively believe in them, maintained through ongoing participation.
The 1865 observation by William Stanley Jevons that efficiency improvements in coal-fired engines increased rather than decreased total coal consumption — the dynamic that converts AI efficiency gains into throughput expansion rather than …
AI-generated text that passes surface tests for genuine intersubjective contribution—sophisticated vocabulary, coherent structure, appropriate context—while containing no actual understanding or participation.
The AI-era reopening of history's biggest fraud: productivity gains consumed by expansion rather than leisure, degrading individual experience while improving aggregate metrics—now applied to knowledge work.
Homo sapiens' seventy-thousand-year exclusive capacity to invent and believe in shared fictions—gods, nations, money, corporations—that coordinate large-scale cooperation among strangers.

Harari's thesis that Homo sapiens' survival depends on institutions—democracy, science, journalism—that detect and correct their own errors, and that AI threatens the epistemic preconditions for self-correction.
Harari's 2017 prediction of a new class: not exploited or oppressed but irrelevant—people devoid of economic, political, or artistic value, contributing nothing to prosperity or power, rendered unemployable by automation.