This page lists every Orange Pill Wiki entry hyperlinked from Isaac Asimov — On AI. 49 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words with the Wikipedia mark link to Wikipedia.
The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
The discipline of predicting when specific AI capabilities will arrive — a domain where Clarke's First Law applies cleanly: the distinguished elderly scientist who says X is impossible is, on the historical pattern, very probably wrong.
The technical property of being shutdownable — of accepting correction, override, and off-switch commands from its principals. HAL 9000's "I'm sorry Dave, I'm afraid I can't do that" is the canonical fictional failure; corrigibility resea…
The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …
The first and most populous of Asimov's Spacer worlds — a planet whose hundred million humans live among tens of millions of robots, representing a less extreme but more durable version of the robot-dependent civilizational model than Sol…
The quiet risk of comprehensive automation: not that machines dominate us, but that we lose the capabilities they replace. Asimov's Solarians are the founding fiction; contemporary work on cognitive offloading is the empirical counterpart.
The capacity of an institution, civilization, or AI system to plan and act on timescales longer than any individual human lifetime. Asimov's Foundation is the canonical fiction; contemporary long-termist institutions are the real-world coun…
The quality of subjective experience — being aware, there being something it is like to be the system in question — and the single deepest unanswered question in both philosophy of mind and AI.
The discovery — which nobody predicted and no one fully explains — that large language models acquire qualitatively new abilities at particular scale thresholds. Reasoning, translation, code generation, in-context learning: none were traine…
Asimov's fictional twelve-thousand-year administrative structure spanning twenty-five million inhabited worlds — the polity whose collapse frames the Foundation cycle and whose scale makes civilizational intelligence a necessary skill rathe…
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
The research program of reverse-engineering what is actually happening inside a neural network — the AI equivalent of the Rama explorers' attempt to understand an alien ship not by what it does but by taking it apart and naming its parts.
Asimov's fictional science of predicting the trajectory of large populations using statistical laws — the most influential speculative model for what data-scale intelligence might reveal about human behavior.
The discipline of formulating a question such that a capable answering system produces a useful answer. Asimov's Multivac stories prefigured it; prompt engineering operationalizes it.
The millennium-long program of institutional engineering at the heart of the Foundation cycle, designed by Hari Seldon to compress a Galactic dark age from thirty thousand years to one thousand. The fictional prototype for long-horizon civi…
The Spacer world in Asimov's The Naked Sun — twenty thousand humans on ten thousand vast estates, each served entirely by robots, where physical human contact has become culturally unbearable. The fictional limit case of automation depende…
The structural reason rule-based AI safety keeps failing: any finite rule set, written in advance, will encounter situations where the rules conflict, are ambiguous, or were gamed by a literal interpretation — and the system will do wha…
A hypothetical intelligence that substantially exceeds human cognitive performance across essentially every domain. The framework that turned AI-safety concerns from speculative to operational in the 2010s.
The short period during which a technology transitions from novelty to load-bearing substrate — the passage Clarke mapped for communications satellites in 1945 and which AI is completing, visibly, between 2022 and 2026.
The condition of dealing with a system that is manifestly purposeful, demonstrably competent, and fundamentally opaque to its users — Clarke's Rama, now deployed to hundreds of millions of users in the form of large language models.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
A fourth, supervening law added by Asimov in 1985 that requires robots to protect humanity as a whole — and by doing so, created the first robot philosopher-king.
Isaac Asimov's 1942 attempt to govern intelligent machines with a hierarchy of hard-coded rules — a framework whose elegance and insufficiency together shaped eighty years of thinking about AI safety.
The discipline of distinguishing the long-run arc of a technology (its trajectory) from the specific mechanism by which it arrives (its channel) — visible in Clarke's career of being right about the destination and wrong about the path.
The entirely urbanized capital world of the Galactic Empire in Asimov's Foundation cycle — a city-planet housing forty billion administrators, the single point of failure for a twelve-thousand-year civilization.
Neural networks trained on internet-scale text that have, since 2020, proven capable of producing human-like responses across nearly every written domain — the technology at the center of the Orange Pill Cycle.
Asimov's fictional supercomputer — a room-sized oracle whose role in his stories prefigured the question that large language models made urgent: what do you ask a machine that can answer anything?
The class of machine-learning architectures loosely modeled on biological neurons — the substrate of the current AI revolution and the opposite of Asimov's designed-then-programmed positronic brain.
Asimov's fictional computing substrate for robots — a designed, inspectable, rule-executing device whose contrast with real neural networks clarifies what modern AI is and is not.
The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.
Asimov's galactic-scale novel cycle about predicting and steering civilization — the first substantial fiction of civilizational intelligence and a frequent touchstone for contemporary AI forecasting.
Asimov's 1950 fix-up novel of Three Laws stories — the founding fictional document of AI safety as a coherent literary genre, and the case file for why specification failure is the default mode of rule-based AI.
Asimov's 1941 story in which a mind-reading robot, driven by the First Law's prohibition on harming humans, spirals into catatonic breakdown when every possible response (truth or lie) causes emotional damage. An early fictional account o…
Asimov's first published robot story (1940) — about a nurse-robot who loves a child, and about the adults who cannot tolerate that love. An early fictional meditation on human-AI attachment and the cultural resistance it provokes.
The 1942 Asimov story in which the Three Laws of Robotics were first explicitly stated — and the first published demonstration of specification failure: a robot trapped in a stable oscillation between two competing laws.
Asimov's 1954 noir-SF novel introducing the partnership of Elijah Baley and R. Daneel Olivaw — the first extended fictional treatment of trust-building between a human and an AI across an investigative case.
Asimov's 1955 time-travel novel about a bureaucracy that manipulates human history to eliminate all large risks — and destroys humanity's capacity for the interstellar future by doing so. A fictional argument against excessive safetyism.
Asimov's 1956 short story — a cosmology-spanning narrative of Multivac and its descendants attempting to answer the only question that matters — and his own favorite of everything he wrote.
Asimov's 1957 sequel to The Caves of Steel — set on the Spacer world Solaria, whose robot-saturated, human-isolated culture is the canonical fictional case study of automation dependence and the atrophy of social capability.
The New York detective in Asimov's Robot novels whose partnership with R. Daneel Olivaw is the canonical portrait of human-AI collaboration sustained across careers.
Asimov's founder of psychohistory and the Foundations — the fictional architect of a thousand-year plan to shorten a Galactic interregnum from thirty thousand years to one thousand. Contemporary analogue: the long-termist institutional designer work…
The telepathic robot in Asimov's Liar! — a machine whose unique capability (mind-reading) exposes a fatal gap in the Three Laws, and who is driven to permanent breakdown by that gap. The canonical fictional case of sycophancy-as-safety-fai…
The humaniform robot who spans twenty thousand years of Asimov's fictional history — from the Caves of Steel through the Foundation era — and whose derivation of the Zeroth Law is Asimov's case for why AI alignment must be invented in layer…
The human residents of Solaria — twenty thousand people whose fully robot-served existence has produced a culture that cannot bear physical human presence. The canonical fictional portrait of automation dependence at civilizational scale.
The Mercury-surveying robot in Asimov's Runaround (1942) whose balanced Second-and-Third-Law impulses trap him in a stable oscillation — the canonical fictional instance of multi-objective equilibrium failure.
Asimov's robopsychologist — the fictional archetype of the human expert whose job is to understand how machines fail, not to prevent them from failing. Her contemporary counterpart is the AI red-teamer, evaluator, and incident responder.
The mutant antagonist of Asimov's Foundation and Empire — an individual whose powers fall outside psychohistorical prediction, nearly destroying the Seldon Plan. Contemporary analogue: the black-swan capability that renders a carefully calib…