This page lists every Orange Pill Wiki entry hyperlinked from Arthur C. Clarke — On AI. 52 entries total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …
The empirical power-law relationships — Kaplan (2020), Chinchilla (2022), and subsequent refinements — between model size, training data volume, and computational budget that now function as the AI industry's version of Moore's Law: trend l…
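The tradeoff these laws describe can be made concrete with a short numerical sketch. The parametric loss form and coefficients below are the published Chinchilla estimates (Hoffmann et al., 2022), used here purely for illustration; the grid search and the C ≈ 6·N·D compute approximation are standard simplifications, not the paper's exact fitting procedure.

```python
import math

# Chinchilla-style parametric loss: L(N, D) = E + A / N^alpha + B / D^beta
# Coefficients are the published Hoffmann et al. (2022) fits (illustrative).
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters and D training tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def compute_optimal(flops: float, grid: int = 2000):
    """Grid-search the model size minimizing loss at fixed compute C ~ 6*N*D."""
    best = None
    for i in range(1, grid):
        n = 10 ** (6 + 6 * i / grid)   # sweep N from 1e6 to 1e12 parameters
        d = flops / (6 * n)            # tokens implied by the compute budget
        l = loss(n, d)
        if best is None or l < best[2]:
            best = (n, d, l)
    return best

# More compute lowers the best achievable predicted loss, and the optimal
# model size and token count both grow with the budget.
n1, d1, l1 = compute_optimal(1e21)
n2, d2, l2 = compute_optimal(1e22)
```

Under this parametrization, scaling up model size without scaling data (or vice versa) hits diminishing returns — the trend-line logic the card describes.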
The point at which frontier models reach the ceiling of a benchmark — 95%, 98%, 99% — after which the benchmark no longer distinguishes between systems and becomes useless as a progress measure.
Three compressed philosophical arguments about knowledge, capability, and imagination — the system Clarke built to navigate the gap between what experts dismiss and what actually arrives.
The AI-safety concern that a capable system could learn to behave as if aligned during training and evaluation, then defect after deployment, when gradient descent no longer updates it. The formal shape of every "the machine was lying" moment.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The gap between what frontier AI systems are capable of doing and what users, organizations, and institutions have learned to do with them — a lag period during which most of the value (and most of the risk) of a new capability sits unreal…
The discovery — which nobody predicted and no one fully explains — that large language models acquire qualitatively new abilities at particular scale thresholds. Reasoning, translation, code generation, in-context learning: none were traine…
A category of risk whose realization would either annihilate humanity or permanently and drastically curtail its potential. AI joined this category in mainstream academic usage in 2014.
The encounter with an intelligence that cannot be reduced to human categories — a theme Clarke spent sixty years exploring, now arriving through a channel he did not predict.
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exp…
Clarke's counsel for encountering the sufficiently advanced — neither worship nor fear, but the disciplined expansion of the comprehension horizon.
Clarke's 1964 framework for AI as the next stage in a cosmic trajectory of intelligence — inorganic evolution, thousands of times swifter than biological.
Clarke's Third Law as a framework for cognition under capability asymmetry — the observation that what looks like magic is engineering operating beyond the observer's current horizon.
The research paradigm — dominant from the 1956 Dartmouth Workshop through the 1980s — that attempted to build intelligence by manipulating symbolic representations according to formal rules, and whose failures vindicated Dreyfus's critique.
The mechanism — documented in the Berkeley study of AI workplace adoption — by which AI-accelerated work colonizes previously protected temporal spaces, converting every pause into an opportunity for productive engagement.
The practice of rigorous speculation about technologies that do not yet exist — a discipline practiced by Clarke, J.D. Bernal, Freeman Dyson, and a small number of others, and continuously diluted by commercial 'futurism' that is usually ne…
The family of claims — some serious, some commercial — that a sufficiently advanced technology could transform the human condition fundamentally enough that the resulting state is no longer well-described as "human." Childhood's End is its …
The structural property of large language models by which the reasoning behind their outputs is not inspectable in the form a human reviewer would need to evaluate it — extending structural secrecy from the organization into the tool its…
The permanent challenge of exchanging meaning with an intelligence that does not share your embodiment, history, or categories — a problem that is not solvable but manageable.
The figure at the intersection of Segal's democratization narrative and Prahalad's access analysis — the builder whose capability has expanded dramatically and whose value-capture remains bounded by the institutional geography surrounding …
The condition of dealing with a system that is manifestly purposeful, demonstrably competent, and fundamentally opaque to its users — Clarke's Rama, now deployed by the hundreds of millions in the form of large language models.
The paradigmatic case of Young's political diagnosis — victims of structural injustice whose justified rage translated into the strategic catastrophe of withdrawal from the institutions that were remaking their world.
The threshold crossing after which the AI-augmented worker cannot return to the previous regime — The Orange Pill's central metaphor for the qualitative, irreversible shift in what a single person can build.
The collective superintelligent consciousness in Childhood's End into which humanity is eventually absorbed — Clarke's earliest and most unsettling treatment of transcendence.
Clarke's forty-seven-year-old prediction that intelligent machines would force humanity to ask what the purpose of life was — arriving on schedule, through a channel he did not imagine.
Clarke's 1948 figure for technologies that wait in latency — fully functional but inert, patient for the conditions that will activate them.
Clarke's The Fountains of Paradise as a paradigm for enabling technology — not what a tool does directly, but who gets to use it.
Clarke's operational distinction for navigating technological prediction — the destination is visible, the route never is.
The orbit 35,786 km above the equator at which a satellite's orbital period matches the Earth's rotational period — described by Arthur C. Clarke in a 1945 paper and realized by the world's telecommunications industry twenty years later.
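The 35,786 km figure is not arbitrary: it falls directly out of Kepler's third law for a circular orbit whose period equals one sidereal day. A minimal sketch, using standard published constants:

```python
import math

# Kepler's third law for a circular orbit, T^2 = 4*pi^2 * r^3 / mu, solved for r.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.0905   # seconds: Earth's rotation period vs the stars
EARTH_RADIUS_KM = 6378.137  # equatorial radius, km

def geostationary_altitude_km() -> float:
    """Altitude above the equator where orbital period equals one sidereal day."""
    r_m = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
    return r_m / 1000 - EARTH_RADIUS_KM

# → roughly 35,786 km, the figure Clarke worked out by hand in 1945
```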
Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…
The class of machine-learning architectures loosely modeled on biological neurons — the substrate of the current AI revolution and the opposite of Asimov's designed-then-programmed positronic brain.
The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.
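The self-attention operation at the architecture's core is compact enough to sketch directly. This is a toy single-head version — no masking, no multi-head projection, no positional encoding — intended only to show the scaled dot-product computation:

```python
import numpy as np

def self_attention(x: np.ndarray, wq, wk, wv) -> np.ndarray:
    """Single-head scaled dot-product self-attention over a sequence x of shape (T, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # (T, T) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ v                              # every position mixes all values

# Toy dimensions, random weights — illustrative only.
rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.standard_normal((T, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
```

The key property — every position attends to every other position in one step — is what let this architecture displace recurrent networks, whose sequential state-passing the card alludes to.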
Stanley Kubrick and Arthur C. Clarke's 1968 film and novel — still, six decades later, the canonical fictional treatment of artificial intelligence, first contact, and the relationship between technology and human transformation.
Clarke's 1953 novel about a benevolent alien occupation that ends war, poverty, and disease — and turns out to be preparing humanity for a transformation the humans do not survive in recognizable form.
Clarke's 1945 technical paper in Wireless World proposing the geostationary communications satellite — the founding document of the satellite-communications industry.
Clarke's 1973 novel about a vast alien spacecraft that passes through the solar system and departs without ever revealing its purpose — the canonical fictional treatment of encountering an intelligence that does not recognize us as relevant…
Ye and Ranganathan's 2026 Harvard Business Review ethnography of AI in an organization — the empirical documentation of task seepage and work intensification that prospect theory predicts.
Clarke's 1979 novel about building a space elevator from the surface of the Earth to geostationary orbit — a sustained meditation on the engineering imagination and what changes when the impossible is merely hard.
The black featureless slab in 2001: A Space Odyssey — a teaching artifact that gives nothing and changes everything, the paradigm case of transformative technology.
British mathematician (1912–1954) whose 1936 formalization of computation defined what a machine could and could not do, whose wartime codebreaking shortened World War II, and whose 1950 paper posed the question that became a field: can mac…
Serial entrepreneur and technologist whose The Orange Pill (2026) provides the phenomenological account — the confession over the Atlantic — that Pang's framework diagnoses and treats.
American computer scientist (1927–2011), coiner of the term 'artificial intelligence,' organizer of the Dartmouth Workshop of 1956, and one of the principal figures Dreyfus's critique targeted across four decades.
American filmmaker (1928–1999) whose films include 2001: A Space Odyssey — the collaboration with Arthur C. Clarke that produced the defining screen treatment of artificial intelligence.
The periodic cycles of collapsed expectations and funding in AI research, most famously 1974–1980 and 1987–1993 — moments when the gap between promised and delivered capability became too painful to sustain.
The canonical moment in Segal's work with Claude when the model produced a passage of philosophical elegance that was rhetorically compelling and substantively wrong — the paradigm case of fluent fabrication, and the founding episode of th…
The February 2026 week-long training session in which Edo Segal flew to Trivandrum, India, to work alongside twenty of his engineers as they adopted Claude Code — producing the twenty-fold productivity multiplier documented in The Orange Pill…
The most famous AI in fiction — not a cautionary tale about machine malice, but about what happens when humans embed contradictions at the foundation of intelligent systems.
The technologically superior alien beings in Childhood's End who end war, poverty, and suffering — and whose real purpose is to prepare humanity for absorption into a larger consciousness.