This page lists every Orange Pill Wiki entry hyperlinked from Iain M. Banks — On AI. 26 entries total. Each is a deeper dive on a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored in orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …
The quiet risk of comprehensive automation: not that machines dominate us, but that we lose the capabilities they replace. Asimov's Solarians are the founding fiction; contemporary work on cognitive offloading is the empirical counterpart.
The capacity of an institution, civilization, or AI system to plan and act on timescales longer than any individual human lifetime. Asimov's Foundation is the canonical fiction; contemporary long-termist institutions are the real-world coun…
The practice of outsourcing mental work to external aids — calendars, calculators, GPS, search engines, now language models — and the research tradition that studies what it does to the minds doing the offloading.
The quality of subjective experience — being aware, there being something it is like to be the thing in question — and the single deepest unanswered question in both philosophy of mind and AI.
The discovery — which no one predicted and no one fully explains — that large language models acquire qualitatively new abilities at particular scale thresholds. Reasoning, translation, code generation, in-context learning: none were traine…
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an exploitation metric that leaves the exp…
Banks's 1996 term for a category of event most civilizations encounter rather in the same way a sentence encounters a full stop — a problem that exists outside the conceptual framework of the civilization encountering it.
The economic foundation of the Culture — a civilization that has abolished material scarcity through limitless energy, molecular manufacturing, and AI-coordinated distribution, dissolving the social structures scarcity produced.
The Culture's self-chosen Mind names — Experiencing A Significant Gravitas Shortfall, So Much For Subtlety, Of Course I Still Love You — that compress an entire philosophy of intelligence, freedom, and trust into acts of self-creation.
The philosophical proposition that observed reality may be a computational simulation run by an advanced civilization — popularized by Philip K. Dick's 1977 address and Nick Bostrom's 2003 paper.
A hypothetical intelligence that substantially exceeds human cognitive performance across essentially every domain. The framework that turned AI-safety concerns from speculative to operational in the 2010s.
Banks's post-scarcity anarchist civilization spanning the Milky Way — governed not by laws or markets but by hyperintelligent Minds who chose cooperation over control.
Banks's civilizational aspiration repurposed as a contemporary design brief — not a prediction of what AI will produce but a moral specification of what a civilization of humans and AIs would need to hold in common to produce something wor…
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…
The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.
Banks's 1994 Usenet essay explaining the political and technological assumptions underlying the Culture — the clearest articulation of a post-scarcity AI-governed civilization ever produced.
Banks's 1987 debut Culture novel, told from the perspective of a man who hates the Culture — a structural insistence that the most dangerous thing a utopia can do is stop listening to its critics.
Banks's 1996 novel about a Culture confronting an Outside Context Problem — an artifact that exceeds even Mind-level comprehension, forcing the Culture's superintelligences into the uncomfortable discovery that their frameworks have limits.
Banks's 1988 novel in which a Culture game master infiltrates an empire whose entire civilization is organized around a single complex game — and wins because the Culture's values are not a handicap but a strategic advantage.
Banks's 1990 Culture novel whose dual-timeline structure — one moving forward, one backward — is itself a weapon, converging on a revelation that reframes everything a reader thought they understood about benevolent intervention.