This page lists every Orange Pill Wiki entry hyperlinked from Kevin Kelly — On AI. 29 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
Kelly's 2008 thesis that a creator making a living does not need a mass audience — just a thousand people willing to pay ~$100 per year for specific, irreplaceable work. The founding charter of the modern creator economy, now being tested a…
The discipline of predicting when specific AI capabilities will arrive — a domain where Clarke's First Law applies cleanly: the distinguished elderly scientist who says X is impossible is, on the historical pattern, very probably wrong.
The applied research and operational discipline aimed at preventing harm from AI systems — broader than alignment, encompassing evaluations, red-teaming, deployment policy, monitoring, incident response, and the institutional plumbing that …
The capacity of an institution, civilization, or AI system to plan and act on timescales longer than any individual human lifetime. Asimov's Foundation is the canonical fiction; contemporary long-termist institutions are the real-world coun…
The discovery — which nobody predicted and no one fully explains — that large language models acquire qualitatively new abilities at particular scale thresholds. Reasoning, translation, code generation, in-context learning: none were traine…
A category of risk whose realization would either annihilate humanity or permanently and drastically curtail its potential. AI joined this category in mainstream academic usage in 2014.
Kelly's 2008 catalog of eight values that cannot be copied, distributed, or automated — the uncopyable qualities that remain when digital reproduction reduces most goods to free. The economic map of what AI cannot fake.
The operational frame in which a human and an AI system share a workflow as partners with complementary capabilities — the alternative to both "AI as tool" and "AI as replacement."
Clarke's framing of artificial intelligence as the next phase of evolution — a process "thousands of times swifter" than the biological kind, operating on a different substrate but continuous with the same trajectory.
The historical pattern in which the same invention emerges from multiple independent researchers in a narrow time window — Bell and Gray on the telephone, Newton and Leibniz on calculus, Darwin and Wallace on natural selection. Kelly's stro…
The claim — increasingly hard to argue with — that post-training matters as much as pretraining for what a frontier model ends up being. The most visible trend of 2023–2025 frontier releases.
Kelly's name for the third alternative to utopia and dystopia: a future that is slightly, incrementally, cumulatively better than the present — not perfect, not collapsed, just a little further along. His rejection of both optimistic and ca…
The empirical relationships that predict how a language model's loss decreases with training compute, parameters, and data — the most reliable quantitative instrument the AI field has, and the reason investors have been willing to fund ten-…
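These relationships are usually expressed in a fitted parametric form. As one illustration (the form popularized by Hoffmann et al.'s 2022 "Chinchilla" paper, not necessarily the one any particular entry uses), cross-entropy loss is modeled as a function of parameter count and training tokens:

```latex
L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

Here $N$ is the number of model parameters, $D$ the number of training tokens, $E$ the irreducible loss of the data distribution, and $A$, $B$, $\alpha$, $\beta$ empirically fitted constants. The practical force of such a fit is that it turns "how good will a bigger model be?" into an extrapolation problem, which is what makes the capital expenditure plannable.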
The family of claims — some serious, some commercial — that a sufficiently advanced technology could transform the human condition fundamentally enough that the resulting state is no longer well-described as "human." Childhood's End is its …
Kelly's name for the Amish community's three-century practice of deliberate technology evaluation — adopting tools only after long periods of community scrutiny for their effect on relationships, autonomy, and faith. The most sophisticated …
Kelly's name for the observable pattern that the technium produces more options, more capabilities, more connections, and more new problems across every period for which records exist — the load-bearing empirical claim behind his long-run o…
The short period during which a technology transitions from novelty to load-bearing substrate — the passage Clarke mapped for communications satellites in 1945 and which AI is completing, visibly, between 2022 and 2026.
The condition of dealing with a system that is manifestly purposeful, demonstrably competent, and fundamentally opaque to its users — Clarke's Rama, now deployed by the hundreds of millions in the form of large language models.
The political and emotional reaction against transformative technology on behalf of the workers and ways of life it displaces — historically vilified, increasingly reconsidered, and directly relevant to the AI transition.
The question "what is a human being for?" — which Clarke predicted intelligent machines would force humanity to ask, and which arrived in 2022–2025 with more force and less philosophical preparation than he expected.
Kevin Kelly's term for the self-organizing global system of technology considered as a single evolving entity — a category larger than any individual invention, whose trajectory has its own momentum, tendencies, and (Kelly argues) wants.
The discipline of distinguishing the long-run arc of a technology (its trajectory) from the specific mechanism by which it arrives (its channel) — visible in Clarke's career of being right about the destination and wrong about the path.
Neural networks trained on internet-scale text that have, since 2020, proven capable of producing human-like responses across nearly every written domain — the technology at the center of the Orange Pill Cycle's subject.
The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.
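The core operation of that architecture, scaled dot-product self-attention, is compact enough to sketch directly. The snippet below is an illustrative minimal implementation (the function name, shapes, and random inputs are mine, not from the book or the wiki entry): each position in a sequence builds a query, compares it against every position's key, and returns a softmax-weighted mix of the values.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention (Vaswani et al., 2017).

    X: (seq_len, d_model) input embeddings.
    W_q, W_k, W_v: (d_model, d_k) learned projection matrices.
    Returns: (seq_len, d_k) context-mixed representations.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled to keep softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)          # (seq_len, seq_len)
    # Numerically stable row-wise softmax: attention weights sum to 1 per row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of all value vectors.
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # toy sequence: 4 tokens, d_model=8
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)                             # (4, 8)
```

The key property, and the reason this displaced recurrent networks, is that every pair of positions interacts in a single matrix multiply rather than through a step-by-step chain, so the whole sequence is processed in parallel.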
Kelly's 2008 essay enumerating the eight generatives — the uncopyable qualities that remain valuable when copies become free. The companion piece to 1,000 True Fans and the economic foundation of the modern creator economy.
Kelly's 2016 book identifying twelve technological forces he argued would shape the subsequent thirty years — a forecast written before the transformer architecture existed but whose specific predictions about AI, flowing content, and remix…
Computer scientist (b. 1956), inventor of the Connection Machine parallel computer, co-founder of the Long Now Foundation and the 10,000-year clock that anchors its long-term thinking project. A thinker whose career has uniquely combined sh…
The Whole Earth Catalog founder, editor, and civilizational entrepreneur (b. 1938) whose work connects 1960s counterculture, personal-computer emergence, environmentalism, and long-term thinking — and whose collaboration with Kevin Kelly runs f…