This page lists every Orange Pill Wiki entry hyperlinked from Stephen Jay Gould — On AI. 27 entries total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.
The challenge of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.
AGI: a hypothetical system with human-level cognitive ability across essentially every domain. The transition point that AI-safety thinking orients around, even though no one agrees on what it is.

The thesis that evolutionary and technological outcomes depend on unrepeatable sequences of accidents—replay the tape from the Cambrian or from 1950, and you get a fundamentally different world.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The co-optation of AI capabilities for functions they were not designed to serve—feathers evolved for warmth, repurposed for flight; text predictors designed for completion, repurposed for creative partnership.
The resistance AI tools eliminate from knowledge work — a category whose composition (wolf or parasite?) determines whether its elimination is liberation or erosion.
The distributional analysis revealing that apparent progress often reflects expanding variance from a constrained starting point rather than directional movement—the right tail extends while the mean barely moves.
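A quick way to see the distributional claim concretely is a minimal simulation (toy parameters of my own choosing, not from the book): random variation against a lower wall. The steps are unbiased, but because nothing can fall below the floor, variance can only expand to the right; the typical walker stays near the wall while the extreme right tail races ahead.

```python
import random
import statistics

# Minimal sketch of variation against a "left wall" (toy parameters):
# unbiased steps, clamped at a floor of zero.
random.seed(0)

N_WALKERS, N_STEPS = 5_000, 1_000
walkers = [0.0] * N_WALKERS
for _ in range(N_STEPS):
    for i in range(N_WALKERS):
        # A symmetric step; the floor turns symmetric variation
        # into right-skewed spread.
        walkers[i] = max(0.0, walkers[i] + random.choice((-0.1, 0.1)))

print(f"median: {statistics.median(walkers):5.2f}")   # the typical case
print(f"mean:   {statistics.mean(walkers):5.2f}")     # pulled up only slightly
print(f"max:    {max(walkers):5.2f}")                 # the extending right tail
```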
"When a measure becomes a target, it ceases to be a good measure." Charles Goodhart's 1975 observation from monetary policy, now the operative principle of every specification failure in AI.
The foundational contrast between viewing evolution (or technological change) as linear ascent toward a predetermined summit versus copiously branching diversification with no main trunk and no direction.
The principle that where you are constrains where you can go—the sequence of decisions already made narrows future options, producing outcomes rational actors would not choose if they could see the full trajectory.
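The lock-in dynamic is easy to demonstrate with a Polya urn, a standard toy model of path dependence (my illustration, not the book's): every adoption makes the adopted option slightly more likely to be adopted next, so early accidents freeze into permanent structure. Replaying the process with different seeds yields different, equally stable outcomes.

```python
import random

# Polya-urn sketch of path dependence (toy model): success breeds
# success, so early random adoptions get locked in.
def final_share(seed, steps=50_000):
    rng = random.Random(seed)
    a, b = 1, 1                        # one early adopter of each option
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1                     # option A's lead compounds
        else:
            b += 1
    return a / (a + b)

for seed in range(5):
    print(f"replay {seed}: option A locks in at {final_share(seed):.0%}")
```

Each replay settles on a share it will essentially never leave, yet which share that is depends entirely on the first few draws.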
The pattern by which technological change occurs not gradually but in rapid bursts separated by long periods of stasis—Eldredge and Gould's 1972 evolutionary framework applied to AI development.
Honneth's 2008 revival of Lukács's concept as the forgetfulness of recognition — the habit of treating persons as things, now inverted in the AI moment into the treatment of things as persons.
Gould's counterfactual methodology—wind history back to a branching point and let it run forward again with different contingent events—revealing that specific outcomes are not inevitable but fortunate products of unrepeatable accidents.

The cognitive habit of looking backward from a known outcome and concluding the path was the only possible path—the fallacy that makes Darwin's finches look like they were collected to prove natural selection.
Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.
Capabilities that emerge as structural byproducts of AI architecture rather than designed features—the pendentives between intended function and actual behavior, named after Gould and Lewontin's architectural metaphor.
The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.

The 530-million-year-old fossil deposit revealing Cambrian diversity—dozens of viable body plans, most extinct—as a template for understanding AI's present: a moment of proliferating forms before selection prunes the bush.
Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…
The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.
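For readers who want the mechanism rather than the history, here is a minimal single-head sketch of the scaled dot-product self-attention the architecture is built around (random weights, no masking, no multi-head machinery; the shapes and names are my own):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings -> (seq_len, d_head) outputs."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # every token scores every token
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ v                              # attention-weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)       # (4, 8)
```

The design choice is visible in the shapes: unlike a recurrent network, nothing here is sequential; all pairwise token interactions are computed at once, which is what made the architecture trainable at internet scale.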
American cognitive scientist (1927–2016), co-founder of the MIT AI Laboratory, one of the founding fathers of artificial intelligence, and Carl Sagan's intellectual companion in the work that linked AI research to the cosmic question of other minds.
American paleontologist (b. 1943) whose study of Devonian trilobites produced punctuated equilibrium theory — the empirical demonstration that stasis, not gradual change, is the norm in the fossil record.
The periodic cycles of collapsed expectations and funding in AI research, most famously 1974–1980 and 1987–1993 — moments when the gap between promised and delivered capability became too painful to sustain.
The birds Darwin collected carelessly in 1835 and mislabeled, whose significance was revealed only through John Gould's January 1837 taxonomic expertise—Gould's paradigm for how discovery depends on contingent collaboration.

The phase transition when AI crossed from incremental improvement to qualitative reorganization of knowledge work — Segal's orange pill, Tillich's kairos.