WIKI COMPANION

Stephen Jay Gould — On AI

A reading-companion catalog of the 27 Orange Pill Wiki entries hyperlinked from Stephen Jay Gould — On AI — the people, ideas, works, events, and technologies that the book uses as stepping stones for thinking through the AI revolution.

Each entry is a deeper dive on its subject. Click any card to open the entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words bearing the Wikipedia mark link to Wikipedia.

Concept (18)
AI Alignment
Concept

The problem of making a powerful AI system reliably pursue goals that its designers and users actually endorse — the central unsolved problem of contemporary AI.

Artificial General Intelligence
Concept

AGI: a hypothetical system with human-level cognitive ability across essentially every domain — the transition point that AI-safety thinking orients around, even though no one agrees on what it is.

Contingency (Gouldian)
Concept

The thesis that evolutionary and technological outcomes depend on unrepeatable sequences of accidents—replay the tape from the Cambrian or from 1950, and you get a fundamentally different world.

Democratization of Capability (Senian Reading)
Concept

The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?

Exaptation in AI Development
Concept

The co-optation of AI capabilities for functions they were not designed to serve—feathers evolved for warmth, repurposed for flight; text predictors designed for completion, repurposed for creative partnership.

Friction
Concept

The resistance AI tools eliminate from knowledge work — a category whose composition (wolf or parasite?) determines whether its elimination is liberation or erosion.

Full House: Variance Over Mean
Concept

The distributional analysis revealing that apparent progress often reflects expanding variance from a constrained starting point rather than directional movement—the right tail extends while the mean barely moves.

Goodhart's Law
Concept

"When a measure becomes a target, it ceases to be a good measure." Marilyn Strathern's distillation of Charles Goodhart's 1975 observation from monetary policy, now the operative principle of every specification failure in AI.

Ladder vs. Bush (Topological Metaphors)
Concept

The foundational contrast between viewing evolution (or technological change) as linear ascent toward a predetermined summit versus copiously branching diversification with no main trunk and no direction.

Path Dependence
Concept

The principle that where you are constrains where you can go—the sequence of decisions already made narrows future options, producing outcomes rational actors would not choose if they could see the full trajectory.

Punctuated Equilibrium in Technology
Concept

The pattern by which technological change occurs not gradually but in rapid bursts separated by long periods of stasis—Gould and Eldredge's 1972 evolutionary framework applied to AI development.

Reification (Honneth)
Concept

Honneth's 2008 revival of Lukács's concept as the forgetfulness of recognition — the habit of treating persons as things, now inverted in the AI moment into the treatment of things as persons.

Replaying the Tape (Thought Experiment)
Concept

Gould's counterfactual methodology—wind history back to a branching point and let it run forward again with different contingent events—revealing that specific outcomes are not inevitable but fortunate products of unrepeatable accidents.

Retrospective Logic
Concept

The cognitive habit of looking backward from a known outcome and concluding the path was the only possible path—the fallacy that makes Darwin's finches look like they were collected to prove natural selection.

River of Intelligence
Concept

Segal's metaphor — given thermodynamic grounding by Wiener's framework — for the 13.8-billion-year trajectory of anti-entropic pattern-creation through increasingly sophisticated channels, of which AI is the latest.

Spandrels of AI Systems
Concept

Capabilities that emerge as structural byproducts of AI architecture rather than designed features—the pendentives between intended function and actual behavior, named after Gould and Lewontin's architectural metaphor.

The Beaver's Dam
Concept

The canonical example of allogenic ecosystem engineering — a structure that modulates rather than blocks the flow of its environment, creating the habitat pool in which diverse community life becomes possible.

The Burgess Shale as AI Metaphor
Concept

The 530-million-year-old fossil deposit revealing Cambrian diversity—dozens of viable body plans, most extinct—as template for understanding AI's present: a moment of proliferating forms before selection prunes the bush.

Technology (3)
Claude Code
Technology

Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.

Large Language Models
Technology

Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions of the textual corpus vastly exceed any i…

Transformer Architecture
Technology

The 2017 neural network architecture, built around self-attention, that replaced recurrent networks for sequence modeling and became the substrate of every large language model since.

Person (2)
Marvin Minsky
Person

American cognitive scientist (1927–2016), co-founder of the MIT AI Laboratory, one of the founding fathers of artificial intelligence, and Carl Sagan's intellectual companion in the work that linked AI research to the cosmic question of other minds.

Niles Eldredge
Person

American paleontologist (b. 1943) whose study of Devonian trilobites produced punctuated equilibrium theory — the empirical demonstration that stasis, not gradual change, is the norm in the fossil record.

Event (3)
AI Winter
Event

The periodic cycles of collapsed expectations and funding in AI research, most famously 1974–1980 and 1987–1993 — moments when the gap between promised and delivered capability became too painful to sustain.

Darwin's Finches (Contingency Case)
Event

The birds Darwin collected carelessly in 1835 and mislabeled, whose significance was revealed only through John Gould's January 1837 taxonomic expertise—Stephen Jay Gould's paradigm case for how discovery depends on contingent collaboration.

The December 2025 Threshold
Event

The phase transition when AI crossed from incremental improvement to qualitative reorganization of knowledge work — Segal's orange pill, Tillich's kairos.

Organization (1)
Xerox PARC
Organization

The Palo Alto Research Center where, between 1970 and the mid-1980s, most of what became modern personal computing was invented — and where Suchman did the ethnographic work that reframed human-machine interaction.

Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.