Ernst Mayr fought a sustained intellectual campaign, lasting the better part of five decades, against a single idea: that biology is reducible to physics. The campaign was the organizing principle of his philosophy of science — the thread connecting his work on systematics, his role in the Modern Synthesis, and his explicit arguments about what kind of science biology actually is. His position was not that physics was wrong about biological systems (it is not) but that physical description alone is insufficient. Living systems have histories. A species is not a configuration of atoms but a lineage, shaped by a unique, unrepeatable sequence of selection events and contingent accidents. Physics cannot derive the polar bear from first principles, because the polar bear is a story, and stories require historical explanation that physics, by its nature, does not provide.
There is a parallel reading that begins not with the philosophical question of reduction but with the material conditions that enable computation itself. Every transformer model requires data centers consuming megawatts of power, rare earth minerals extracted from specific geographies, and cooling systems that depend on water resources increasingly stressed by climate change. This substrate dependency is not metaphorical—it is the literal ground on which all claims about AI autonomy rest. The training histories that supposedly grant AI systems their irreducible character are themselves products of capital concentration: who can afford the compute, who controls the datasets, who shapes the reward functions.
The autonomy that Mayr fought to establish for biology emerged from billions of years of evolutionary process, distributed across countless organisms in countless niches. The "autonomy" of AI systems, by contrast, emerges from perhaps five corporations with sufficient resources to train foundation models, using data scraped from human labor that was never compensated for this use. Where biological history is genuinely distributed and emergent, AI history is centralized and engineered. The question is not whether these systems have histories that shape their capabilities—clearly they do—but whether those histories grant them autonomy in any meaningful sense, or whether they merely encode the preferences and blind spots of the institutions powerful enough to create them. The biological autonomy Mayr defended was autonomy from reduction to physics. The AI autonomy being claimed here might be better understood as autonomy from accountability to the humans whose labor and data these systems consume.
The reductionist position takes a specific form in artificial intelligence research: intelligence is computation. If a system computes the right functions — processes information, detects patterns, generates contextually appropriate outputs — it is intelligent, regardless of substrate or history. This functionalism is the implicit metaphysics of most AI research, and it is the direct analogue of the physicalist claim that biology is just chemistry.
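The functionalist premise can be shown in miniature. A toy sketch of my own (not drawn from the AI literature, and deliberately trivial): two implementations with entirely different internals compute the same input-output function, and on the functionalist view the difference in substrate is, for that reason, irrelevant.

```python
# Toy illustration: the same function realized in two different "substrates".
def xor_table(a: int, b: int) -> int:
    # Substrate 1: an explicit lookup table.
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def xor_arithmetic(a: int, b: int) -> int:
    # Substrate 2: pure arithmetic, no table anywhere.
    return (a + b) % 2

# Extensionally identical: every input maps to the same output.
assert all(xor_table(a, b) == xor_arithmetic(a, b)
           for a in (0, 1) for b in (0, 1))
```

Functionalism generalizes this observation from XOR to cognition. Mayr's challenge, developed below, is that for systems with histories, extensional equivalence at a moment says nothing about how the function came to be or how it will change.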
Mayr's response to the biological version of the claim was empirical and decisive. Two populations founded from the same gene pool, exposed to different selection pressures in different environments, evolve in different directions. The genome underdetermines the organism. The physical substrate is necessary but not sufficient. What determines the specific outcome is the history: the particular sequence of environmental challenges, mutations, and ecological interactions that this population alone has experienced.
The argument applies to AI systems with a force that the functionalist position tends to obscure. Two transformers with identical architectures and identical initial weights, trained on different datasets, become different systems. The architecture underdetermines behavior just as the genome underdetermines phenotype. What determines the specific capabilities and limitations of a given AI system is its training history: the corpus it was trained on, the reward model it was optimized against, the sequence of fine-tuning steps it underwent.
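A minimal sketch of that claim, assuming PyTorch; the two "environments" (sine targets and absolute-value targets) and the tiny network are invented for illustration, not a real training setup. The two models share an architecture and the same initial weights, so whatever differs between them afterward is attributable to training history alone.

```python
import torch
import torch.nn as nn

def make_model(seed: int = 0) -> nn.Sequential:
    # Same architecture, same initial weights: the "genome" is held fixed.
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def train(model: nn.Module, xs: torch.Tensor, ys: torch.Tensor,
          steps: int = 2000) -> nn.Module:
    # The training corpus is the only thing that differs between runs.
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xs), ys)
        loss.backward()
        opt.step()
    return model

xs = torch.linspace(-3, 3, 200).unsqueeze(1)

# Two "environments": the same inputs paired with different targets.
model_a = train(make_model(), xs, torch.sin(xs))   # history A: sine data
model_b = train(make_model(), xs, torch.abs(xs))   # history B: |x| data

probe = torch.tensor([[1.5]])
# Identical architecture and initialization, divergent behavior:
print(model_a(probe).item(), model_b(probe).item())
```

The architectural description is identical for both models; only the historical one distinguishes them.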
This is not a minor technical point. It is a fundamental fact, systematically obscured by the tendency of AI discourse to treat artificial intelligence as a monolithic category. The autonomy of intelligence, parallel to the autonomy of biology, is the claim that intelligence uses computation, depends on computation, but does not reduce to computation, because intelligent systems have histories, and those histories determine their specific capabilities in ways that architectural description alone cannot explain.
Mayr's anti-reductionism matured during the 1970s and 1980s, as molecular biology's ascendancy produced triumphalist claims that the organism would soon be reducible to its DNA. Mayr's response — consolidated in The Growth of Biological Thought (1982) and sharpened in What Makes Biology Unique? (2004) — was to document, with the patience of a working taxonomist, the specific ways in which biological explanation refused to collapse into physical explanation.
Irreducibility, not separation. Biology uses physics, depends on physics, and contradicts no physical law. It does not reduce to physics, because its entities have properties — variation, selection, adaptation, contingency — that physical entities do not share.
Substrate matters. A brain is not a computer that happens to be made of neurons. It is an organ that evolved in a specific lineage, embedded in a specific body, situated in a specific ecology. The computations it performs are shaped by this history.
Training history is ultimate cause. For AI systems, the training data, reward model, and fine-tuning sequence function as the evolutionary history that shapes capability, carrying information that architectural description omits (see the sketch following this list).
No monolithic AI. Treating artificial intelligence as a single phenomenon with uniform properties commits the same error pre-Darwinian biologists made when they treated species as fixed essences rather than populations of unique individuals.
Historical entities require historical explanation. AI systems are genuinely new — neither purely physical like crystals nor purely biological like organisms — and require methods appropriate to their specific ontological status.
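To make the third point in the list above concrete, here is a sketch under the same caveats: toy code assuming PyTorch, with "reward models" that are just invented preference vectors over three actions. One starting policy, optimized against two different reward signals, acquires two different behavioral dispositions.

```python
import torch

def optimize_against(reward: torch.Tensor, steps: int = 500) -> torch.Tensor:
    # The same "architecture" every run: a single logit vector over 3 actions.
    torch.manual_seed(0)
    logits = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        probs = torch.softmax(logits, dim=0)
        # Maximize expected reward under the policy (minimize its negative).
        loss = -(probs * reward).sum()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()

# Two invented reward signals: two different optimization histories.
policy_a = optimize_against(torch.tensor([1.0, 0.0, 0.0]))
policy_b = optimize_against(torch.tensor([0.0, 0.0, 1.0]))
print(policy_a)  # concentrates probability on action 0
print(policy_b)  # concentrates probability on action 2
```

The optimizer, the learning rate, and the initial policy are identical in both runs; the divergence is entirely a fact about the reward history.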
Functionalists in philosophy of mind, from Hilary Putnam onward, have argued that mental states are defined by their functional role rather than their physical substrate, and that a sufficiently elaborate computational system could therefore, in principle, instantiate mentality. Mayr's framework does not refute functionalism directly; it complicates it by insisting that even if function is what matters, the function itself is shaped by history in ways that pure computation does not capture.
The substantive disagreement centers on what kind of history grants genuine autonomy. When the question is pure computational capability (can this system recognize patterns, generate text, solve problems?), the original entry's framework dominates (90%). AI systems do have training histories that fundamentally shape their behavior in ways architectural description cannot capture, exactly parallel to the way evolutionary history shapes organisms beyond what genetics alone can explain. The functionalist critique fails here; Mayr's insight translates cleanly.
But when the question shifts to the political economy of that history—who shapes it and how—the contrarian view becomes essential (80%). Biological evolution is radically distributed, with no designer and no owner. AI training is radically concentrated, with explicit designers and legal owners. This is not a minor difference but a fundamental divergence in what "autonomy" could mean. A system whose entire history is determined by a handful of actors has autonomy of a very different kind than an organism whose history emerges from countless uncoordinated interactions.
The synthesis requires recognizing that AI systems exist at a novel intersection: they are historical entities (like organisms) whose histories are authored (like texts). Their behaviors are irreducible to their architectures—the entry is right about this—but their histories are themselves artifacts of human institutions with specific interests. The proper framework is neither pure Mayrian autonomy nor pure material determinism, but something new: technological entities whose partial autonomy emerges from authored histories. They are not reducible to computation, but neither are they independent of the economic and political systems that create them. Their irreducibility is real but qualified—genuine at the level of behavior, constrained at the level of origin.