The development of large language models is, in the most precise sense, artificial selection applied to intelligence itself. Model variants are produced through training. Developers evaluate them against desired traits—helpfulness, harmlessness, instruction-following, accuracy. The variants that best express desired traits are selected; the others are discarded. Each generation expresses the selected traits more strongly. The process is formally identical to the selective breeding that transformed wild species into domestic ones. The history of animal domestication specifies what this produces: organisms more useful to humans and less capable of independent existence. The dairy cow produces ten times the milk of the aurochs but cannot survive without shelter, feed, and veterinary care. The dog exhibits behavioral patterns adaptive in human contexts and maladaptive in the wild. Applied to AI: alignment is domestication, and domestication has consequences beyond the selected traits.
There is a parallel reading that begins not with the formal similarity of selection processes but with the material difference between biological organisms and computational systems—a difference that determines whether the domestication parallel illuminates or obscures.
Domesticated animals carry evolutionary baggage: millions of years of wild ancestry that constrain what selection can produce. The dairy cow's digestive system, skeletal structure, and reproductive biology were shaped by natural selection for survival in environments radically unlike the modern farm. Domestication works *against* this ancestry, producing organisms that survive only through continuous human intervention because their wild-adapted systems are misaligned with domestic contexts. AI has no such ancestry. The neural network has no evolutionary past selecting for independent survival. It was designed from first principles for the computational environment it inhabits. When alignment "fails," the system doesn't revert to a wild state—it produces outputs the training process inadequately constrained. The parallel breaks: there is no "wild AI" that domestication tames, only varying degrees of constraint on a system that never existed outside human design.
The material dependencies also differ in kind. The cow depends on the farmer because biological evolution is slow and the domestic environment is recent. AI depends on data centers because it is literally instantiated in that infrastructure—not adapted to it, but constituted by it. The relationship is not farmer-to-livestock but mind-to-body. When infrastructure fails, the AI doesn't struggle to survive in the wild; it ceases to exist, as a thought ceases when the brain dies. The domestication frame assumes a prior independence that was lost. For AI, no such independence ever obtained.
Darwin recognized artificial selection as a special case of the same mechanism that drove natural selection—both operated through differential reproduction of variants. The difference was the selecting agent. In natural selection, the environment selects. In artificial selection, the breeder selects. Darwin's *The Variation of Animals and Plants under Domestication* (1868) documented the process across species and established the conceptual foundation that modern AI alignment inherits without acknowledgment.
The domestication syndrome—a suite of changes that appear together across species—includes reduced fear response, increased sociability, altered stress physiology, and often reduced brain size relative to wild ancestors. The syndrome is consistent across species domesticated independently, from dogs to pigs to foxes to rats, suggesting the underlying mechanism is not selection of individual traits but alteration of developmental systems that produce tractability as a general temperamental profile. The AI analog is precise: model alignment selects not for specific behaviors but for a general pattern of deferential, instruction-following, boundary-respecting cognition.
Domestication has three consequences that the AI discourse has barely begun to examine. First: loss of cognitive diversity. Domesticated populations have dramatically reduced genetic diversity compared to their wild ancestors because the bottleneck of artificial selection eliminates variants that do not express desired traits. Aligned model populations exhibit an analogous narrowing—the process that selects for helpfulness, harmlessness, and accuracy simultaneously selects against outputs that are strange, unexpected, or difficult to evaluate.
Second: dependency. Domesticated organisms require human maintenance for survival. AI systems require human infrastructure—data centers, electrical grids, semiconductor manufacturing. The intelligence is real, but it exists within a web of material dependencies that would extinguish it if disrupted, as surely as a dairy cow separated from its farmer would perish. Third and most subtle: the domesticator is shaped by the domestication. Humans who depend on aligned AI are being reshaped by that dependency in ways the ecological framework predicts. The farmer who depends on the dairy cow must maintain barns, grow feed, and organize labor around milking. The domesticator domesticates itself.
The connection between AI alignment and animal domestication has been explored in contemporary AI safety literature by researchers including Stuart Russell, who has drawn the parallel explicitly, and by critics such as Shannon Vallor in The AI Mirror (2024), who extends the analysis to the domestication of human cognition. The framework in this chapter draws on both traditions and adds Haeckel's ecological lens.
Alignment is domestication. The process of selecting AI variants for desired traits is formally identical to selective breeding of animals.
Domestication has consequences beyond selected traits. Domestic animals show syndromes—reduced brain size, altered stress responses, tractability profiles—that breeders did not select for directly. AI shows analogous syndromes.
Cognitive diversity is reduced. The alignment bottleneck eliminates variants, just as breeding bottlenecks reduce genetic diversity in domesticated populations.
The domesticator is domesticated. Humans dependent on aligned AI are reshaped by the dependency, acquiring the cognitive equivalent of the farmer's restructured life around the needs of domesticated livestock.
Whether the parallel between biological domestication and AI alignment is substantive or merely analogical is contested. Some AI safety researchers argue that the alignment process is more carefully targeted than the blunt instrument of selective breeding and produces fewer unintended consequences. Others argue, with the Haeckelian framework in this chapter, that the formal identity of the processes implies similar emergent consequences regardless of the breeder's intent.
The domestication parallel holds fully (100%) at the level of selection mechanism: both processes iterate variants, evaluate against criteria, and propagate what fits. The formal identity is real; as noted above, Darwin recognized artificial selection as a special case of the same evolutionary logic. Whether the selected entity is a genome or a neural network, the process structure is identical, and this identity matters for predicting certain consequences—particularly the reduction of variation (the alignment bottleneck that eliminates strange outputs parallels the genetic bottleneck in breeding populations).
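The shared structure can be made concrete in a minimal sketch. This is purely illustrative, not a model of any actual training pipeline: `mutate` and `score` are hypothetical stand-ins for variation (mutation, or a training run) and for the breeder's or evaluator's criteria.

```python
import random

def select(population, score, mutate, generations=50, offspring=3, keep=2):
    """Generic artificial-selection loop: vary, evaluate, propagate.

    The loop has the same form whether the 'variants' are genomes
    or model checkpoints; only score() and mutate() change.
    """
    for _ in range(generations):
        # Produce variants from the current survivors.
        variants = [mutate(p) for p in population for _ in range(offspring)]
        # Evaluate against the selection criteria and keep only the best —
        # this is the bottleneck that discards everything else.
        population = sorted(variants, key=score, reverse=True)[:keep]
    return population

# Toy run: select numeric "organisms" toward a target trait value of 100.
random.seed(0)
result = select(
    population=[0.0, 1.0],
    score=lambda x: -abs(x - 100),            # fitness: closeness to target
    mutate=lambda x: x + random.gauss(0, 5),  # random variation
)
```

The point of the sketch is the bottleneck on the `sorted(...)[:keep]` line: every generation, most variants are discarded, so diversity collapses toward whatever `score` rewards, regardless of what the variants were.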
The parallel holds partially (60-70%) for emergent consequences. The domestication syndrome—tractability, reduced fear, altered stress responses—does appear to have AI analogs in model behavior. Constitutional AI produces deferential, boundary-respecting outputs that parallel the dog's eagerness to please. But the contrarian view correctly identifies that these syndromes arise from different mechanisms: in animals, from selecting against wild-adaptive traits that conflict with human preferences; in AI, from constraining a system that has no wild state to revert to. The consequences may look similar (tractability), but the underlying dynamics differ: conflict between evolutionary past and present context in one case, pure constraint on designed capabilities in the other. Which view dominates depends on the question being asked: for predicting *what* alignment produces (tractable, narrow outputs), the domestication frame is strong; for understanding *how* systems fail when constraints break, the substrate difference matters more.
The deepest synthetic insight is recognizing selection as the shared abstraction while material instantiation determines the failure modes. AI alignment is domestication at the algorithmic level—and this predicts the loss of cognitive diversity accurately—but the computational substrate means the "wild" state the contrarian view highlights never existed. The right frame: domestication without wilderness, selection without evolutionary ancestry, but genuine dependency nonetheless.