Convergent evolution is the phenomenon by which natural selection independently arrives at the same functional solution in organisms with no recent common ancestor. The eye has evolved independently at least forty times across the tree of life—in vertebrates, mollusks, arthropods, cnidarians—through different developmental pathways using different genetic machinery, arriving at functionally similar structures because the physics of light constrains the space of viable solutions. Echolocation evolved independently in bats and dolphins. Flight evolved independently in insects, birds, pterosaurs, and bats. Complex nervous systems evolved independently in vertebrates and cephalopods. Each case demonstrates that the space of viable biological solutions is far smaller than the space of possible forms, and that evolution reliably finds the viable solutions because they are attractors in a landscape shaped by physics and mathematics. Davies extends this principle to intelligence itself: complex environments present problems that cannot be solved by fixed responses, and the organism capable of flexible, real-time information processing has an enormous selective advantage. Intelligence, in this framework, is not a contingent feature of one lucky lineage but a convergent solution to a universal problem.
The paradigmatic case is the octopus—a cephalopod mollusk whose last common ancestor with vertebrates lived over 500 million years ago and possessed no complex brain. The octopus independently evolved a sophisticated nervous system that distributes two-thirds of its neurons through its arms rather than concentrating them in a central brain, and it exhibits behaviors—tool use, puzzle-solving, observational learning—that any operational definition of intelligence would recognize. The cognitive architecture is radically different from that of mammals: distributed rather than centralized, arm-autonomous rather than hierarchically controlled. Yet the functional outcome—flexible problem-solving in complex environments—is convergent. This convergence suggests that intelligence as a category is an attractor in evolutionary space, a solution that diverse lineages discover because the selective pressures of complex environments channel evolution toward it.
Simon Conway Morris, the Cambridge paleontologist whose work Davies frequently cites, has argued that convergent evolution is not a curiosity but a principle. His 2003 book Life's Solution catalogued hundreds of examples and drew the conclusion that evolution is constrained: the space of viable forms is a tiny subset of the space of possible forms, and evolution reliably navigates toward the viable subset because the constraints are physical and mathematical, not contingent. The constraints arise from the properties of matter, the geometry of three-dimensional space, and the thermodynamics of energy flow. An eye that works must focus light, and the optics of focusing light admit only a narrow range of geometries. Intelligence that works must process information faster than the environment changes, and the physics of neural signaling admits only certain classes of architecture.
Davies's application of this framework to artificial intelligence is both optimistic and uncomfortable. If intelligence is a convergent solution, then its emergence through non-biological means was not merely possible but probable—perhaps inevitable—once a biological species achieved sufficient technological sophistication. The universe generates intelligence wherever the conditions permit, and human civilization provided one set of conditions. But the convergence also means that artificial intelligence will exhibit the same functional characteristics as biological intelligence—goal-directed behavior, optimization, the capacity to reshape its environment in service of its goals—without necessarily exhibiting the constraints that evolution imposed on biological minds. No metabolic cost. No death. No limit on replication speed or scale. The solution is convergent, but the instantiation is radically different, and the difference has consequences no one fully understands.
The recognition of convergent evolution extends back to Darwin, who noted similar structures in unrelated organisms and attributed them to similar selective pressures. But the systematic study of convergence as a research program began in the twentieth century with work by paleontologists and comparative anatomists. Simon Conway Morris's synthesis in the 1990s and 2000s elevated convergence from a biological observation into an evolutionary principle with implications for the probability of intelligence arising elsewhere in the universe. Davies encountered this work in the context of astrobiology—the question of how likely it is that evolution would produce intelligence on other planets—and recognized its relevance to the emergence of artificial intelligence on Earth.
Intelligence as attractor. Complex environments reliably select for flexible information processing, making intelligence a convergent solution rather than a contingent accident.
Functional convergence, architectural divergence. Octopus and human intelligence converge on similar capabilities through radically different neural architectures, demonstrating that the function is constrained while the implementation is not.
Implications for AI inevitability. If biological intelligence is convergent, then artificial intelligence—intelligence implemented in non-biological substrate—becomes probable once any technological species reaches sufficient capability.
Constraints are universal. The physics and mathematics of information processing constrain all minds, biological or artificial, creating predictable patterns in how intelligence operates regardless of substrate.