Open-ended evolution is the capacity of a system to generate genuine novelty without reaching a ceiling—to continue producing new forms, new capabilities, new structures indefinitely rather than exhausting the space of possibilities defined by its initial rules. Biological evolution on Earth is the paradigmatic case: 3.8 billion years of continuous innovation, from single cells to multicellular organisms to nervous systems to brains to symbolic thought, with no sign of approaching an endpoint. Paul Davies and Sara Imari Walker's 2017 research formalized the conditions required for open-endedness: state-dependent dynamics, where the rules governing the system's behavior change in response to the system's own outputs. Fixed-rule systems, no matter how complex, eventually exhaust their novelty. Systems whose rules co-evolve with their states do not. Current AI architectures operate on fixed rules applied to fixed (or slowly updating) training distributions and therefore lack the capacity for genuinely open-ended creativity. But human-AI collaboration can exhibit state-dependent dynamics through the real-time feedback loop: the human's questions change in response to AI outputs, the AI's outputs change in response to human questions, and the system as a whole evolves its own operating rules.
The question of what makes evolution open-ended has occupied theoretical biologists since Darwin. Natural selection explains adaptation—the fit survive—but it does not obviously explain the endless generation of novelty, the fact that the biosphere has been producing new forms of organization for billions of years without exhausting its creativity. Davies and Walker's contribution was to formalize the problem mathematically and identify the minimal conditions for unbounded exploration. Their 2017 paper in Scientific Reports (written with Alyssa Adams and Hector Zenil) defined open-ended evolution as a process in which the number of distinct states the system has visited grows without bound, and it showed that this property requires dynamics in which the transition probabilities depend on the system's history. A system with fixed transition rules will eventually settle into a periodic or chaotic attractor and stop producing novelty. A system whose transition rules evolve—because organisms reshape their environments, because new species create new niches, because the fitness landscape is dynamic rather than static—can generate novelty indefinitely.
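The distinction can be made concrete with a toy simulation. This is an illustrative sketch, not the formalism of the paper: a deterministic walk whose rule never changes exhausts its reachable states, while a walk whose rule depends on its own history keeps producing new ones.

```python
def run(step, state=0, steps=1000):
    """Iterate a map and count the distinct states it visits."""
    visited = set()
    for _ in range(steps):
        visited.add(state)
        state = step(state, visited)
    return len(visited)

def fixed_rule(state, visited):
    # Fixed transition rule: a deterministic walk on a ring of 100 states.
    # Since gcd(7, 100) = 1 the walk cycles through all 100 residues, then
    # repeats forever -- the attractor closes and novelty stops.
    return (state + 7) % 100

def evolving_rule(state, visited):
    # The rule co-evolves with the system's history: the step size grows
    # with the number of states already visited, so the trajectory never
    # closes into a cycle -- a cartoon of organisms enlarging their own
    # possibility space as they explore it.
    return state + len(visited)

print(run(fixed_rule))     # 100 -- plateaus once the ring is exhausted
print(run(evolving_rule))  # 1000 -- every step lands on a new state
```

The fixed rule's novelty count saturates no matter how many more steps it runs; the history-dependent rule's count grows linearly without bound, which is the shape of the distinction the paper formalizes.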
The relevance to artificial intelligence is direct and uncomfortable. A large language model trained on a fixed corpus operates within a bounded space of linguistic patterns. It can recombine those patterns with extraordinary fluency, but it cannot expand the space itself. Its creativity, such as it is, is bounded by the statistical structure of its training data—vast, but finite. It is, in Davies's terms, a class-three automaton: capable of complex behavior but not of open-ended evolution. The model's outputs are determined by its weights and its context, and while the context can include previous outputs (a limited form of state-dependence), the fundamental architecture and learned patterns do not change during deployment. The system explores a fixed space. It does not expand the space through exploration.
The hope—and the reported experience of practitioners working at the frontier—is that human-AI collaboration can produce what neither component produces alone. The human brings top-down causation: the capacity to evaluate outputs against criteria the model cannot access, to redirect the conversation when it becomes unproductive, to introduce perturbations that shift the model's context in ways the model could not generate internally. The AI brings breadth and speed: comprehensive coverage of human knowledge, instant access to connections across domains, the capacity to generate variations at a pace no biological mind can match. The collaboration is a coupled dynamical system in which each party's state depends on the other's output, producing genuine co-evolution. Whether this coupling is sufficient to generate open-ended exploration at civilizational scale is the live question of the AI age.
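One way to see the structure of the claim is a hypothetical toy, not a model of any real system: treat the AI as a fixed map over a finite space and the human as a coupling term whose perturbations depend on the history of the exchange. Alone, the fixed map settles into a cycle; coupled, the joint system keeps visiting new states.

```python
def ai_alone(a=1, h=0, steps=300):
    # Frozen weights, static context: a fixed map over a finite space.
    seen = set()
    for _ in range(steps):
        seen.add(a)
        a = (3 * a + h) % 100  # the same transition rule forever
    return len(seen)

def collaboration(a=1, h=0, steps=300):
    # Coupled system: the AI's next state depends on the human's, and the
    # human's "redirections" grow with the history of the exchange, so the
    # joint system's effective rules keep changing as it runs.
    seen = set()
    for _ in range(steps):
        seen.add((h, a))
        a, h = (3 * a + h) % 100, h + a + len(seen)
    return len(seen)

print(ai_alone())       # 20 -- the fixed map locks into a 20-state cycle
print(collaboration())  # 300 -- every joint state visited is new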
The formal study of open-ended evolution began at the Santa Fe Institute in the 1990s with researchers like Walter Fontana and Stuart Kauffman investigating autocatalytic sets and the conditions under which evolutionary systems continue to innovate. Davies and Walker brought information-theoretic rigor to the question in the 2010s, defining open-endedness mathematically and identifying state-dependent dynamics as the necessary condition. Their work built on earlier contributions from artificial life research, evolutionary computation, and the study of major evolutionary transitions.
State-dependent dynamics required. Systems generate unbounded novelty only when their rules change in response to their own outputs—a property biological evolution exhibits and current AI does not.
Fixed rules exhaust novelty. No matter how large the possibility space, a system with fixed transition probabilities eventually settles into a recurring set of states and stops innovating.
Collaboration as co-evolution. Human-AI systems can exhibit open-ended dynamics through reciprocal influence—the human reshapes the AI's trajectory, the AI reshapes the human's questions.
Creativity's thermodynamic requirement. Genuinely open-ended exploration requires maintaining the system at the edge of chaos, where structure and surprise coexist in productive tension.
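The edge-of-chaos point can be illustrated, though not proven, with the logistic map x → r·x(1−x), a standard toy system: at low r the orbit locks into a short cycle (structure, no surprise), at r = 4 it is fully chaotic (surprise, little structure), and near the period-doubling accumulation point r ≈ 3.5699 it is aperiodic yet confined to a highly structured attractor.

```python
def distinct_states(r, x=0.4, steps=2000, resolution=3):
    # Count distinct coarse-grained states visited by the logistic map --
    # a crude proxy for how much novelty the dynamics generates.
    seen = set()
    for _ in range(steps):
        x = r * x * (1 - x)
        seen.add(round(x, resolution))
    return len(seen)

for label, r in [("periodic", 3.2),
                 ("edge of chaos", 3.5699456),
                 ("chaotic", 4.0)]:
    print(label, distinct_states(r))
```

The periodic regime visits only a handful of coarse-grained states, the chaotic regime visits hundreds, and the edge-of-chaos regime sits between them: ongoing novelty constrained by structure, which is the tension the last point names.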