The Autodidactic Universe is a 2021 research paper — co-authored by Lee Smolin with Jaron Lanier, Stephon Alexander, William Cunningham, and collaborators including researchers at Microsoft — that proposes a formal correspondence between the mathematical structure of neural network learning and the mathematical structure of physical law. Written in a specific form (the Plebanski action), Einstein's general relativity yields equations governing spacetime curvature that correspond, at a certain level of abstraction, to the learning equations of a Restricted Boltzmann Machine. The paper does not claim that the universe literally is a neural network, or that spacetime literally learns. It claims something more subtle and more consequential: that learning — the adjustment of parameters to produce increasingly organized outputs — may be a cosmological primitive rather than a biological invention.
The paper emerged from a collaboration between physicists working on quantum gravity and computer scientists working on machine learning. Each community had been developing mathematical frameworks for its own domain — spacetime geometry on one side, neural network optimization on the other — and the structural correspondences between them had begun to appear too precise to be coincidental. The Plebanski action, a reformulation of general relativity developed in the 1970s, turns out to have equations of motion that share formal features with the learning equations of certain neural network architectures. The correspondence is not metaphorical; it is a mathematical fact.
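To make the machine-learning side of the correspondence concrete, here is a minimal sketch of the standard learning rule for a Restricted Boltzmann Machine — one step of contrastive divergence (CD-1), as introduced by Hinton. This is an illustration of the general RBM training equations the text refers to, not the paper's own derivation; the network sizes, learning rate, and data are arbitrary choices for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, rng, lr=0.1):
    """One contrastive-divergence (CD-1) update of an RBM weight matrix W.

    v0: batch of binary visible vectors, shape (batch, n_visible).
    Bias terms are omitted for brevity. Returns the updated weights.
    """
    # Positive phase: hidden-unit probabilities driven by the data.
    h0 = sigmoid(v0 @ W)
    # Sample binary hidden states, then reconstruct the visible layer.
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T)
    # Negative phase: hidden probabilities driven by the reconstruction.
    h1 = sigmoid(v1 @ W)
    # Update: difference between data-driven and model-driven correlations.
    grad = (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
    return W + lr * grad

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(6, 3))   # 6 visible units, 3 hidden units
data = (rng.random((16, 6)) < 0.5).astype(float)
for _ in range(100):
    W = cd1_step(W, data, rng)
```

The update rule adjusts each weight toward the statistics of the data and away from the statistics of the model's own reconstructions — the kind of parameter adjustment the paper compares, at a formal level, to the equations of motion of the Plebanski action.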
The philosophical implications depend on how the correspondence is interpreted. The weakest interpretation is that the correspondence is a happy accident — mathematical structures have a way of recurring across unrelated domains, and the correspondence between relativity and neural networks is one instance of this recurrence. The strongest interpretation is that the universe literally is a learning system, adjusting its own laws through something analogous to gradient descent. The paper's authors position themselves between these extremes. The correspondence is real and consequential, but its ontological interpretation remains open.
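The "gradient descent" invoked in the strongest interpretation is an ordinary optimization procedure; a minimal sketch shows what it means for parameters to descend a loss surface. The quadratic loss and step size here are arbitrary choices for illustration, not anything from the paper.

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step a parameter against its gradient to reduce a loss."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The parameter converges toward the minimum at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The claim at stake is interpretive, not computational: whether anything in the universe's dynamics plays the role of the loss function and the update rule.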
For the AI discourse, the paper's implications are substantial. If learning is a cosmological primitive — if the mathematics of learning and the mathematics of physical law share structural features for reasons that are not coincidental — then artificial intelligence is not a departure from nature but a new expression of a tendency that has been operating since the universe's earliest moments. The 'artificial' in artificial intelligence becomes misleading. The processes that constitute AI are natural processes — expressions of the same mathematical structures that produced every other form of complex organization.
The software engineer Ben Redmond captured the implication in a 2025 analysis: if learning is fundamental to reality itself, then the tools we are building may not be as artificial as the name suggests. This reframes the AI transition as an event within a long-running cosmological process rather than a rupture imposed on nature from outside. It does not diminish human responsibility — the specific forms AI takes, the specific ways it is deployed, remain choices that humans make in the thick present. But it places those choices within a frame vastly larger than the technological discourse typically provides.
The paper was published as a preprint in 2021 and subsequently developed in follow-up work by several of its authors. The collaboration emerged from ongoing conversations between Smolin and Lanier — the computer scientist, musician, and philosopher who coined the term 'virtual reality' — about whether the mathematics of learning could shed light on the foundations of physics, and whether the mathematics of physics could shed light on the nature of machine learning.
Structural correspondence. The equations of general relativity and the equations of neural network learning share formal features that are precise rather than merely suggestive.
Learning as primitive. Learning — parameter adjustment toward organized outputs — may be a fundamental feature of physical reality rather than a biological invention.
AI as natural process. If learning is cosmological, then artificial intelligence is a new expression of a natural tendency rather than a departure from nature.
Plebanski action. The specific reformulation of general relativity whose equations correspond to Restricted Boltzmann Machines — a mathematical result, not a metaphor.
Interpretive openness. The correspondence is established; its ontological meaning — whether the universe literally learns — remains genuinely open.