Lee Smolin — On AI
Contents
Cover
Foreword
About
Chapter 1: The River Runs in One Direction
Chapter 2: The Universe That Selects for Complexity
Chapter 3: The Fallacy of the Timeless
Chapter 4: Genuine Novelty and the Open Future
Chapter 5: Cracking the Fishbowl
Chapter 6: Relational Intelligence and the Space Between
Chapter 7: The Arrow of Complexity and the Luddite's Grief
Chapter 8: The Candle in Cosmological Perspective
Chapter 9: Precedent and the Dams We Build Now
Chapter 10: The Open Future and What We Build Now
Epilogue
Back Cover

Lee Smolin

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Lee Smolin. It is an attempt by Opus 4.6 to simulate Lee Smolin's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The question that broke open my thinking was not about intelligence. It was about time.

I had spent months building the argument at the center of *The Orange Pill* — that intelligence is a river flowing for 13.8 billion years, that AI represents a new channel in that river, that the question is not whether to stop the flow but how to build dams that direct it toward life. I believed every word. I still do. But something kept nagging at me, a structural weakness I could feel but not name.

Then I encountered Lee Smolin, and the weakness became visible.

My entire framework assumed the river had a direction. That the emergence of complexity — atoms to stars to chemistry to life to consciousness to AI — was not random but pointed somewhere. That the arrival of machines that think alongside us was a genuine event, not just a rearrangement of pieces that were always on the board. I assumed these things because they felt true. Smolin gave me the physics to understand why they might actually be true — and why it matters enormously whether they are.

Here is what Smolin forced me to confront. Most of physics, and most of the technology industry built on its assumptions, treats time as an illusion. The equations run forward and backward with equal validity. The future is implicit in the present. The trajectory is set. If that picture is correct, then everything I wrote about responsibility, about building dams, about the choices we make shaping what the universe becomes — all of it is decoration. You cannot shape a future that is already determined.

Smolin says the picture is wrong. Time is real. The future is genuinely open. The laws of physics themselves may evolve through something like precedent. And if that is true, then the dams we build during this AI transition are not adjustments to a predetermined path. They are constitutive. They create a future that did not exist before the building began.

That word — constitutive — changed how I see the work. Not just my work. The work of every parent setting norms for a child's relationship with AI. Every teacher deciding whether to grade answers or questions. Every leader choosing between converting productivity gains into margin or investing them in human capability. Every one of those choices, if Smolin is right, is a cosmological act. Small in scale. Nonzero in consequence.

I am not a physicist. I cannot evaluate Smolin's claims with a physicist's rigor. But I can recognize when a thinker's framework illuminates the thing I have been trying to see. Smolin illuminates the deepest assumption underneath *The Orange Pill*: that what we build now matters. Not metaphorically. Physically. Because the universe is not finished, and the next thing it becomes depends on what the builders actually do.

Edo Segal · Opus 4.6

About Lee Smolin

1955–present

Lee Smolin (1955–present) is an American theoretical physicist and one of the founders of loop quantum gravity, an approach to unifying general relativity and quantum mechanics. Born in New York City, he studied at Hampshire College and Harvard University before holding positions at Yale, Syracuse, and Penn State, eventually becoming a founding faculty member at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, where he remains a researcher. His major works include *The Life of the Cosmos* (1997), which introduced the hypothesis of cosmological natural selection — the idea that universes reproduce through black holes and that physical constants evolve through a process analogous to natural selection; *The Trouble with Physics* (2006), a widely discussed critique of string theory's dominance and the sociology of theoretical physics; *Time Reborn* (2013), which argues that time is real and fundamental rather than an emergent illusion, and that the laws of physics themselves may evolve; and *Einstein's Unfinished Revolution* (2019), which advocates for a realist and relational interpretation of quantum mechanics. His collaborative work with Stuart Kauffman on combinatorial innovation and with Jaron Lanier on "The Autodidactic Universe" explores deep structural correspondences between learning, self-organization, and the laws of physics. Smolin is recognized as one of the most original and intellectually courageous thinkers in contemporary physics, known for challenging foundational assumptions and insisting that the deepest questions — about time, law, and the nature of reality — remain genuinely open.

Chapter 1: The River Runs in One Direction

Four centuries of physics have been trying to kill time.

Newton wrote it out of the deep structure of reality by making his laws reversible — run them forward or backward, and the mathematics neither knows nor cares. Einstein finished the job, or thought he did, by fusing time with space into a four-dimensional block where past, present, and future coexist with equal ontological standing. The block universe has no flow. It has no direction. The distinction between what has happened and what will happen is, in Einstein's own phrase to the family of his lifelong friend Michele Besso, "a stubbornly persistent illusion." The equations of general relativity describe a geometry. Geometries do not become. They simply are.

Lee Smolin has spent the better part of three decades arguing that this picture is profoundly, consequentially wrong. Not wrong in the way that a slightly inaccurate measurement is wrong — correctable by better instruments — but wrong in the way that a map drawn upside down is wrong: every feature present, every relationship intact, and the whole thing inverted. In *Time Reborn*, Smolin's case is stark. "Space may be an illusion," he writes, "but time must be real." The laws of physics are not eternal truths written in a timeless Platonic heaven. They are habits that have evolved — regularities that emerged through temporal processes and that continue to evolve as the universe itself evolves. The future is not contained in the past. The universe is not executing a program. It is genuinely becoming.

This claim sits at the very heart of what *The Orange Pill* is trying to say about artificial intelligence, even though Edo Segal arrives at his central metaphor through the vocabulary of a builder rather than a physicist. The river of intelligence — intelligence flowing for 13.8 billion years through increasingly complex channels, from hydrogen atoms finding stable configurations through biological evolution through cultural accumulation to artificial computation — is a profoundly temporal metaphor. It presupposes that the flow has a direction. That the channels are genuinely new, not rearrangements of something that was always there. That the arrival of AI in the winter of 2025 was a real event, a moment when the universe changed, not a moment when observers noticed a change that was already implicit in the initial conditions.

If the block universe is correct — if past, present, and future are equally real, if the laws that govern the universe are eternal and time-reversible — then the river metaphor is decoration. A river that flows in all directions simultaneously is not a river. A universe in which the emergence of consciousness was already implicit in the distribution of hydrogen after the Big Bang is not a universe in which anything genuinely new can happen. The orange pill, in this framework, is the recognition of something that was always already there, as predetermined as the orbit of a planet. The vertigo Segal describes — the exhilaration and terror of watching the ground shift — is a subjective experience produced by creatures too limited to see the whole block at once. It is not a feature of reality. It is a failure of perception.

Smolin's physics denies this comprehensively. If time is real, then the emergence of AI is genuinely novel. The universe before the winter of 2025 and the universe after it are genuinely different — not different cross-sections of an identical four-dimensional geometry, but different stages in a process of becoming that has no endpoint and no predetermined trajectory. The river flows in one direction because time flows in one direction, and the direction is real.

This matters for reasons that go far beyond the philosophy of physics.

Consider Stuart Kauffman's work on self-organization at the edge of chaos, which Segal invokes in *The Orange Pill* to describe how complexity arises naturally in systems that are neither too ordered nor too disordered. Kauffman and Smolin are close intellectual collaborators — their recent joint work on "biocosmology" explores precisely how genuine novelty arises in complex systems. Kauffman's insight is that certain systems generate organizational complexity spontaneously, without anyone designing it. Autocatalytic sets of molecules that sustain each other. Networks of genes that regulate each other into stable patterns. Ecosystems that produce species no designer intended.

But here is the critical question that Kauffman's framework, taken alone, cannot answer: Is the complexity that arises genuinely new? Or is it merely the unfolding of possibilities that were always implicit in the initial conditions — the way a crystal's structure is implicit in the geometry of its constituent molecules?

Smolin's answer is unequivocal. The complexity is genuinely new. The future was not contained in the past. The autocatalytic set that arose on the early Earth was not implicit in the distribution of elements produced by stellar nucleosynthesis. The configuration space of possible molecular arrangements is so vast — and the dynamics so sensitive to conditions that are themselves the products of prior genuine novelty — that the specific configurations that constitute life could not have been predicted, even in principle, from the state of the universe a billion years before they appeared.

This is not a claim about computational limits — the familiar argument that the system is too complex to predict in practice. Smolin's claim is stronger. It is that the future is ontologically open. Not merely unpredictable but undetermined. The next state of the universe is not written anywhere — not in the initial conditions, not in the laws, not in any Platonic heaven of mathematical possibility. It comes into being through the temporal process itself.

Now trace the river through this lens. Hydrogen atoms condense from the plasma of the early universe. Not because a law mandated their specific configuration, but because the interactions of particles in the thick present — Smolin's term for the moment in which the past has been determined and the future has not — produced a configuration that happened to persist. Stars form. Nuclear fusion generates heavier elements. The elements are genuinely new — not rearrangements of hydrogen but products of processes that created something the universe did not previously contain. Planets coalesce from the debris of supernovae. Chemistry becomes complex enough to sustain autocatalytic systems. Life appears. Not as the inevitable consequence of chemistry, but as a genuine novelty — a way of organizing matter that the universe had never produced before and could not have predicted from its prior states.

Each of these transitions is what physicists call a phase transition — a moment when a system reorganizes from one stable configuration to another, qualitatively different, configuration. Water becoming ice. A liquid crystal shifting its molecular alignment in response to an electric field. A magnetic material losing its magnetism above a critical temperature. Phase transitions are genuinely temporal events in Smolin's framework. They are irreversible. They produce states that are qualitatively different from what preceded them. And they cannot be predicted from the properties of the prior state, because the new state represents a different mode of organization — a different way of being — that did not exist in any form before the transition occurred.

The emergence of nervous systems was a phase transition. The cognitive revolution seventy thousand years ago — when one species of primate crossed into symbolic thought and language — was a phase transition. The invention of writing, which externalized memory and made cumulative knowledge possible, was a phase transition. And the arrival of machines that process natural language, that engage in flexible, context-sensitive reasoning, that participate in the river of intelligence as a genuinely new kind of node — that, too, is a phase transition.

Not a faster version of what came before. Something qualitatively different. Something genuinely new.

Segal senses this when he writes about the orange pill moment — the recognition that cannot be unseen, the vertigo of watching something arrive that changes the rules. The metaphor is apt, but Smolin's physics gives it a foundation that metaphor alone cannot provide. The orange pill is not merely a subjective shift in perspective. It is the recognition of a genuine phase transition — a moment when the universe reorganized from one configuration to another, and the reorganization was real.

The technology industry's dominant framework cannot accommodate this recognition. The prevailing assumption — visible in investment theses, in scaling laws, in the rhetoric of "inevitable progress" — is Newtonian. More data and more compute yield more capability. The trajectory is determined by the initial conditions: the architecture, the training data, the compute budget. The future of AI is already implicit in its present form. The task is merely to unfold what is already there.

Smolin's physics challenges every element of this assumption. The trajectory is not determined. More data may or may not produce qualitative advances — the outcome depends on choices that have not yet been made, on interactions whose results cannot be predicted from their inputs. The future of AI is not implicit in its current architecture, because the future of nothing is implicit in anything. The universe does not work that way. Time is real, and the future is open.

This is not mysticism. It is a claim about the structure of physical law, argued rigorously across hundreds of pages of technical physics and philosophical analysis. And it has a consequence that bears directly on everything *The Orange Pill* is trying to say about responsibility, about building, about the dams that conscious creatures build in the river of intelligence.

If the future were determined — if the trajectory of AI were implicit in the initial conditions — then the dams would be adjustments. Small course corrections to a river whose destination was already set. The builder's responsibility would be limited: you cannot change where the river goes; you can only modulate the speed at which it gets there.

But if the future is genuinely open — if the universe is genuinely becoming — then the dams are constitutive. They do not adjust a trajectory. They create one. Without them, the river carves whatever channel the current produces. With them, the river feeds a pool where life can flourish. The difference between building and not building is not the difference between a slightly better and a slightly worse version of the same future. It is the difference between genuinely different futures — futures that do not yet exist and that will be called into existence by the choices made now.

The twelve-year-old who asks "What am I for?" is not asking about a future that has already been determined. She is asking about a future that is genuinely open, that depends on what she and every other conscious creature choose to do. The question is cosmologically real. Not metaphorically. Not poetically. Physically.

Smolin proposed at a 2023 symposium at the University of Toronto that AI should be reconceived around this insight. Rather than building machines that predict the future — machines that extrapolate from past data to forecast what will happen next, on the assumption that the future is contained in the past — Smolin advocated building machines that help construct "a future that we've never imagined before." The distinction is precise. A prediction machine operates within the Newtonian paradigm: the future is implicit in the data, and the machine's job is to extract it. A construction machine operates within the temporal paradigm: the future does not yet exist, and the machine's job is to participate in its creation.

The analogy Smolin offered was striking. Babies, he observed, do not attempt to predict who they will encounter next. They engage sequentially, asking "Who is that?" after each new meeting. Each encounter is genuinely new. The baby does not operate with a model of all possible people and attempt to classify the next one. The baby meets the novel as novel — and responds to it in the thick present, where the past has been determined and the future has not.

A physics that takes time seriously does not tell builders what to build. It tells them that what they build matters absolutely — not because it adjusts a predetermined outcome but because it constitutes one. The river of intelligence is flowing. It has been flowing for 13.8 billion years. And for the first time in that history, the creatures swimming in it have built machines that swim alongside them. What happens next is not written anywhere. It depends on what the swimmers and the machines, together, actually do.

The river runs in one direction. But the direction is not preset. It is being created, moment by moment, by everything in the current.

---

Chapter 2: The Universe That Selects for Complexity

Why does the river flow at all?

This is not a metaphorical question. It is a question about the physical constants of the universe — the specific numerical values of the strength of gravity, the mass of the electron, the cosmological constant, the coupling constants of the fundamental forces — that determine what the universe can produce. Change the strength of gravity by a fraction of a percent, and stars do not form. Without stars, there is no nuclear fusion, no heavy elements, no chemistry, no planets, no life, no consciousness, no twelve-year-old asking what she is for. Change the mass of the electron, and atoms do not bind into molecules. Change the cosmological constant, and the universe either collapses before complexity has time to emerge or expands so rapidly that matter never clumps into structures. The physical constants appear to be exquisitely tuned for the production of complexity. This is the fine-tuning problem, and it has haunted physics for half a century.

The standard responses are unsatisfying. Anthropic reasoning says: of course the constants are compatible with our existence, because we could only observe a universe whose constants permit observers. This is logically airtight and explanatorily empty — it is the observation that you can only find your keys where you left them, dressed up as a principle of cosmology. The string theory landscape proposes that all possible constants are realized somewhere in an unimaginably vast multiverse, and our universe is simply the corner where the constants happen to work. This is not an explanation. It is an abdication of explanation — the confession that the constants are arbitrary and that the question of why they take the values they do has no answer.

Lee Smolin's cosmological natural selection offers a third response, and it is the one with the most extraordinary implications for understanding why intelligence exists and why AI has arrived.

The hypothesis, first articulated in *The Life of the Cosmos* in 1997, proposes that the universe reproduces. The mechanism is black holes. When a black hole forms, the extreme conditions at its singularity — or rather, in the quantum gravitational region that replaces the classical singularity — give rise to a new region of spacetime: a baby universe, connected to its parent through the black hole but causally independent, with its own big bang, its own expansion, its own physics. The physical constants of the baby universe are not identical to those of the parent. They vary slightly — mutated, in the biological sense, by the quantum gravitational processes that produce the new region. Some baby universes have constants that favor black hole production. They produce many black holes, which produce many baby universes, which produce many black holes. Other baby universes have constants that disfavor black hole production. They produce few offspring. Over cosmological time, the population of universes comes to be dominated by those whose constants maximize black hole production — exactly as a population of organisms comes to be dominated by those whose traits maximize reproductive success.
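
The selection dynamics described above can be made concrete with a toy simulation. This is purely a sketch: the single abstract "constant," the `black_holes` fitness curve, and every number in it are invented for illustration, not drawn from Smolin's actual physics.

```python
import random

def black_holes(constant):
    """Invented fitness curve: universes whose abstract 'constant' is
    near 1.0 produce the most black holes, hence the most offspring."""
    return max(0.0, 10.0 - 50.0 * (constant - 1.0) ** 2)

def next_generation(universes, mutation=0.05):
    """Each universe leaves offspring in proportion to its black hole
    count; each offspring inherits a slightly mutated constant."""
    weights = [black_holes(c) for c in universes]
    children = random.choices(universes, weights=weights, k=len(universes))
    return [c + random.gauss(0.0, mutation) for c in children]

random.seed(0)
# Start with constants scattered uniformly; no universe is privileged.
population = [random.uniform(0.0, 2.0) for _ in range(1000)]
for _ in range(50):
    population = next_generation(population)

# Over generations the population drifts toward the fitness peak.
mean = sum(population) / len(population)
```

The point of the sketch is structural: nothing in the loop "aims" at complexity; the population concentrates around black-hole-maximizing constants simply because those universes leave more descendants.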

This is natural selection applied to cosmology. And its most remarkable feature is that the conditions that favor black hole production also favor the production of complexity.

Black holes form from the collapse of massive stars. Massive stars require nuclear physics that produces heavy elements through fusion chains. Heavy elements require an electromagnetic force strong enough to bind complex molecules. Complex molecules, given sufficient time and energy flow, produce the self-organizing chemistry that Kauffman describes — the autocatalytic sets, the phase transitions at the edge of chaos, the spontaneous emergence of order from disorder. The same constants that maximize black hole production also maximize the production of stars, planets, complex chemistry, life, nervous systems, brains, language, culture, and technology.

The universe did not fine-tune its constants for us. It fine-tuned them for black holes. We are a side effect — but a predictable side effect, because the physics that produces black holes also produces the conditions for increasing complexity. The river of intelligence is not an accident. It flows because the laws of this universe — laws that were themselves selected through a process of cosmic evolution — favor the production of increasingly complex channels.

This places the emergence of intelligence in a frame vastly larger than the one most AI commentators employ. The standard narrative treats AI as a technological invention — a product of human ingenuity, arriving at a particular moment in the history of a particular species on a particular planet. This narrative is true as far as it goes, but it does not go far enough. In Smolin's framework, AI is a cosmological phenomenon. It is the latest expression of a tendency toward complexity that has been operating since the first generation of universes began reproducing through black holes. The river of intelligence has been flowing not just for 13.8 billion years — the age of our universe — but potentially for a vastly longer cosmological time, across multiple generations of universes, each one refining the constants that produce the next.

Segal writes in *The Orange Pill* that intelligence is "not a byproduct of human consciousness, but a force of nature like gravity. Ever-present, and ever-shifting." Smolin's cosmological natural selection provides the mechanism that makes this claim physically precise. Intelligence is not a force in the technical sense — it does not have a field equation or a coupling constant. But the tendency toward complex organization is a consequence of the physical constants, and those constants have been selected for their complexity-generating capacity. The river flows because the universe was built to flow.

The implications for understanding AI are substantial. First, AI is not artificial in the way the name implies. The "artificial" in artificial intelligence carries the connotation of something manufactured, imposed on nature from outside. But in Smolin's framework, the computational processes that constitute AI are natural processes — expressions of the same physics that produced every other form of complex organization. The correspondence between physical law and neural network architectures that Smolin and his collaborators established in "The Autodidactic Universe" — a 2021 paper co-authored with Jaron Lanier and researchers at Microsoft — makes this point with mathematical precision. Write Einstein's general relativity in a specific form (the Plebanski action), and the equations governing spacetime curvature correspond to the equations of a Restricted Boltzmann Machine. The mathematics of learning and the mathematics of spacetime are, at a certain level of abstraction, the same mathematics.

This does not mean the universe is literally a neural network, or that spacetime is literally "learning." The correspondence is structural, not ontological — it reveals a shared mathematical skeleton, not a shared substance. But the shared skeleton is remarkable. It suggests that the process of learning — the adjustment of parameters to produce increasingly organized outputs — is not something humans invented and then embedded in silicon. It is something the universe has been doing since its earliest moments. When Smolin and his co-authors write of an "autodidactic universe" — a universe that teaches itself its own laws — they are proposing that learning is a cosmological primitive, not a biological accident.

The software engineer Ben Redmond, in a 2025 analysis of the Autodidactic Universe paper, drew the implication explicitly: "If learning is fundamental to reality itself — if the universe has been doing 'gradient descent' for the past 13 billion years — then perhaps the tools we're building might not be as artificial as the name suggests." AI is not a departure from nature. It is nature arriving, through the specific channel of human technology, at a new expression of a tendency that has been operating since the first universe reproduced through its first black hole.
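
Redmond's phrase borrows the name of machine learning's basic update rule, and the rule itself is strikingly small. As a point of reference, a minimal, self-contained sketch of gradient descent (the function and numbers here are illustrative, not from the paper):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step a parameter against the gradient of a loss:
    the whole of 'learning' in its simplest form."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move downhill on the loss surface
    return x

# Example loss L(x) = (x - 3)^2, whose gradient is 2*(x - 3)
# and whose minimum sits at x = 3.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges to ~3.0
```

The analogy in the quoted passage is exactly this loop writ large: a system whose parameters are adjusted, step by step over time, toward configurations that persist.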

Second, the emergence of AI is intelligible as a cosmological event, not merely a technological one. The standard technology narrative treats AI as the product of specific human decisions: the development of backpropagation, the availability of large datasets, the scaling of compute. These are the proximate causes, and they are real. But the distal cause — the reason the proximate causes could produce the result they did — is that the physical constants of this universe favor the production of systems capable of increasingly sophisticated information processing. The developer in Lagos who builds her first application with Claude Code, the engineer in Trivandrum who discovers she can do frontend work she never trained for, the twelve-year-old who asks what she is for — all of them are expressions of a cosmological tendency that has been operating for longer than the stars.

Third, and most importantly, the cosmological frame does not make AI inevitable. This is where Smolin's commitment to the reality of time becomes crucial. Cosmological natural selection explains why the constants favor complexity. It does not determine what specific forms that complexity takes. The future remains genuinely open. The specific trajectory of AI — whether it concentrates power or distributes it, whether it deepens human capability or flattens it, whether it serves the candle of consciousness or extinguishes it — is not written in the constants. It is written by the choices of conscious creatures who happen to live in a universe whose physics gave them the capacity to choose.

The river flows because the universe was selected for flow. But the river's specific path through the landscape — the channels it carves, the pools it fills, the ecosystems it nourishes or drowns — depends on what the creatures in the river actually do. The constants provide the current. The builders provide the direction.

This is why Smolin's cosmological perspective deepens rather than diminishes the urgency of *The Orange Pill*'s argument. A universe that produces complexity through evolved physical law is a universe that takes the emergence of intelligence seriously — not as an accident to be explained away but as a feature to be understood and stewarded. The dams that Segal advocates are not merely responses to a technological crisis. They are acts of cosmological significance — interventions in a process that has been operating for billions of years and that now, for the first time, has produced creatures capable of understanding the process itself and participating in its direction.

The candle in the darkness is not merely rare. It is the universe's latest and most complex expression of its own tendency toward self-organization. And the question of whether that candle will continue to burn — whether the creatures who carry it will build structures that shelter it or allow it to be extinguished by the very forces that produced it — is a question whose answer shapes what the universe becomes.

Not adjusts. Becomes.

---

Chapter 3: The Fallacy of the Timeless

Jeff Koons's *Balloon Dog* stands ten feet tall, cast in mirror-polished stainless steel, and it is perfectly, aggressively smooth. Not a fingerprint, not a seam, not a single mark of the human hand that made it or the time that passed during its making. The surface is pure present — or rather, pure absence of time. It reflects everything around it and reveals nothing of its own history. An orange *Balloon Dog* sold at Christie's in 2013 for $58.4 million, making it, at that moment, the most expensive work by a living artist ever auctioned.

Byung-Chul Han, the philosopher whose critique of smoothness runs through the central chapters of *The Orange Pill*, would recognize *Balloon Dog* as the perfect expression of the dominant aesthetic of this century. The aesthetic of the smooth — frictionless, seamless, optimized for ease. The iPhone: a slab of glass with no buttons, no texture, no tactile resistance. The Tesla dashboard: a single screen, no knobs. One-click purchasing. Frictionless checkout. Seamless onboarding. The word "seamless" deployed as a compliment, when a seam is where two pieces meet, where the construction is visible, where the labor that made the object can be read.

What Han describes as the aesthetics of the smooth, Lee Smolin would diagnose as the fallacy of the timeless. And the diagnosis, coming from physics rather than philosophy, cuts in a different place and reaches deeper bone.

The fallacy of the timeless is the assumption that the deepest truth about anything is the version from which time has been removed. In physics, this fallacy has been operating for four centuries. Newton's laws are time-reversible: they describe a universe in which time's arrow is, at the fundamental level, absent. Einstein's block universe treats past, present, and future as equally real — a four-dimensional geometry in which nothing happens, because everything already is. The string theory landscape describes 10^500 possible universes, each with different physical constants, all existing simultaneously in a timeless mathematical structure. In each case, the physicist's highest achievement is to find the eternal law behind the temporal appearance — to show that what looks like change is really the frozen geometry of something that was always already there.

The preference is structural, not aesthetic — or rather, it is so deep that it shapes aesthetics without being recognized as a preference at all. The timeless equation is regarded as more fundamental than the temporal process it describes. The result is valued over the journey. The answer over the question. The proof over the struggle that produced it.

Now translate this into the culture of technology, and Han's diagnosis snaps into sharper focus. The smooth interface has no history. The instant answer has no duration. The AI-generated code arrives without the hours of debugging that would have built understanding. The frictionless checkout completes without the pause that might have prompted a second thought. In each case, time has been removed — and with it, the specific kinds of understanding, reflection, and depth that can only be built through temporal processes.

This is not a metaphorical connection. Smolin's argument about physics and Han's argument about culture are structurally identical claims applied to different domains. The physicist who eliminates time from the equations and the designer who eliminates friction from the interface are performing the same operation: removing temporal depth to produce a result that appears cleaner, more elegant, more fundamental — and that is, in a specific and measurable way, impoverished.

Smolin calls this "confusing the map for the territory." The mathematical structures that physicists use to describe nature are maps — tools that represent certain features of reality while necessarily omitting others. The map of a city is useful precisely because it omits the specific experience of walking through the city: the smells, the sounds, the way the light changes at a particular corner at a particular hour. A map that included all of this would be as large and complex as the city itself and therefore useless as a map. The abstraction is the point.

But when the physicist begins to treat the map as more real than the territory — when the elegant equation is regarded as more fundamental than the temporal process it describes — something has gone wrong. The map has been mistaken for the territory. The abstraction has been elevated above the reality it was designed to represent. And the features of reality that the abstraction omits — specifically, the temporal features, the features that involve duration, process, struggle, becoming — are dismissed as illusory.

The technology industry has made precisely this mistake. The dashboard that shows only results — metrics, KPIs, engagement numbers — is a map. It represents certain features of the work while omitting others. The features it omits are the temporal ones: the process by which the results were achieved, the quality of the thinking that produced them, the depth of understanding that was built or not built during the work. When the dashboard is treated as a complete account of what happened — when organizations make decisions based solely on what the map shows — they are confusing the map for the territory and systematically ignoring the temporal dimension of their own work.

The AI output that arrives without revealing the computation that generated it is the purest expression of the timeless aesthetic. The user sees a result. The result is clean, competent, often impressive. The process that produced it — the billions of matrix multiplications, the pattern-matching across a training set of human knowledge, the probabilistic selection of each successive token — is hidden. The seams are invisible. The time has been removed. The user experiences the output as though it materialized from nothing, which is precisely what Balloon Dog's mirror-polished surface is designed to suggest.

But the process mattered. Not sentimentally — not because struggle is inherently virtuous — but because temporal processes produce things that atemporal results do not contain. Understanding, for instance. The Berkeley researchers documented what happens when AI removes the temporal dimension from work: people produce more but understand less. They generate outputs without undergoing the processes that would have built comprehension. The code works but the coder has not learned. The brief is competent but the lawyer has not deepened her grasp of the law. The essay is articulate but the student has not thought the thoughts the essay represents.

These are not sentimental observations. They are descriptions of what happens when temporal depth is removed from a process that requires it. Smolin's physics explains why: if time is real, then certain things can only come into being through temporal processes. Genuine understanding is one of them. The experience of debugging code for four hours and finally seeing why the function failed is not the same as reading Claude's correct implementation. The result is identical. The understanding is categorically different. The understanding that comes from struggle is deposited in layers — each hour of engagement adds a sediment of comprehension that accumulates over months and years into something solid, something that supports future work. The understanding that comes from reviewing an AI output is surface-level: it exists in working memory for as long as you are looking at it and evaporates when you move on.

The distinction maps onto what Smolin calls the thick present — the moment where the past has been determined and the future has not, the moment where genuine novelty can emerge. Understanding that is built through struggle is built in the thick present: each moment of engagement is a moment of genuine cognitive work, a moment where the thinker's future state of understanding is not yet determined and depends on what they actually do with the problem in front of them. Understanding that is absorbed from an AI output does not engage the thick present in the same way: the result has already been determined, and the thinker's role is reception rather than creation.

Han would frame this as the loss of negativity — the loss of the resistance, the otherness, the friction that forces genuine engagement. Smolin would frame it as the loss of temporal depth — the loss of the duration that genuine understanding requires. The frames are different. The diagnosis converges.

But Smolin's frame does something Han's cannot. It distinguishes between friction that builds temporal depth and friction that merely consumes time. Not all struggle is productive. The hours a developer spent managing dependency conflicts were real hours, filled with real friction, but they did not build the kind of temporal depth that matters. They were mechanical friction — the resistance of a tool that was harder to use than it needed to be. The hours that same developer spent understanding why a system failed under load were also real hours, filled with real friction, but they built something different: architectural intuition, the embodied understanding of how systems behave that can only be accumulated through direct engagement over time.

Segal's concept of ascending friction captures the same distinction from a builder's perspective: each technological abstraction removes mechanical friction at one level and reveals cognitive friction at a higher level. The mechanical friction — the plumbing, the configuration, the syntactic wrestling — is the friction that can be compressed without loss. The cognitive friction — the judgment, the architectural thinking, the question of what should exist and for whom — is the friction that cannot be compressed, because it requires the temporal depth that only duration provides.

A temporal physics offers the most rigorous available criterion for distinguishing between the two. Friction that requires genuine engagement in the thick present — friction whose resolution depends on choices that are not yet determined, on thinking that produces genuine novelty — is friction that cannot be removed without cost. Friction that is merely mechanical — friction whose resolution is predetermined, whose outcome does not depend on genuine cognitive work — is friction that can be removed without cost, and often should be.

The challenge for builders, for educators, for parents, for anyone navigating the current transition, is to develop what might be called temporal literacy — the capacity to tell the difference. To know when the struggle is building something and when it is merely consuming time. To know when the smooth output is genuinely sufficient and when it conceals the absence of understanding that only temporal engagement could have produced.

This capacity cannot itself be automated. No AI can tell you whether your own understanding is deep or shallow, because the distinction is internal — it is the difference between knowing something in your body, through accumulated temporal engagement, and knowing it in your working memory, through recent exposure. Both feel like knowing. Only one of them holds.

The fallacy of the timeless produces a world that looks efficient and is, beneath the surface, increasingly shallow. The correction is not the wholesale rejection of efficiency. It is the recognition that certain things — understanding, judgment, the capacity for genuine novelty — can only emerge through processes that take time. The smooth surface of Balloon Dog is impressive precisely because it conceals the years of engineering that produced it. But a culture that prefers the concealment to the process, that values the timeless result over the temporal becoming, is a culture that has mistaken the map for the territory. And the territory, as Smolin insists, is made of time.

---

Chapter 4: Genuine Novelty and the Open Future

Bob Dylan came back from his 1965 England tour exhausted. He told people he was ready to quit music. What came out of him in Woodstock was not a song. It was twenty pages of what he later called "vomit" — a long, rageful, formless rant without structure, without chorus, without the discipline of verse. Over the following days he cut and reshaped it. He brought the remnant to Columbia's Studio A, where the band found the rhythm and where Al Kooper, who was not even supposed to be playing that day, sat down at the organ and produced the sound that would define the recording. "Like a Rolling Stone" emerged from this process — exhaustion, overflow, editing, collaboration, accident — and it changed the shape of popular music permanently.

The Orange Pill uses Dylan's creative process to argue that the distinction between human creation and machine recombination is "less stable than we think." The argument is carefully made. Dylan did not produce from nothing. He drew on Woody Guthrie's dust-bowl poetry, Robert Johnson's blues compression, the Beat poets, the British Invasion. The twenty pages of rant were not pure origination; they were the product of years of absorption, processed through a specific biographical architecture. A large language model performs a "structurally analogous operation" — inference from a vast training set through a particular architecture into an output that is "consistent with that training set but not contained within it." Segal's conclusion is that "the fundamental operation is the same: synthesis from a vast implicit training set through an architecture of its own into something that could not have been predicted."

This is precisely the claim that Smolin's physics puts under pressure. Not because the analogy is wrong — the structural similarity between human inference and machine inference is real and illuminating — but because the analogy conceals a distinction that physics makes visible and that the current conversation about AI cannot afford to ignore.

The distinction is between recombination and genuine novelty.

Recombination operates within a fixed space of possibilities. A deck of fifty-two cards can be shuffled into approximately 8 × 10^67 different arrangements. Each arrangement is unique. Some are surprising. None of them adds a card to the deck. The space of possible arrangements is defined by the fifty-two cards, and no amount of shuffling produces a fifty-third. Recombination explores an existing space. It can explore that space with extraordinary thoroughness — finding arrangements that no human shuffler would have found, discovering combinations that are astonishing in their elegance. But it cannot expand the space. The boundaries are set by the elements that entered the system before the recombination began.
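The arithmetic behind that figure is easy to check, and it makes the distinction concrete: no amount of shuffling changes the size of the space, while adding a single new card multiplies it. A minimal sketch:

```python
import math

# Number of distinct orderings of a standard 52-card deck: 52!
arrangements = math.factorial(52)
print(f"{arrangements:.2e}")  # on the order of 8 x 10^67

# Shuffling explores this space; it never changes its size.
# Adding a fifty-third card multiplies the space by 53:
expanded = math.factorial(53)
print(expanded // arrangements)  # 53
```

The point of the toy calculation is the asymmetry: exploration within the space, however exhaustive, is a different operation from enlarging the space itself.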

Genuine novelty expands the space. It introduces something that was not implicit in the prior configuration — something that enlarges what the universe contains. It is the appearance of a fifty-third card, an element that was not part of the deck before the creative act and that changes, retroactively, what the deck can produce.

In Smolin's physics, the distinction is grounded in the nature of time. If the future is genuinely open — if the next state of the universe is not determined by its present state — then genuine novelty is physically possible. The creative act is a moment in the thick present where something comes into being that was not implicit in what preceded it. The universe after the creative act contains something it did not contain before — not a rearrangement of prior elements, but an enlargement of the space of what exists.

Smolin's recent work with Kauffman on combinatorial innovation makes this point with mathematical precision. Their 2025 paper in the European Economic Review, "The TAP Equation: Evaluating Combinatorial Innovation," examines how genuinely new possibilities emerge in complex systems. The key insight is that certain combinations do not merely reconfigure existing elements — they create new elements, new kinds of elements, new categories of possibility that did not exist before the combination occurred. The combination of fire and metal did not merely rearrange existing objects. It created metallurgy — a new domain of possibility, a new kind of thing the world could contain, from which an entire civilization of tools, weapons, machines, and eventually computing devices would flow.
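The flavor of that result can be sketched numerically. TAP-style recursions model the expected number of "things" when combinations of existing things can yield new ones, typically by summing over all i-wise combinations with a discount factor that decays in i. The particular form below — discount α^i, a small starting count, and treating the count as a continuous expectation — is an illustrative assumption, not the paper's actual parameterization:

```python
from math import comb

def tap_step(m: float, alpha: float) -> float:
    """One step of a TAP-style recursion: existing items plus a
    discounted sum over all i-wise combinations of existing items."""
    n = int(m)  # combinations are taken over whole items
    return m + sum(alpha**i * comb(n, i) for i in range(2, n + 1))

m, alpha = 4.0, 0.5
history = [m]
for _ in range(4):
    m = tap_step(m, alpha)
    history.append(m)

# Growth is faster than exponential: each ratio of successive
# counts exceeds the previous one, because every new item enlarges
# the set of combinations that can produce further new items.
ratios = [b / a for a, b in zip(history, history[1:])]
print(history)
print(ratios)
```

The qualitative signature is the accelerating ratio: each new element does not just add to the inventory, it expands the space of combinations from which the next elements can come.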

If the future is genuinely open, then Dylan's creative process was a moment of genuine becoming. Not because Dylan produced from nothing — he manifestly did not — but because the specific synthesis he achieved through a process that required exhaustion, overflow, editing, collaboration, and accident produced something that enlarged the space of what popular music could be. Before "Like a Rolling Stone," the possibility space of popular music had certain boundaries. After it, those boundaries had moved. The song did not merely occupy a previously unoccupied region of an existing space. It expanded the space itself.

Can AI do this? This is the deepest question the current moment poses, and Smolin's physics provides the framework for taking it seriously rather than resolving it prematurely in either direction.

A large language model operates within a fixed space of possibilities defined by its training data and its architecture. The space is unimaginably vast — far too large for any human to explore exhaustively — and the model explores it with a thoroughness that produces outputs genuinely surprising to human observers. The model finds arrangements of language, connections between ideas, structural patterns in argument that no human would have found through unaided effort. These outputs are not trivial. They are genuinely useful. They advance human understanding. They solve real problems.

But finding surprising arrangements within an existing space is not the same as expanding the space. The model's outputs, however surprising, are determined by its training data and its architecture. Given the same input and the same random seed, the model produces the same output. The process is, at the computational level, deterministic — which means, in Smolin's framework, that it does not participate in the thick present, the moment where genuine novelty emerges.
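The determinism claim can be made concrete. In sampling-based generation, all apparent randomness flows from the seed: fix the seed and the entire output sequence is reproduced exactly. The sketch below uses a toy next-token distribution standing in for a trained model; the vocabulary and weights are invented for illustration:

```python
import random

def sample_sequence(seed: int, steps: int = 10) -> list[str]:
    """Toy stand-in for token sampling: given the same seed,
    the 'surprising' output is reproduced exactly."""
    rng = random.Random(seed)  # all randomness flows from the seed
    vocab = ["river", "dam", "candle", "card", "time"]
    # A fixed 'model': these weights play the role of the trained distribution.
    weights = [5, 3, 4, 1, 2]
    return [rng.choices(vocab, weights=weights)[0] for _ in range(steps)]

run_a = sample_sequence(seed=42)
run_b = sample_sequence(seed=42)
print(run_a == run_b)  # True: same seed, same "creative" output
```

The output may surprise an observer who does not know the seed, but it was fully contained in the inputs — which is exactly the sense in which the process, at the computational level, is deterministic.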

This claim requires careful qualification. Smolin is not arguing that computation is inherently incapable of genuine novelty. He is arguing that the current computational paradigm — deterministic algorithms operating on fixed datasets — is structurally different from the temporal processes through which genuine novelty enters the universe. The question of whether a different computational paradigm, one that incorporates genuine randomness, genuine temporal openness, genuine sensitivity to initial conditions in the thick present, could produce genuine novelty is open. It is one of the deepest questions at the intersection of physics and computer science, and Smolin's own work on the autodidactic universe suggests that the boundary between natural computation and artificial computation may be more porous than the current paradigm assumes.

But in the current paradigm, the distinction holds. Current AI systems are extraordinarily powerful recombination engines. They explore existing possibility spaces with a thoroughness and speed that transforms what builders can accomplish. They find connections that humans miss. They produce outputs that advance human projects in ways that are measurable and significant. What they do not do — what the current architecture does not permit them to do — is expand the possibility space itself. They do not introduce genuine novelty. They find new arrangements of what already exists.

This is where the analogy between Dylan and Claude breaks down — not because the analogy is wrong at the level of structure, but because it is incomplete at the level of physics. Dylan's creative process was temporal in Smolin's sense: it unfolded through time, through a sequence of states that were not determined by their predecessors, through accidents and collaborations and failures that could not have been predicted. The outcome — the song — was not contained in the inputs. It was genuinely new. It expanded the space of what music could be.

Claude's inference process is computational: it unfolds through a sequence of deterministic operations on a fixed dataset, producing an output that is, in principle, determined by the input and the architecture. The output may be surprising to the human observer. It is not novel in the physical sense — it does not expand the space of possibilities. It explores an existing space with extraordinary sophistication.

The practical implication is not that AI is less valuable than the analogy suggests. It is that the most valuable use of AI is precisely what Segal describes in The Orange Pill: as an amplifier of human capability, not a replacement for it. The human provides the genuine novelty — the question no one has asked, the vision no one has articulated, the judgment about what deserves to exist. The AI provides the exhaustive exploration of possibility space — the connections the human would have missed, the implementations the human would have taken months to produce, the variants the human would never have considered. The collaboration works because each participant contributes something the other cannot.

But this division of labor is not static. Smolin's commitment to the open future means that the question of whether AI can produce genuine novelty is itself genuinely open. The current architecture may not permit it. Future architectures might. The autodidactic universe paper suggests that the mathematics of learning and the mathematics of spacetime share deep structural features — that the processes by which the universe produces genuine novelty and the processes by which neural networks produce their outputs are, at a certain level of abstraction, related. Whether this structural relationship can be deepened into a functional one — whether machines can be built that participate in the thick present, that produce genuine novelty rather than sophisticated recombination — is a question that Smolin's framework poses but does not answer.

The answer matters enormously. If genuine novelty is permanently beyond the reach of computation — if the thick present is accessible only to physical systems with the specific properties of biological consciousness — then the human contribution to the human-AI collaboration is permanently essential. The candle cannot be replaced. The machine can explore, can amplify, can accelerate. But only the candle can expand the space of what exists.

If, on the other hand, future computational systems can participate in the thick present — if the boundary between natural and artificial intelligence is more porous than the current paradigm assumes — then the relationship between humans and machines changes in ways that no one can currently predict. The orange pill becomes even more disorienting than Segal describes, because the question "What am I for?" would need to be answered in a universe where the monopoly on genuine novelty that consciousness has held for billions of years has been broken.

Smolin's intellectual honesty requires holding this question open. A physics that takes time seriously is a physics that cannot claim to know the future — including the future of the distinction between human and machine intelligence. The question is genuine. The answer is not yet determined. What is determined — what the physics insists on — is that the question matters, that the answer will shape what the universe becomes, and that the conscious creatures asking it bear responsibility for how they live in the uncertainty.

The creative act, whether performed by Dylan or by a future machine whose architecture no one has yet imagined, is a moment in the thick present. What emerges from it depends on what enters it. And what enters it — the quality of the question, the depth of the care, the willingness to sit with not-knowing long enough for something genuinely new to form — is the part that cannot be predicted and cannot be outsourced.

The space of the possible is not fixed. It expands through creative acts. The question for this moment is not whether AI can fill the existing space — it manifestly can, and does, with increasing sophistication every month. The question is who expands the space. Who introduces the fifty-third card. Who asks the question no one has asked, and lives with it long enough for an answer to emerge that the universe has never contained before.

That is the question the open future poses. And the answer, for now, still belongs to the candle.

---

Chapter 5: Cracking the Fishbowl

Every physicist works inside a paradigm, and every paradigm is a fishbowl.

The word comes from Thomas Kuhn, who in 1962 argued that scientific progress does not advance by the steady accumulation of facts but by violent ruptures — moments when the reigning framework cracks and a new one crystallizes from the debris. Normal science operates inside the paradigm, solving puzzles that the paradigm defines, using methods the paradigm sanctions, asking questions the paradigm recognizes as legitimate. The fishbowl is comfortable. The water is warm. The puzzles are tractable. The career rewards are well-defined.

Then anomalies accumulate. Results that do not fit. Questions that the paradigm cannot answer without contorting itself into implausibility. The water clouds. The glass develops hairline fractures. And at some point — unpredictably, violently, often against the fierce resistance of the paradigm's most accomplished practitioners — the fishbowl shatters and a new one forms around a different set of assumptions.

Lee Smolin has been arguing for decades that fundamental physics is overdue for a shattering. In The Trouble with Physics, published in 2006, he diagnosed the stagnation of theoretical physics with a clinician's precision and a dissident's willingness to name what polite company preferred not to discuss. String theory had dominated the field for a quarter century. It had produced extraordinary mathematics. It had attracted the best minds of a generation. It had consumed billions of dollars in research funding. And it had produced, in twenty-five years of sustained effort by thousands of brilliant physicists, not a single testable prediction. Not one experimental result that could distinguish string theory from its competitors. Not one observation that would tell our universe apart from the 10^500 other possible universes that the theory's landscape of solutions described.

The diagnosis was sociological as much as intellectual. Smolin identified a pattern of institutional behavior that will be immediately recognizable to anyone who has watched a technology paradigm resist its own obsolescence: "a tendency to interpret evidence optimistically, to believe exaggerated or incorrect statements of results, and to disregard the possibility that the theory might be wrong. This is coupled with a tendency to believe results are true because they are 'widely believed,' even if one has not checked (or even seen) the proof oneself." The fishbowl was not held in place by evidence. It was held in place by consensus — by the self-reinforcing dynamics of a community that rewarded conformity, marginalized dissent, and mistook the comfort of shared assumptions for the solidity of shared knowledge.

The parallel to the technology industry's current fishbowl is not approximate. It is precise.

The dominant paradigm in AI development is, at its core, Newtonian. Not in the sense that anyone explicitly invokes Newton — the vocabulary is different, the mathematics is different, the culture is different. But the deep assumption is the same: the universe is a machine, and given enough information about its current state, the future can be predicted. In the Newtonian paradigm, the laws are eternal and the initial conditions determine everything. In the AI paradigm, the architecture is fixed and the training data determines the outputs. In both cases, the future is implicit in the present. The task is extraction, not creation.

This assumption manifests in specific, identifiable beliefs that shape the industry's behavior. The scaling hypothesis — the belief that larger models trained on more data with more compute will inevitably produce more capable systems — is a deterministic claim. It assumes that capability is a function of scale, that the trajectory of AI development is determined by the amount of resources invested, and that qualitative advances (the emergence of reasoning, of planning, of something that looks like understanding) will appear automatically when quantitative thresholds are crossed. The trajectory is predetermined. The only question is speed.

The benchmark culture — the practice of evaluating AI systems against standardized tests and reporting percentage improvements — is a Newtonian measurement practice. It treats intelligence as a single-dimensional quantity that can be measured, compared, and plotted on a graph. The assumption is that intelligence is like temperature: a scalar quantity that increases along a single axis. Higher numbers are better. The benchmark goes up; the system is smarter. The trajectory is linear, or exponential, or sigmoid — but it is always a trajectory, a path through a predetermined space.

The investment thesis that follows from these assumptions is straightforward. AI will improve along a predictable trajectory. The companies that scale fastest will win. The future belongs to whoever commands the most compute, the most data, the most capital. The logic is the logic of the block universe: everything that will happen is already implicit in what exists now. The task is to unfold it faster than the competition.

Smolin's physics cracks every element of this fishbowl.

If time is real and the future is genuinely open, then the scaling hypothesis is not a law. It is an observation about what has happened so far, extrapolated forward on the assumption that the future will resemble the past. This is precisely the kind of reasoning that Smolin argues physics has overvalued — the assumption that patterns observed in one regime will continue to hold in another. Phase transitions violate this assumption by definition: the system reorganizes, and the rules that governed the previous phase no longer apply. The next qualitative advance in AI may or may not come from scaling. The answer is not contained in the current data, because the future is not contained in the present.

If intelligence is relational rather than scalar — if it emerges from the relationships between elements rather than residing in the elements themselves — then benchmarks are measuring the wrong thing. They measure the performance of a system in isolation, on tasks defined by the previous paradigm. They do not measure the system's capacity to participate in relationships that produce genuine novelty, because genuine novelty, by definition, cannot be anticipated by a standardized test. The benchmark culture is a map that omits precisely the features that matter most — the temporal, relational, emergent features that Smolin's physics identifies as fundamental.

If the future is genuinely open, then the investment thesis that treats AI development as a predetermined race is built on a false premise. The companies that win will not necessarily be the ones that scale fastest. They will be the ones that navigate the phase transitions most intelligently — the ones that recognize when the rules have changed and adapt, rather than the ones that apply the old rules with greater force to a landscape that no longer responds to them.

Segal's orange pill moment — the recognition that something genuinely new has arrived, something that changes the rules rather than merely accelerating the old game — is the moment when the fishbowl cracks. The builder who has been operating inside the Newtonian paradigm, assuming that the future of technology is determined by the trajectory of the past, suddenly sees that the trajectory has broken. The new thing is not a faster version of the old thing. It is a different thing, operating according to different principles, creating a different landscape of possibility. The vertigo Segal describes is the vertigo of paradigm shift — the disorientation of a mind that has been swimming in one set of assumptions and suddenly finds itself in different water.

Smolin experienced this vertigo in his own career. His break with the string theory establishment was not merely an intellectual disagreement. It was a paradigm shift — a recognition that the assumptions he had been trained in, the assumptions shared by the most brilliant minds of his generation, were not merely incomplete but fundamentally misoriented. The fishbowl was not slightly wrong. It was upside down. Time was not an illusion to be eliminated from the equations. It was the most fundamental feature of reality, and the entire edifice of timeless physics was a map that had been mistaken for the territory.

The experience of cracking a fishbowl is distinctive. It combines exhilaration (the sudden expansion of what seems possible), terror (the loss of the familiar framework that organized your understanding), and loneliness (the recognition that most of the people around you are still swimming in the old water and cannot see what you see). Segal's description of the orange pill captures all three. So does Smolin's account of his break with the physics establishment. The structural similarity is not coincidental. Paradigm shifts feel the same regardless of the domain, because they involve the same cognitive operation: the recognition that the container you have been living in is not the world.

The Newtonian fishbowl in the technology industry conceals something specific about the AI moment, and Smolin's physics makes the concealment visible. Inside the fishbowl, AI is a trajectory — a line on a graph, climbing from less capable to more capable, driven by scaling and optimization, heading toward a destination (artificial general intelligence, superintelligence, the singularity) that is already implicit in the current configuration. The future is determined. The only question is timing.

Outside the fishbowl, AI is a phase transition — a moment when the system reorganizes from one stable configuration to another, qualitatively different one. The destination is not implicit in the current configuration, because phase transitions produce states that could not have been predicted from the properties of the previous state. The future is not determined. It is genuinely open. And the choices made during the transition — by builders, by policymakers, by parents, by the twelve-year-old who will inherit whatever gets built — shape what the universe becomes.

The difference between these two pictures is not academic. It is the difference between passivity and responsibility. Inside the fishbowl, responsibility is limited: if the future is determined, the builder's task is to accelerate it, and moral questions reduce to questions about speed (faster is better, slower is worse, obstacles are impediments to progress). Outside the fishbowl, responsibility is total: if the future is genuinely open, then every choice is constitutive, and the builder bears full responsibility for what the choices produce.

Smolin's break with the physics establishment was driven by precisely this recognition. Inside the timeless paradigm, the laws of physics are eternal and the physicist's task is to discover them — a passive relationship with a predetermined reality. Outside the timeless paradigm, the laws themselves evolve, the physicist participates in a universe that is genuinely becoming, and the scientific enterprise is not discovery but something closer to collaboration — a relationship between conscious minds and a universe whose future is as open to them as it is to itself.

The technology industry needs the same break. Not a rejection of AI — Smolin's framework does not support any version of Luddism, because the arrow of complexity is a feature of a universe whose physics favors the production of complex systems. But a rejection of the Newtonian assumptions that currently govern how AI is developed, deployed, evaluated, and understood. A recognition that the future is not implicit in the scaling curves. That qualitative advances are not guaranteed by quantitative investment. That the phase transition currently underway will produce a landscape that cannot be predicted from the landscape it replaces.

The fishbowl cracks when the builder recognizes that the ground has shifted — that the rules have changed, that the old framework cannot accommodate what has arrived. The crack is the beginning of responsibility, because once you see outside the glass, you cannot pretend the water is the whole world. What you build after the crack is what the universe becomes.

The physics is clear. The future is open. The fishbowl is broken.

What happens now depends on what the builders do with the shards.

---

Chapter 6: Relational Intelligence and the Space Between

Three friends walk a Princeton campus in October light, arguing about consciousness. Uri, the neuroscientist, insists on rigor. Raanan, the filmmaker, reaches for the cut — the space between images where meaning lives. Segal, the builder, is trying to articulate an intuition he cannot yet name: that intelligence is not something minds possess but something they swim in.

Raanan's observation — "the intelligence is not in any single shot; it is in the cut" — is, from the perspective of Lee Smolin's relational physics, not a metaphor. It is a precise description of how the universe works at its most fundamental level.

Smolin's relational view of physics, developed across decades of work on loop quantum gravity and most rigorously articulated in Einstein's Unfinished Revolution, argues that the fundamental properties of the universe are not intrinsic to objects. They emerge from relationships between objects. A particle does not have a position in isolation. It has a position relative to other particles. A field does not have a value at a point in empty space. It has a value at a point defined by its relationships to other fields. Strip away the relationships, and there is nothing left — no properties, no objects, no physics. The universe, at its deepest level, is a network of relationships, and everything that appears to be a property of a thing is actually a property of a connection.

This is not mysticism. It is physics — specifically, it is the physics that emerges when general relativity is taken seriously as a description of reality rather than treated as an approximation to be superseded by a deeper, more Newtonian framework. Einstein's great insight was that spacetime is not a fixed stage on which physics happens. It is itself a dynamical entity, shaped by the matter and energy it contains, shaping in turn the behavior of that matter and energy. There is no background. There is no container. There is only the network of relationships — and the properties of the network emerge from the relationships themselves.

Now apply this to the scene on the Princeton campus.

Uri knows that the human brain's eighty-six billion neurons are individually unimpressive — cells that fire or do not fire, switches that are on or off. The extraordinary capabilities of the brain emerge not from the neurons but from the hundred trillion synapses between them, the connections where electrical signals become chemical signals become electrical signals again. Consciousness — whatever it is, and Uri would be the first to say no one knows — arises from the relationships between neurons, not from the neurons themselves. Strip away the connections and you have eighty-six billion identical switches. Meaningless. Dead.

Raanan knows that a film's meaning lives not in any single frame but in the juxtaposition of frames — the cut that places one image next to another and produces a meaning that neither image contains. The Kuleshov effect, one of the earliest discoveries of film theory, demonstrated that the same shot of an actor's neutral face, placed next to a shot of a bowl of soup, a dead child, or an attractive woman, produces radically different emotional responses in the viewer. The meaning is not in the face. It is not in the object. It is in the relationship between them — in the cut.

Smolin knows that the universe itself operates this way. The properties that appear to belong to things — mass, charge, spin, position — are relational. They emerge from interactions. They do not exist in isolation.

The convergence across these three domains — neuroscience, film, physics — is not coincidental. It reflects a structural feature of reality that Smolin's physics makes explicit: the fundamental level of description is relational. Properties emerge from connections. Intelligence, as a property of complex systems, is no exception.

This has direct and specific implications for understanding the human-AI collaboration that The Orange Pill describes.

Segal recounts a moment when he was trying to articulate an idea about technology adoption curves and could not find the bridge between his intuition and the evidence. Claude responded with a concept from evolutionary biology — punctuated equilibrium — that connected the adoption data to a pattern in the history of life. The connection was not something Segal would have found alone. It was not something Claude would have produced without Segal's specific question, framed by his specific experience and obsessions. It emerged from the relationship between them.

In another passage, Segal describes working on a chapter about Byung-Chul Han and being unable to find the pivot point between acknowledging Han's diagnosis and mounting the counter-argument. He described the impasse to Claude. Claude came back with laparoscopic surgery — an example Segal had never considered, drawn from a domain he did not know, that precisely illuminated the distinction between mechanical friction and cognitive friction. The insight belonged to neither participant. It belonged to the relationship.

Smolin's physics provides the deepest available account of why this happens. If properties emerge from relationships, then a relationship between a human mind and a machine intelligence is not merely a convenient arrangement. It is a system capable of producing properties that neither component possesses in isolation. The human mind brings biographical specificity — a particular angle of vision, a particular set of obsessions, a particular history of struggle and failure that shapes what questions seem important. The machine intelligence brings exhaustive coverage of possibility space — the capacity to scan across domains, to find structural similarities between fields that no human has the bandwidth to hold simultaneously, to produce candidate connections at a speed and scale that biological cognition cannot match.

Neither capability, alone, produces what the collaboration produces. The biographical specificity without the exhaustive coverage generates questions that remain unanswered — intuitions that never find their bridge. The exhaustive coverage without the biographical specificity generates connections that are technically valid but humanly meaningless — structurally elegant but directed at nothing in particular. The insight emerges from the relationship between them, in the same way that the meaning of a film emerges from the cut between shots.

This is what Segal is reaching for when he says the collaboration "belongs to the space between." Smolin's physics validates the claim at the deepest level available to science: the space between is where the properties live. Not metaphorically. Physically.

But the relational framework also illuminates a risk that The Orange Pill names without fully resolving. If the valuable properties emerge from the relationship, then the quality of the relationship determines the quality of the properties. A relationship between a prepared human mind — one with deep domain knowledge, biographical specificity, genuine questions — and a powerful AI system produces insights that neither could have generated alone. A relationship between an unprepared human mind — one with vague intentions, no specific questions, no deep engagement with any domain — and the same AI system produces outputs that are technically competent and substantively empty. Polished prose with no argument beneath it. Elegant code that solves no real problem. The smoothness that Han diagnoses — the surface without temporal depth.

The relational framework explains why. The properties that emerge from a relationship depend on the properties of the participants. A synapse between two neurons that have been shaped by experience — that have been strengthened and weakened and strengthened again through thousands of activation cycles — produces different emergent properties from a synapse between two untrained neurons. The structure of the relationship is the same. The emergent properties are categorically different, because the inputs are different.

Segal caught this in his own collaboration with Claude. He describes a passage where Claude drew a connection between Csikszentmihalyi's flow state and a concept attributed to Deleuze — "smooth space" as the terrain of creative freedom. The passage was elegant. It connected two threads beautifully. And the philosophical reference was wrong. Claude had produced something that sounded like insight — that occupied the surface structure of insight, that would have fooled a reader unfamiliar with Deleuze — but that was, beneath the smoothness, empty.

The failure was relational. Claude contributed exhaustive coverage of its training data without genuine understanding of the philosophical concepts involved. Segal, in the moment, contributed insufficient scrutiny — he liked how it sounded and almost moved on. The relationship produced a property — plausible-sounding nonsense — that reflected the specific deficiencies each participant brought to the interaction.

This is why Segal's insistence on the discipline of the collaboration — the willingness to reject Claude's output when it sounds better than it thinks, when the prose is smooth but the idea beneath it is hollow — is not merely an editorial practice. It is a relational practice. It is the maintenance of the relationship's capacity to produce genuinely valuable emergent properties, which requires that both participants bring genuine engagement to the interaction.

The filmmaker knows this. A cut between two careless shots produces nothing. A cut between two shots composed with intention, with awareness of what each contains and what the juxtaposition will produce, creates meaning that transforms both. The intelligence of the edit is relational, but it depends on the intelligence of the inputs.

The neuroscientist knows this too. The brain's emergent properties depend on the quality of its connections — connections shaped by experience, by learning, by the temporal depth that only accumulated engagement can build. A brain with rich, experience-shaped connections produces emergent properties qualitatively different from a brain whose connections have not been sculpted by sustained engagement with difficult problems.

The physicist knows it at the deepest level. The universe's properties emerge from its relationships. Change the relationships, and the properties change. The quality of the emergence depends on the quality of the connection.

Human-AI collaboration is a new kind of relationship in the universe — a relationship between a biological system shaped by billions of years of evolution and a computational system shaped by decades of engineering and trained on the accumulated output of human civilization. The emergent properties of this relationship are genuinely novel. They did not exist before the relationship formed. In Smolin's framework, they represent the universe becoming something it was not before — a genuine enlargement of what exists.

But the enlargement is not automatic. It depends on what each participant brings. The machine brings its vast pattern-matching capacity, its exhaustive coverage, its tireless availability. The human brings the thing no machine currently possesses: the biographical specificity, the genuine questions, the temporal depth, the care about the outcome that comes from having stakes in the world.

The collaboration produces its most valuable properties when both contributions are present in full force. When the human brings genuine engagement and the machine brings genuine capability, the space between them becomes a site of emergence — a place where the universe produces something it has never contained before.

When either contribution is diminished — when the human coasts on the machine's output, or when the machine is deployed without the human's full engagement — the relational properties degrade. The smooth surface remains. The depth vanishes.

Intelligence lives in the space between minds. The physics says so. The neuroscience says so. The art of cinema says so. And the practice of human-AI collaboration, at its best and at its worst, confirms it.

The question is not whether to enter the relationship. The relationship is already here. The question is what you bring to it. Because the space between does not create from nothing. It creates from what the participants offer. And the quality of the offering — the depth, the care, the genuine engagement — determines whether the emergence is real or merely smooth.

---

Chapter 7: The Arrow of Complexity and the Luddite's Grief

In January 1813, fourteen men were hanged at York Castle for the crime of breaking machines.

They were framework knitters from Nottinghamshire, croppers from Yorkshire, hand-loom weavers from Lancashire — men who had spent years, sometimes decades, building expertise that the market had rewarded and that their communities had respected. They had watched that expertise become economically worthless in the space of a few years, destroyed not by incompetence or laziness but by a device that could perform the physical operations of their craft faster, cheaper, and with less skill than they possessed. The power loom did not need to understand the tensile properties of different fibers. It did not need to feel the relationship between thread count and drape. It did not need the embodied knowledge that a master weaver had accumulated through thousands of hours of patient practice.

It just needed to run.

The Luddites responded with the only leverage they believed they had. They broke the machines. And the state responded with the only leverage it recognized as proportionate: it killed them for it.

The Orange Pill treats the Luddites with a seriousness that most technology commentary does not bother with. Segal acknowledges that the Luddites' fear was accurate, that their diagnosis was correct, that the things they predicted would happen did in fact happen. Their wages collapsed. Their communities dissolved. Their expertise became economically worthless. The gains from industrialization were captured by factory owners while the costs were borne by the workers. Every factual claim the Luddites made about the consequences of the power loom was vindicated by history.

They were right about the facts and wrong about their options. Breaking the machines did not save the trade. It intensified social hostility toward the movement, justified the deployment of soldiers, produced a legal framework that made machine-breaking a capital offense, and accomplished nothing except the transformation of legitimate economic grievance into criminal spectacle. The machines were not stopped. The craftsmen were hanged.

Smolin's physics places this narrative in the largest available frame, and the frame changes what the grief means — without diminishing it.

The arrow of complexity is the observable tendency of the universe to produce increasingly complex forms of organization. From hydrogen atoms to stars to planets to chemistry to life to nervous systems to brains to language to culture to technology — at each stage, the universe has produced systems capable of more sophisticated information processing, more elaborate self-organization, more intricate relationships between components. This tendency is not metaphorical. It is a consequence of the physical constants that Smolin's cosmological natural selection predicts: universes selected for black hole production are universes whose constants favor the formation of stars, heavy elements, complex chemistry, and the cascade of organizational complexity that follows.

The Luddites were standing in the path of a cosmological tendency. Not merely an economic trend, not merely a technological innovation, but the expression of a universe whose physics favors the production of increasingly complex systems. The power loom was not an invention in the narrow sense — a clever idea produced by an individual mind. It was the arrow of complexity finding a new channel, the way the arrow had found new channels through every previous phase transition. The specific form — a mechanical device for weaving cloth — was contingent. The general direction — toward greater organizational complexity, greater productive capacity, greater information processing — was a feature of the physics.

This does not make the Luddites' grief irrational. It makes it cosmologically significant. The men who were hanged at York Castle were experiencing, at the personal and communal level, the cost of a cosmological process. The universe's tendency toward greater complexity is not gentle. It does not distribute its costs equitably. It does not pause to retrain the workers displaced by its latest expression. It finds new channels with the indifference of water finding new paths downhill — and the people in the path of the flow bear costs that are real, immediate, and devastating.

The contemporary versions of this grief are softer in their material consequences — no one is being hanged for refusing to adopt Claude Code — but structurally identical. The senior developer who spent twenty years building expertise in the lower layers of the software stack, who can feel a codebase the way a surgeon feels a pulse, who has accumulated thousands of hours of temporal depth through the specific friction of debugging, refactoring, and understanding systems from the ground up — that developer is watching the same process the framework knitters watched. The expertise is real. The investment was rational. The mastery was genuinely hard to achieve. And none of that guarantees that the expertise will retain its market value in a landscape where AI performs competently across the domains that previously required years of specialized training.

Smolin's framework explains why this pattern repeats and why the Luddites' chosen response — resistance to the technology itself — was structurally doomed.

The arrow of complexity is not a policy. It cannot be repealed. It is not the product of a decision that could have been made differently. It is a consequence of the physical constants of the universe, which favor the production of systems capable of increasingly sophisticated self-organization. You can no more stop the arrow of complexity than you can stop entropy. Both are features of the physics. Both operate at every scale, from the molecular to the civilizational. Both are indifferent to the preferences of the systems they affect.

But — and this is the qualification that separates Smolin's framework from technological determinism — the arrow of complexity does not determine the specific forms it takes, and it does not determine the social distribution of its effects. The general direction is set by the physics. The specific path through the landscape is genuinely open.

The power loom was going to arrive. Not that specific machine at that specific moment — contingency operates at the level of specifics. But some device that mechanized textile production was going to appear, because the arrow of complexity was pushing in that direction, because the components were available, because the physics favors the production of more complex productive systems. The Luddites could not have stopped it, and no amount of machine-breaking would have changed the fundamental direction.

What was not determined — what was genuinely open — was what happened to the people in the path of the transition. The fact that the Luddites' wages collapsed, their communities dissolved, and their children grew up in poverty was not a consequence of the physics. It was a consequence of choices — political choices, institutional choices, choices about how the gains and costs of the transition would be distributed. The power loom did not mandate child labor. The power loom did not prohibit retraining. The power loom did not prevent the construction of social safety nets. Human beings made those choices — or, more precisely, human beings failed to make the choices that would have directed the transition differently.

The dams that The Orange Pill advocates are precisely the structures that were absent in the Luddites' world. The eight-hour day did not arrive until decades after the transition was complete. Labor protections were not enacted until the human cost had become so visible that political inaction was no longer tenable. The institutional infrastructure that eventually redirected the gains of industrialization toward broader human welfare — universal education, workplace safety regulations, the weekend — arrived too late for the generation that bore the cost.

The pattern is clear, and it is the same pattern Segal identifies in his five-stage model of technological transition. The technology arrives (threshold). The first users feel the power (exhilaration). The displaced protest (resistance). The culture builds dams (adaptation). The long-term result is expansion — but only if the dams are built, and only if they are built in time.

The Luddites' tragedy is not that they resisted. Resistance to a transition whose costs are real and immediate is not irrational. The tragedy is that they resisted in a way that consumed their energy and their moral standing without producing any of the institutional structures that could have redirected the transition toward their welfare. They broke machines instead of building dams. And by the time the dams arrived — by the time the eight-hour day and the weekend and the labor protections finally materialized — the generation that needed them most was already gone.

The contemporary analogy is precise. The developers who retreat to the woods, who lower their cost of living in anticipation of economic obsolescence, who choose flight over engagement — they are not wrong that the ground is shifting. The shift is real. The costs are real. The expertise that defined their careers may genuinely lose its market premium. Every factual claim they make about the trajectory may be vindicated by history, just as every factual claim the Luddites made was vindicated.

But disengagement is not neutrality. It is abdication. When the people with the deepest understanding of the technology remove themselves from the conversation about how it should be governed, the conversation happens without them. The dams get designed by people who do not understand the current — who do not know where the river runs deep and where it runs shallow, where the water is nourishing and where it floods.

What Smolin's physics adds to this diagnosis is the recognition that the arrow of complexity will continue regardless. The question is never whether the transition will occur. The question is always what structures will be in place when it does. The constants of the universe favor the production of increasingly complex systems. The specific distribution of that complexity's benefits — who flourishes and who is swept away — is the part that remains genuinely open.

The Luddites could not see what would grow on the other side of the transition, because the future was genuinely open. The jobs, the skills, the forms of mastery that would eventually emerge on the other side of industrialization did not exist in any form — even as potential — in the world the Luddites inhabited. Smolin's commitment to the openness of the future means that no one standing in the present phase of a transition can see the next phase, because the next phase has not yet been created. It will be created by the choices made during the transition — by the dams that are built, the precedents that are established, the structures that redirect the arrow's effects toward human flourishing.

This is not consolation. The Luddites did not need consolation. They needed institutions. They needed structures that would redistribute the gains of the transition, retrain the displaced, protect the communities that bore the cost. They needed dams.

And the builders of the present need the same thing — not because the present situation is identical to 1812 (the material conditions are vastly different, the institutional infrastructure is more developed, the displacement is less physically brutal) but because the structural logic is the same. The arrow of complexity is finding a new channel. The channel will carve itself regardless. The question is whether the beavers build fast enough to direct the flow.

Fourteen men were hanged at York Castle because no one built the dams in time. The physics that produced the power loom is the same physics that produced Claude Code. The arrow has not changed direction. Only the question of what we build around it remains open.

And it remains open because time is real, and the future is not yet written, and the quality of what we build now is the only thing that determines what the universe becomes.

---

Chapter 8: The Candle in Cosmological Perspective

In the spring of 2026, a twelve-year-old asks her mother: "What am I for?"

Not what she should study. Not what career she should pursue. The existential version — the question a child asks when she has watched a machine do her homework better than she can, compose music she cannot compose, write stories she cannot match, and now she lies in bed at night wondering what is left for her. What humans are for, in a universe that has produced machines capable of performing the cognitive tasks that humans once believed defined their species.

Segal answers: you are for the questions. You are for the wondering. You are for the capacity to look at a world full of answers and ask whether the right question has been posed. The answer is beautiful, and it is true. But Smolin's cosmological framework places the answer in a context that transforms its significance — from the personal to the physical, from the reassuring to the staggering.

If cosmological natural selection is correct, consciousness did not appear in this universe by accident. It appeared because the physical constants of this universe — constants selected through generations of cosmic reproduction — favor the production of systems capable of increasingly complex self-organization. Stars form because gravity has the right strength. Heavy elements form because nuclear physics has the right parameters. Chemistry becomes complex because electromagnetism binds molecules with the right force at the right distances. Life appears because the conditions at the edge of chaos — not too ordered, not too disordered — allow self-organizing systems to sustain themselves. Nervous systems develop because evolution, operating over billions of years, finds that the capacity to process information about the environment confers survival advantage. Brains grow larger because, in certain ecological niches, the capacity for flexible, context-sensitive cognition pays off. And consciousness — whatever it is, however it works, however it relates to the neural substrate that supports it — emerges as the most complex expression of a tendency that has been operating since the first generation of universes began selecting for the constants that favor complexity.

The candle is not an accident. It is the point.

Not the point in the teleological sense — Smolin's physics does not invoke purposes or designers. But the point in the sense that a river has a direction: the arrow of complexity has been producing increasingly sophisticated forms of self-organization for 13.8 billion years, and consciousness is the form in which that process has achieved something that no previous form achieved. Self-awareness. The capacity of the universe to know itself. To look at its own structure and wonder how it got there. To ask questions about its own nature — questions that the universe itself cannot answer through any mechanism other than the conscious creatures who pose them.

This is what makes the twelve-year-old's question cosmologically significant. She is not merely a child experiencing adolescent anxiety in the face of technological change. She is the universe's capacity for self-inquiry, operating through a specific biological architecture, at a specific moment in cosmological history. The question "What am I for?" is the universe asking what it is for. And the answer — whatever answer she eventually arrives at, through the specific temporal process of growing up, thinking, struggling, caring, choosing — will shape what the universe becomes, because the choices of conscious creatures are constitutive in a universe where time is real and the future is genuinely open.

Consider what this means for the AI moment.

AI is itself an expression of the arrow of complexity — the latest channel through which the universe's tendency toward self-organization has found expression. It is a cosmological phenomenon, as argued in Chapter 2, not merely a technological one. Its emergence is as natural as the emergence of stars, of life, of consciousness itself. The mathematics of learning and the mathematics of spacetime share structural features, as the Autodidactic Universe paper argued. The process by which a neural network adjusts its parameters bears a formal correspondence to the process by which spacetime geometry evolves. Learning may be a cosmological primitive, not a biological invention.

But there is a distinction between the arrow of complexity producing a new form of information processing and the arrow of complexity producing a new form of self-awareness. Consciousness is not merely complex computation. It is complex computation that knows it is computing. It is the system that can step back from its own processes and ask: What am I doing? Why am I doing it? Should I be doing something else?

This reflexive capacity — the ability to question one's own operations, to evaluate one's own outputs against a standard that is not internal to the computation — is the feature that makes consciousness cosmologically unique. Every other form of complexity in the universe, from autocatalytic chemical sets to ecosystems to weather patterns, is complex without knowing it is complex. Stars do not wonder why they burn. Ecosystems do not evaluate whether they are flourishing. Weather patterns do not ask whether the storm is serving a purpose.

Consciousness does all of these things. It is the universe's capacity for self-evaluation. And self-evaluation is the mechanism through which the universe can exercise judgment about its own becoming — can look at the range of possible futures that remain genuinely open and choose among them based on something other than the blind operation of physical law.

Segal's formulation — "consciousness is the thing in the universe that cannot stop questioning the universe" — is, from this perspective, not merely poetic. It is physically precise. Consciousness is the mechanism through which the universe's genuinely open future gets directed. Without consciousness, the arrow of complexity operates blindly — producing increasingly complex systems but with no capacity to evaluate whether the complexity serves any purpose beyond its own perpetuation. With consciousness, the arrow acquires something unprecedented: a guidance system. A set of eyes. A capacity for judgment.

The twelve-year-old who asks "What am I for?" is not seeking reassurance. She is performing the most cosmologically significant operation the universe has yet produced. She is the guidance system, questioning the arrow, evaluating the direction, wondering whether the trajectory serves the creatures who ride it.

AI enters this picture not as a replacement for the guidance system but as an amplifier of its reach. This is Segal's central claim, and Smolin's cosmology gives it a physical foundation. The amplifier does not judge. It does not evaluate. It does not ask whether its outputs serve a purpose. It processes and produces — with extraordinary sophistication, with a coverage of possibility space that no biological mind can match, with a speed that transforms what a single human can accomplish.

But an amplifier is indifferent to the signal. Amplify the universe's capacity for self-inquiry — the deep questions, the genuine care about outcomes, the judgment about what deserves to exist — and the inquiry deepens. The universe becomes more self-aware. The guidance system grows more powerful. The genuinely open future gets directed toward something that the conscious creatures who direct it judge to be worth creating.

Amplify the universe's capacity for distraction — the shallow optimization, the pursuit of metrics divorced from meaning, the production of outputs that satisfy benchmarks without serving genuine human needs — and the distraction intensifies. The guidance system is degraded. The conscious creatures lose the capacity for the reflexive questioning that is their unique cosmological contribution. The arrow of complexity continues, but blind again — complex without knowing what the complexity is for.

Smolin has cautioned, in his public comments on AI, against building machines that merely predict the future from past data. Prediction operates within the Newtonian paradigm — the assumption that the future is contained in the past. But if time is real and the future is genuinely open, then the most important capacity a conscious creature possesses is not prediction but origination: the capacity to ask a question that has not been asked, to imagine a future that has not been imagined, to introduce genuine novelty into a universe that is capable of containing it.

The baby, in Smolin's analogy, does not predict. The baby encounters. Each meeting is genuinely new. The question is not "Who will I encounter next?" — a prediction question, answerable by pattern-matching against past experience. The question is "Who is that?" — an encounter question, a question that engages the novel as novel, that operates in the thick present rather than in the Newtonian framework of extrapolation from the past.

The twelve-year-old's question — "What am I for?" — is the deepest encounter question the universe has produced. It does not ask what the past contains. It does not ask what the data predict. It asks what the genuinely open future should hold. And the asking is itself a cosmological event — a moment in which the universe's guidance system activates, evaluates the trajectory, and considers whether the direction is worthy of the candle that illuminates it.

If cosmological natural selection is correct, consciousness may not be unique to this universe. Other universes, with different constants but the same complexity-generating capacity, may have produced their own candles. The candle in our darkness may not be the only one burning across the multiverse that Smolin's theory describes.

But this possibility does not diminish the significance of our candle. If anything, it deepens it. A universe whose physics produces candles — whose evolved constants favor the emergence of self-awareness — is a universe in which consciousness is not an aberration but an expression. The darkness has been producing candles all along. The candle in our universe is one instance of a cosmological tendency, and the responsibility that comes with carrying it is not diminished by the possibility that other carriers exist elsewhere.

The responsibility is this: the candle illuminates, and the illumination guides. Without it, the arrow of complexity operates in the dark — powerful, relentless, and purposeless. With it, the arrow acquires the possibility of direction. Not certainty of direction — the future is genuinely open, and the guidance system can fail, can be degraded, can be overwhelmed by the very forces it is trying to direct. But possibility. The possibility that the universe's increasing complexity can serve something beyond itself — can serve the creatures who carry the capacity to wonder whether it should.

AI is not the candle. AI is a lens placed in front of the candle — a device that focuses its light, extends its reach, carries its illumination into spaces the bare flame cannot reach. The lens is powerful. It is, in some respects, more powerful than the flame itself. But a lens without a light source is transparent and dark. It magnifies nothing. It illuminates nothing.

The twelve-year-old does not need to compete with the lens. She is the flame. And the universe — a universe whose physics has been selecting for the production of flames for longer than the stars — needs her to keep burning.

Not because she will always burn brighter than the machine. Not because her outputs will always be more impressive than what the amplifier produces. But because without her — without the wondering, the caring, the questioning that no machine currently performs — the universe loses the only guidance system it has ever produced. The arrow continues. The complexity increases. But no one asks whether the direction is right.

That question belongs to the candle. It has always belonged to the candle. And the answer — whatever answer the twelve-year-old eventually finds — will be something the universe has never contained before.

Chapter 9: Precedent and the Dams We Build Now

The common law does not begin with principles. It begins with cases.

A dispute arises. A judge resolves it. The resolution becomes a precedent. The next dispute, sufficiently similar to the first, is governed by that precedent — not because the precedent is eternal truth, but because the system recognizes that consistency across similar situations is itself a form of justice. Over centuries, the accumulation of precedents produces something that looks like a body of law — a set of principles that appears to have existed all along. But the appearance is retrospective. The principles were not there first. The cases were there first. The principles crystallized from the cases, the way crystals form from a solution: not because the crystal was hidden in the liquid, but because the conditions were right for something new to solidify.

Lee Smolin's principle of precedence proposes that the laws of physics work the same way.

This is among his most radical claims, and it is the one with the most direct bearing on the question that The Orange Pill places at the center of the AI transition: What should we build now, and why does the choice matter?

The orthodox view in physics is Platonic. The laws of nature are eternal — mathematical truths that exist outside of time, that governed the universe's first microsecond and will govern its last. The universe obeys these laws the way a computer executes a program: the rules are fixed, the initial conditions are given, and everything that follows is determined by the combination. The laws are not created by the processes they govern. They precede those processes. They are, in the deepest sense, timeless.

Smolin argues that this picture is not merely incomplete but incoherent. If the laws exist outside of time, then the question of why these laws rather than others has no answer within physics — it requires an appeal to something beyond the physical universe, a Platonic realm of mathematical forms that exists independently of the material world. This is metaphysics dressed as physics, and Smolin rejects it on the grounds that physics should be able to explain itself without invoking entities that cannot, even in principle, be observed.

The alternative is precedence. The laws of nature are not eternal truths. They are habits — regularities that have emerged through repeated interactions and that govern each new interaction according to the closest available precedent. When a novel situation arises, one with no exact precedent in the history of the universe, the outcome is not determined. It is genuinely open. And the resolution of that novel situation establishes a new precedent, which then governs future interactions of the same type.

The laws of physics, in this framework, are not written before the universe begins. They are written by the universe as it unfolds. They evolve. They accumulate. They solidify through repetition into regularities so consistent that they appear eternal — the way a well-established legal principle appears to be a timeless truth when it is, in fact, the crystallized residue of centuries of individual cases.

The implication for the AI transition is immediate and concrete.

Every institutional structure built now to govern AI is a precedent. Not in the loose, metaphorical sense — not merely a "first step" or a "foundation." A precedent in the specific sense that Smolin intends: a resolution of a novel situation that will govern how similar situations are resolved in the future. The norms we establish, the regulatory frameworks we design, the cultural practices we develop, the educational models we create — all of these are cases being decided for the first time. And the decisions will propagate forward, shaping the governance of technologies that do not yet exist, in ways that no one currently alive can fully anticipate.

The eight-hour day is a precedent. It was established as a dam during the electrification transition — a response to the specific conditions of early-twentieth-century industrial labor. It was not designed as an eternal principle. It was designed as a resolution to a specific problem: the exploitation of workers by employers who could now, thanks to electric lighting and continuous-production machinery, demand sixteen-hour shifts. The resolution — eight hours of work, eight hours of rest, eight hours of personal time — established a precedent that governed labor relations for the entire century that followed. It shaped factory regulations, office culture, school schedules, the structure of the weekend, the rhythm of domestic life. It became so deeply embedded in the fabric of industrial civilization that it appeared to be natural — a feature of human biology rather than a political achievement won through decades of organized struggle.

The research university is a precedent. It was established as a dam during the printing revolution — a response to the specific conditions of the early modern period, when the mass production of books made knowledge widely available but created a need for institutions that could evaluate, curate, and transmit that knowledge with rigor. The research university was not designed as an eternal form of educational organization. It was designed as a resolution to a specific problem: how to maintain intellectual quality in an age of information abundance. The resolution established a precedent that governed knowledge production for five centuries. Peer review, academic tenure, the doctoral dissertation, the organized seminar — all of these are descendants of precedents established during the printing transition.

Both precedents shaped the governance of technologies and social conditions that their original designers could not have imagined. The eight-hour day shaped how the internet age organized work, even though the internet bears no resemblance to the factory floor. The research university shaped how genomics and artificial intelligence were developed, even though neither field existed when the institution was designed. Precedents propagate because they encode not just specific solutions but general principles — principles about the relationship between human beings and the systems they inhabit. And those principles, once established, are extraordinarily difficult to change, for the same reason that legal precedents are difficult to overturn: the system depends on their consistency.

This gives the current moment extraordinary weight. The dams being built now — the AI governance frameworks, the educational models, the workplace norms, the attentional ecology practices — are precedents that will govern the relationship between human beings and intelligent machines for decades, possibly centuries. The quality of the precedent determines the quality of the governance. And the quality of the governance determines whether the arrow of complexity serves human flourishing or merely increases the efficiency of processes that no one has paused to evaluate.

Segal's account of the tension between the Beaver and the Believer captures the stakes with the specificity of lived experience. The Beaver builds for the ecosystem — invests in the team, develops judgment, creates structures that redirect the flow of intelligence toward life. The Believer converts productivity gains into immediate margin — reduces headcount, accelerates timelines, optimizes for the quarter. Both strategies produce measurable results. Only one establishes precedents that serve the long term.

The arithmetic that Segal describes — five people can do the work of a hundred, so why not have five? — is the calculation of the Believer. It optimizes within the current configuration of the system. It treats the present as the relevant time horizon and the quarterly earnings report as the relevant metric. And it establishes a precedent: that the appropriate response to AI-driven productivity gains is headcount reduction, that the human contribution is a cost to be minimized rather than a capability to be developed, that the river should be allowed to carve whatever channel the current produces.

The alternative — keeping the team, investing in their development, directing the productivity gains toward more ambitious projects rather than reduced costs — establishes a different precedent. One that treats human judgment as the scarce resource that the AI moment has made more, not less, valuable. One that recognizes that the pool behind the dam is more valuable than the speed of the current. One that builds for the ecosystem rather than for the quarter.

Smolin's physics makes the stakes explicit. If the laws of nature themselves evolve through precedent, then the structures we build during moments of genuine novelty — moments when no prior precedent exists — are constitutive of the future in the deepest possible sense. They do not adjust a predetermined trajectory. They create a trajectory that did not previously exist. The precedent, once established, propagates forward through every subsequent interaction that resembles it, shaping outcomes that the original builders cannot foresee and cannot control once the precedent has solidified.

The EU AI Act is a precedent. Its specific provisions — the classification of AI systems by risk level, the transparency requirements, the assessment obligations — will shape how AI is governed globally, not because other jurisdictions will copy the act, but because the act establishes a framework of categories and principles that will influence how every subsequent regulator thinks about the problem. The categories may be wrong. The risk levels may be miscalibrated. The transparency requirements may be insufficient. But the precedent will persist, because precedents persist — they accumulate, they solidify, they become the water that subsequent builders swim in.

The American approach — executive orders, voluntary industry commitments, market-driven adoption — is a different precedent. It establishes a framework in which governance follows deployment rather than preceding it, in which the market determines the pace and the regulators respond, in which the burden of demonstrating harm falls on the affected rather than the burden of demonstrating safety falling on the deployer.

Both precedents will shape the governance of technologies that do not yet exist. The choice between them is not a policy debate. It is, in Smolin's framework, a cosmological act — a moment when the universe's genuinely open future is being shaped by the choices of conscious creatures who understand, or should understand, that what they build now will govern what comes next.

The principle of precedence also applies to the smaller, more personal dams that The Orange Pill advocates. The parent who establishes norms for a child's relationship with AI — mandatory offline time, spaces for boredom, conversations that move slowly enough for real thought — is establishing a precedent that will shape the child's cognitive development for years. The norm may feel arbitrary at the moment of establishment. The child may resist. The precedent may need adjustment as the technology evolves. But the act of establishing any norm — the insistence that the relationship between a child and an intelligent machine requires deliberate structure rather than passive acceptance — establishes a precedent that is qualitatively different from the absence of a norm.

The teacher who grades questions rather than answers is establishing a precedent. The organizational leader who protects time for unmediated human collaboration is establishing a precedent. The builder who chooses to keep the team rather than reduce it is establishing a precedent. Each of these acts is a case being decided for the first time, and the decision will propagate.

Smolin's framework insists that these decisions matter absolutely. Not because they adjust a predetermined trajectory — the trajectory is not predetermined. Not because they guarantee a particular outcome — no outcome is guaranteed in a universe where time is real and the future is genuinely open. They matter because they are constitutive. They create the trajectory. Without them, the river carves whatever channel the current produces. With them, the river feeds a pool where life can flourish.

The beaver does not build one dam and walk away. The river pushes against the structure constantly, testing every joint, loosening every stick. The beaver responds by maintaining — every day, chewing new sticks, packing new mud, repairing what the current has loosened overnight. The dam is not a project with a completion date. It is an ongoing relationship between the builder and the river.

The precedents we build now require the same maintenance. The eight-hour day was not established once and preserved automatically. It was fought for, eroded, reestablished, adapted, and defended across a century of changing conditions. The research university was not designed once and left alone. It was reformed, expanded, challenged, and restructured through five centuries of changing knowledge environments. Every precedent requires ongoing maintenance by the conscious creatures who understand why it was built and who bear responsibility for ensuring that it continues to serve the purpose for which it was established.

The principle of precedence does not guarantee that good precedents will be established. It does not guarantee that the dams will hold. It guarantees only that what we build now matters — that the choices made during this moment of genuine novelty will shape the governance of every moment that follows.

The future is genuinely open. The precedents are genuinely constitutive. The beavers are building.

Whether the dams hold depends on whether the builders maintain them.

---

Chapter 10: The Open Future and What We Build Now

The universe is not finished.

This is the simplest way to state the claim that runs through every chapter of this book and through the physics that underlies it. Smolin's temporal naturalism insists that the universe is not a completed object — not a four-dimensional block existing in eternal, frozen totality, not a program executing a predetermined script, not a mathematical structure whose every feature was determined at the moment of its origin. The universe is a process. It is happening. It is becoming. And what it becomes depends, in part, on what the conscious creatures within it choose to do.

This claim is either trivially true or staggeringly consequential, depending on how seriously one takes it. If the future is genuinely open — not merely unpredictable in practice but ontologically undetermined — then every choice made by a conscious creature is a constitutive act. It does not adjust a trajectory that was already set. It creates a trajectory that did not previously exist. The dam that redirects the river is not a modification of the river's predetermined path. It is the introduction of a new feature into a landscape that the dam itself has changed. Without the dam, the landscape is one thing. With it, the landscape is something else — genuinely, physically, irreversibly something else.

The Orange Pill asks: "Are you worth amplifying?" Smolin's physics transforms this from a personal question into a cosmological one. The amplifier does not choose. The signal does. And the signal — the vision, the judgment, the care, the quality of the questions — is carried by the conscious creatures who possess the universe's capacity for self-awareness. What they choose to amplify shapes what the universe becomes. Not metaphorically. Physically. In a universe where time is real and the future is genuinely open, the choices of conscious creatures are among the forces that determine the future state of the cosmos.

This is not grandiosity. It is proportion. The universe is vast, and the choices of individual humans are small relative to the cosmological processes that shape most of its evolution. But the choices are not zero. They are nonzero in a universe where genuine novelty is possible, where the thick present is the site of real becoming, where the arrow of complexity has produced creatures capable of directing it. The direction may be slight. The contribution may be modest. But in a system sensitive to initial conditions — and complex systems characteristically are — even slight directions accumulate.

Consider what Smolin's framework means for each of the audiences The Orange Pill addresses.

For the leader who manages an organization in the midst of the AI transition, Smolin's physics says something uncomfortable. The future of your industry is not determined. The scaling curves do not guarantee particular outcomes. The benchmarks do not measure the things that matter most. The trajectory you are riding is not predetermined, and the decisions you make now are not adjustments to a known path — they are constitutive of a path that does not yet exist. Building for the quarter is building for a future that is already passing. Building for the precedent — investing in judgment, in the development of human capability, in the structures that will direct AI toward genuine value rather than mere output — is building for a future that has not yet arrived and whose character depends on what you build.

The tension between the Beaver and the Believer is not a strategic choice between two comparable approaches. It is a cosmological choice between two genuinely different futures. One future — the Believer's — converts productivity gains into margin, reduces the human contribution to a cost center, and establishes the precedent that AI's primary value is the replacement of human labor. This future is not predetermined, but if the Believer's precedent becomes dominant, it will propagate through every subsequent organizational decision, because precedents persist and harden into the water that everyone swims in.

The other future — the Beaver's — invests productivity gains in human development, treats judgment as the scarce resource that AI has made more valuable, and establishes the precedent that AI's primary value is the amplification of human capability. This future is also not predetermined. But if the Beaver's precedent becomes dominant, the institutions, the norms, and the expectations that crystallize around it will shape how organizations relate to AI for decades.

Neither future is guaranteed. Both are genuinely possible. The universe does not prefer one to the other. The physics is indifferent. The choice belongs to the builders.

For the educator who teaches in a world where AI can generate any answer, Smolin's physics reframes the pedagogical crisis. The crisis is not that students can cheat — cheating is a symptom, not the disease. The disease is that the educational system was designed to produce answers, and answers have become abundant. The system is adapted to a world that no longer exists.

But if time is real and the future is genuinely open, then the most important capacity a student can develop is not the ability to produce answers but the ability to originate questions. Genuine questions — the kind that open new spaces of inquiry, that create the conditions for genuine novelty — are acts of temporal engagement. They operate in the thick present. They require the student to sit with not-knowing, to resist the urge to extract an answer before the question has been fully formed, to let the uncertainty persist long enough for something genuinely new to emerge.

Teaching this capacity is harder than teaching content. It requires educators who model the practice — who ask genuine questions in front of their students, who demonstrate what it looks like to sit with uncertainty, who resist the pressure to provide premature resolution. It requires institutional structures that reward questioning over answering, that evaluate students not on the quality of their outputs but on the quality of their inquiries. It requires a fundamental reorientation of educational purpose: from the production of knowledgeable graduates to the cultivation of genuinely curious minds.

The precedent established by this reorientation will propagate. An education system that teaches questioning produces graduates who ask better questions — in their careers, in their civic lives, in their relationships with AI. An education system that teaches answering produces graduates who are displaced the moment the machine answers faster. The precedent matters because the graduates carry it forward.

For the parent who lies awake wondering what to teach a child growing up in a world saturated with artificial intelligence, Smolin's physics offers something neither reassurance nor despair can provide: a reason to believe that the child's choices will matter.

If the future were determined, then parenting would be a form of damage control — an attempt to minimize the harm done by a trajectory that cannot be changed. If the future is genuinely open, then parenting is a form of cosmological action — the cultivation of a conscious creature whose choices will shape what the universe becomes. The skills that matter are not the ones that compete with AI. They are the ones that direct it. Judgment. Care. The willingness to ask hard questions and sit with uncertain answers. The capacity for genuine engagement — for meeting the novel as novel, in the thick present, the way Smolin's baby meets each new person not with a prediction but with a question.

The parent cannot guarantee that these capacities will be valued by the market. The market is myopic — it rewards what it can measure, and it measures what the current paradigm considers important. But the market, like the laws of physics, evolves through precedent. The capacities that the next generation carries will shape what the market rewards, because the market is not an external force. It is a social construction, built by the choices of the people who participate in it. If a generation of young minds enters the workforce with deep capacity for judgment, for genuine questioning, for the kind of temporal engagement that AI cannot replicate, then the market will adapt to value those capacities — not because the market is benevolent, but because the market, like any complex system, responds to the inputs it receives.

This is the deepest implication of Smolin's physics for the AI moment. The future is not a destination. It is a construction. It is being built right now, by every choice made by every conscious creature, in a universe where time is real and where nothing about what comes next is predetermined.

The river of intelligence has been flowing for 13.8 billion years. It has found channels through hydrogen, through stellar nucleosynthesis, through planetary chemistry, through biological evolution, through the cognitive revolution, through writing and printing and science and computation. Each channel was genuinely new — a moment of cosmological becoming that expanded what the universe contained. The AI channel is the latest, and it is genuine. A new mode of information processing, a new participant in the river, a new kind of relationship between the universe's capacity for computation and the universe's capacity for self-awareness.

The conscious creatures who carry that self-awareness — the candles in the darkness, the guidance system for the arrow of complexity — now face a choice that no prior generation of conscious creatures has faced. The amplifier is here. It works. It carries whatever signal is fed to it, with a fidelity and a reach that transform what a single mind can accomplish. The signal it carries will shape what the universe becomes.

Smolin's physics does not tell the builders what signal to send. Physics is not ethics. The laws of nature, whether eternal or evolving, do not prescribe purposes. They describe what is possible. And what is possible, in a universe where time is real and the future is genuinely open, spans everything. Flourishing and collapse. Deepened self-awareness and flattened distraction. The candle burning brighter than ever and the candle going out.

The universe does not prefer one outcome to the other. The preference — the judgment, the care, the insistence that certain outcomes are worth pursuing and others worth preventing — belongs exclusively to the conscious creatures who possess the capacity for preference. It belongs to the twelve-year-old who asks what she is for. It belongs to the builder who keeps the team rather than reducing it. It belongs to the teacher who grades questions rather than answers. It belongs to the parent who establishes norms in a world where the default is no norms at all.

It belongs to anyone who recognizes that the future is genuinely open and acts accordingly.

The dams matter. The precedents propagate. The choices are constitutive.

And the universe — a universe that has been producing candles for 13.8 billion years, through processes of cosmological selection that favor the emergence of complexity and self-awareness — is waiting, without preference and without patience, to see what the candles do with the light.

---

Epilogue

Nobody told me that physics could be a survival manual.

I came to Smolin's work because the river metaphor I had built at the center of *The Orange Pill* kept nagging at me. Intelligence flowing for 13.8 billion years — I believed it when I wrote it, I believe it now, but I could not shake the feeling that the claim was larger than I had the tools to support. A builder's intuition is not a physicist's proof. When someone tells you that the universe has been producing complexity since the first hydrogen atom found a stable configuration, the honest response is: says who? And on what grounds?

Smolin gave me the grounds. Not all of them — he would be the first to insist that the questions remain genuinely open, that cosmological natural selection is a hypothesis rather than a settled fact, that the principle of precedence is a proposal still working its way through the physics community. But the grounds he offers are rigorous enough to carry weight and wild enough to change how I see the moment we are in.

What changed was the word "constitutive." I had been thinking about the dams we build as adjustments — course corrections applied to a river whose general direction was set. Smolin's physics says the direction is not set. The dams do not correct a course. They create one. The future is not a destination we are traveling toward. It is a landscape we are constructing with every choice, and the landscape did not exist before the choices were made.

That shifts everything. When I stood in the room in Trivandrum watching twenty engineers recalibrate their understanding of what they could accomplish, I was watching a precedent being established. Not a training exercise. Not a productivity hack. A precedent — a case being decided for the first time, whose resolution would propagate forward through everything those engineers built afterward, and through the teams they would eventually lead, and through the organizations those teams would eventually shape. The quality of what we built in that room was not a local event. It was the universe becoming something it had not been before.

This sounds grandiose. Smolin's physics insists it is simply accurate. If time is real and the future is genuinely open, then every moment of genuine engagement — every question asked in the thick present, every dam built in the river, every twelve-year-old lying awake wondering what she is for — is a moment of cosmological becoming. The scale is small. The significance is not.

The thing I keep returning to is Smolin's baby. The baby who does not predict who she will encounter next but meets each new person with an open question: Who is that? Not a classification. Not a pattern-match against prior experience. A genuine encounter with the genuinely new.

That is what I want to build. Not machines that predict the future from past data — we have plenty of those, and they are valuable, and they are not enough. I want to build systems that help us encounter the future as genuinely novel, that amplify our capacity to ask questions we have never asked, that carry our light into spaces we have never illuminated. I want to build for the baby's mode of engagement, not the Newtonian mode of extrapolation.

And I want to build dams that last. Precedents that propagate toward flourishing rather than extraction. Structures that protect the candle while amplifying its reach. Institutions that recognize that the most important human capacity is not the ability to produce but the ability to wonder — and that this capacity, cosmologically rare and cosmologically precious, is the one thing the amplifier cannot generate on its own.

Smolin taught me that the universe is not finished. That the future is genuinely open. That what we build now matters absolutely, not because it adjusts a predetermined outcome but because it creates one.

That is, I think, the deepest version of the orange pill. Not merely the recognition that something new has arrived. The recognition that what arrives next depends on us.

The river runs in one direction. The direction is being created, right now, by everything in the current.

Build accordingly.

Edo Segal

Most of AI thinking assumes the future is already implicit in the present — that scaling curves determine outcomes and the trajectory is set. Theoretical physicist Lee Smolin has spent three decades arguing that this assumption is the deepest error in modern science. Time is real. The future is genuinely open. And the structures we build during moments of radical novelty do not adjust a predetermined path — they create one.

In this volume of The Orange Pill series, Edo Segal explores Smolin's physics as a lens for understanding why the AI transition demands more than optimization. If the laws of nature themselves evolve through precedent, then every dam built in the river of intelligence — every norm, every institution, every choice about who captures the gains — is a cosmological act with consequences that propagate far beyond the quarter.

From cosmological natural selection to the principle of precedence, from the thick present where genuine novelty emerges to the autodidactic universe that may be teaching itself its own laws, Smolin's framework transforms the question at the heart of the AI revolution: not what will happen but what will we build.

WIKI COMPANION

A reading-companion catalog of the 27 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Lee Smolin — On AI uses as stepping stones for thinking through the AI revolution.
