By Edo Segal
The question I stopped asking was the one that mattered most: not what I was building, but where it fell.
For months after the orange pill hit — after that winter of 2025 when Claude Code crossed the threshold and everything accelerated — I measured progress the way every builder measures it. Artifacts shipped. Features deployed. Prototypes that worked. The dashboard was green. The velocity was unprecedented. Twenty engineers in Trivandrum, each producing what entire teams used to produce. Station built in thirty days. This book drafted on a transatlantic flight.
I was filling. Filling fast, filling well, filling with a joy that bordered on compulsion. And I never once stopped to ask whether the space I was filling needed filling, or whether a different space needed opening.
George Kubler asked that question in 1962, about cathedrals and clay pots and Mayan stelae. He was a Yale art historian who spent decades studying objects whose makers were anonymous, whose biographies were irrecoverable, whose individual genius could not be invoked to explain why the objects mattered. So he built a framework that did not need genius. He built it on structure. Where does this thing fall in the chain of linked solutions to the problem it addresses? Is it the first of its kind — what he called the prime object — or a variation on something already demonstrated?
That distinction, between opening a sequence and filling one, is the sharpest diagnostic tool I have found for understanding what AI actually does and what it does not.
AI fills sequences. It fills them with breathtaking speed and formal competence. It can generate a thousand variations of anything that already exists. What it has not demonstrated — not yet, and possibly not ever — is the capacity to perceive that the existing sequences are insufficient. To feel the absence. To recognize that the formal landscape, however dense, does not contain what is needed.
That perception is what produces prime objects. And prime objects are the only artifacts that change the landscape rather than decorating it.
I needed Kubler because the river metaphor tells me intelligence flows, and the beaver metaphor tells me to build dams, but neither tells me where the dam matters most. Kubler does. The dam matters at the point where a new sequence opens — not where the current runs strongest, but where the landscape reveals a gap that no existing current can fill.
In a world drowning in competent artifacts, the ability to see what is missing is the ability that matters. Kubler saw it sixty years before the machines arrived. The machines have made him essential.
— Edo Segal ^ Opus 4.6
George Kubler (1912–1996) was an American art historian whose work fundamentally reoriented the study of material culture. Born in Hollywood, California, and educated at Yale University under the French art historian Henri Focillon, Kubler spent the bulk of his career as a professor at Yale, where he became one of the leading scholars of pre-Columbian Mesoamerican art and architecture. His most influential work, *The Shape of Time: Remarks on the History of Things* (1962), proposed that the proper unit of art-historical analysis was not the artist or the period but the formal sequence — chains of linked solutions to persistent problems that extend across individual makers and centuries. He introduced concepts including the prime object (the first artifact to open a new class of solutions), the replica (subsequent variations within an established sequence), and entrance (the structural conditions a maker encounters upon joining a sequence). Drawing on the vocabularies of signal theory and electrodynamics rather than biology, Kubler's framework anticipated computational approaches to cultural analysis by decades. His other major works include *The Art and Architecture of Ancient America* (1962) and the essay "The Shape of Time Reconsidered" (1982). His ideas have influenced fields ranging from archaeology and design theory to media studies and, increasingly, discourse on artificial intelligence and generative systems.
In 1962, a Yale art historian published a book that rendered most of his discipline obsolete. The book was 130 pages long. It contained no color plates, no catalogue of masterworks, no reverent descriptions of brushwork or composition. It proposed, with the compressed force of a mathematical proof, that the entire tradition of art history — organized around the lives of great artists, the flowering and decline of styles, the progress of civilization as told through its most beautiful objects — was built on a mistake.
The mistake was biological. Art historians had borrowed the vocabulary of the life sciences — birth, growth, maturity, decline, death — and applied it to cultural production as though styles were organisms and periods were species. The Gothic was born in the Île-de-France, matured at Chartres, declined into the Flamboyant, and died when the Renaissance arrived to replace it. The metaphor was so pervasive that no one noticed it was a metaphor. It had become the water in the fishbowl.
George Kubler's The Shape of Time: Remarks on the History of Things replaced the biological metaphor with something closer to physics. Not the physics of forces and masses, but the physics of signals and transmissions. Kubler proposed that the fundamental unit of cultural analysis was not the artist, not the style, not the period, but the thing — the artifact, the tool, the made object, the solution to a problem. And things, he argued, organize themselves not into life cycles but into formal sequences: chains of linked solutions to persistent problems that extend across individual careers, across centuries, across civilizations.
The distinction sounds technical. It is existential. Because if Kubler was right, then the most important question one can ask about any made thing is not who made it, when it was made, or what it expresses. The most important question is: where does this fall in the sequence?
That question, posed in 1962 about cathedrals and clay vessels and Mayan stelae, turns out to be the most precise question available for understanding what artificial intelligence produces and what it cannot.
---
Kubler's intellectual formation explains why his framework survived the arrival of a technology he never lived to see. Born in 1912, trained at Yale under Henri Focillon — the French art historian who had argued that forms possess their own internal logic independent of the artists who execute them — Kubler spent decades studying the art and architecture of pre-Columbian Mesoamerica. His primary scholarly territory was not Renaissance Florence or Impressionist Paris but the anonymous workshops of the Maya, the Aztec, the Inca: cultures where the individual maker was often unknown, where attribution was impossible, and where the biographical method that sustained European art history simply could not operate.
The absence of biography forced a structural question. If the maker cannot be identified, what organizes the history of these objects? The answer Kubler found was the problem. Each object was a solution to a problem — structural, symbolic, ritual, technical — and the problems persisted across generations of anonymous makers. The pointed corbel arch of Mayan architecture was not the expression of a single genius. It was a position in a formal sequence of solutions to the problem of spanning interior space with stone, a sequence that began centuries before any surviving example and continued, through linked variations, long after the last identifiable architect had died.
The objects were connected not by authorship but by the problem they addressed. The sequence was the unit of analysis. The individual object was a node in a chain.
This shift in analytical unit — from maker to sequence, from biography to structure — is what gives Kubler's framework its uncanny applicability to the age of AI. Every other theory of cultural production that was available in the mid-twentieth century depended, at some structural level, on the presence of a human maker with a biography, an intention, a cultural context that shaped and was shaped by the work. Remove the maker, and the theory collapses. Panofsky's iconology requires an artist whose symbolic vocabulary can be decoded. Gombrich's perceptual theory requires a painter whose visual strategies can be traced to learned conventions. Greenberg's formalism requires an avant-garde whose progressive purification of the medium can be narrated as a heroic arc.
Kubler's theory requires none of this. It requires only things, and the sequences those things form. When AI generates an image, a piece of code, a musical composition, or an architectural plan, it produces a thing — an artifact that enters a formal sequence, occupies a position, and either extends the sequence in a meaningful direction or merely fills a position already implied by what came before. The question of who — or what — made it is, in Kubler's framework, secondary. The question of where it falls in the sequence is primary.
This is not to say that authorship is unimportant. It is to say that authorship is not the load-bearing wall of the analysis. The load-bearing wall is the sequence. And sequences persist regardless of who or what fills them.
---
Kubler was explicit about the metaphorical architecture he preferred. In a passage that reads, sixty years later, as though it were written for the age of neural networks, he proposed that the study of cultural transmission would be better served by the language of electrodynamics than by the language of botany. The history of things, he wrote, deals with "the transmission of some kind of energy; with impulses, generating centers, and relay points; with increments and losses in transit; with resistances and transformers in the circuit." He suggested that Michael Faraday might have been a better mentor than Linnaeus for the study of material culture.
The vocabulary is arresting. Impulses. Generating centers. Relay points. Increments and losses in transit. Resistances and transformers. This is the language of signal processing. It is, with only modest translation, the language of information theory, the mathematical framework developed by Claude Shannon at Bell Labs in 1948 — fourteen years before The Shape of Time appeared — to describe the transmission, encoding, and degradation of signals through noisy channels.
Kubler was not working in isolation from this intellectual current. Pamela Lee, the Yale art historian whose 2001 essay and 2004 book Chronophobia provide the most sustained analysis of Kubler's relationship to cybernetics, demonstrates that The Shape of Time emerged from the same intellectual ecosystem that produced Norbert Wiener's cybernetics and Shannon's information theory. The triangulation between Kubler, Wiener, and Shannon is not a retroactive imposition. It is legible in Kubler's own vocabulary, in his preference for signal over symbol, for transmission over expression, for the formal structure of the chain over the biographical content of the maker.
What Lee identified as Kubler's proto-computational orientation has a specific consequence for the AI age. A framework built on signals and sequences rather than on biography and expression does not require revision when the signal generator changes from human to machine. The sequence continues. The signal still transmits. The question of whether the signal carries genuine information — whether it extends the sequence or merely adds noise — remains the same question regardless of the source.
Neural networks are, in a literal sense, the realization of Kubler's electrodynamic metaphor. They consist of impulses (activations), generating centers (neurons), relay points (layers), increments and losses in transit (gradient propagation and vanishing gradients), resistances (regularization), and transformers (the architecture, named with inadvertent precision, that powers the large language models at the center of the current revolution). When Kubler wrote that Faraday would have been a better mentor than Linnaeus, he was, without knowing it, describing the intellectual genealogy that would produce the tools his own framework best explains.
---
The argument of The Shape of Time rests on four concepts that will recur throughout this book: formal sequences, prime objects and replicas, entrance, and sequence exhaustion. Each deserves an initial definition, though each will be developed more fully in the chapters that follow.
A formal sequence is a chain of linked solutions to a problem that persists across individual makers and individual works. The problem need not be practical, though it can be. The problem of spanning large interior spaces with stone generated the sequence of solutions that runs from the Roman barrel vault through the pointed arch and the ribbed vault to the flying buttress. Each solution made the next possible and constrained what the next could be. The sequence has an internal logic that no individual participant fully controls.
A prime object is the first instance in a new sequence — the artifact that demonstrates, for the first time, that a new class of solutions is possible. The flying buttress was a prime object. The Bessemer process was a prime object. TCP/IP was a prime object. The prime object does not emerge from gradual refinement within an existing sequence. It emerges from the recognition that a new problem exists, or that an old problem can be reconceived in terms that make a new class of solutions available. Everything that follows within the sequence that the prime object opens is a replica: a variation, refinement, elaboration, or adaptation of what the prime object demonstrated.
Entrance is Kubler's term for the moment when an individual maker begins participating in a formal sequence. The state of the sequence at the moment of entrance determines more about what the maker can accomplish than talent or training. A maker who enters a sequence in its early phase, when the formal possibilities are wide open, faces a landscape that rewards experiment and risk. A maker who enters the same sequence late, when the possibilities have been largely explored, faces a landscape of diminishing formal returns.
Sequence exhaustion is the condition of a sequence that has been explored to its limits. Every formal sequence has a finite span. The early phase is characterized by rapid innovation — each solution opening multiple new possibilities. As the sequence matures, the rate of formal invention declines. The later entries are increasingly refinements rather than departures. Eventually the sequence is exhausted: every significant variation has been explored, and genuine novelty requires the opening of an entirely new sequence.
These four concepts — sequence, prime object, entrance, exhaustion — constitute the load-bearing structure of Kubler's framework. They do not depend on the identity of the maker. They do not require consciousness, intention, or biography. They require only that things be made, that the things form sequences, and that the sequences have a structure that can be analyzed.
This is why the framework survives the arrival of AI. And this is why the arrival of AI reveals what was most radical about the framework from the beginning.
---
In 1973, in an interview with Artforum, Kubler observed that "everything has come into the domain of sensibility. Everything has come to be intelligible as esthetic experience. All experience is undergoing what one might call 'esthetization.'" The observation was made a half-century before algorithmic feeds, generative image models, and AI-curated environments completed the process Kubler had identified. Every surface is now designed. Every interaction is mediated by systems optimized for smoothness, engagement, delight. The aestheticization of all experience that Kubler noted as an emerging tendency has become the default condition of digital life.
But Kubler's observation carries a diagnostic edge that the triumphalists of the AI age would do well to hear. Aestheticization is not the same as aesthetic achievement. The universal availability of aesthetic surface — the capacity to generate beautiful images, elegant code, polished prose at near-zero marginal cost — does not produce the conditions for formal invention. It produces the conditions for replica abundance. The distinction between a landscape rich in aesthetic surfaces and a landscape rich in formal innovation is the distinction between a sequence that is being filled and a sequence that is being opened.
AI fills sequences. It fills them with extraordinary speed, fluency, and technical competence. The question that Kubler's framework forces — the question this book will pursue through nine more chapters — is whether filling sequences is sufficient, or whether the act that changes the landscape is not the filling but the opening: the recognition that a new problem exists, the production of the first artifact that demonstrates a new class of solutions is possible.
The history of things continues. The things have new makers now. The question of where each thing falls in the sequence has not changed. It has only become harder to answer, and more consequential to get right.
Every made thing occupies a position. The position is not geographical, not temporal in the simple chronological sense, not a matter of the maker's reputation or the institution that houses the artifact. The position is structural: where does this thing fall in the chain of linked solutions to the problem it addresses?
Most things occupy positions that have already been implied. The flying buttress at Notre-Dame de Paris is not the first flying buttress. It is a variation — refined, scaled, adapted to the specific structural demands of the Île-de-la-Cité site — of a solution that had already been demonstrated. It is a brilliant variation. It is also, in Kubler's vocabulary, a replica: an entry in a sequence whose formal parameters had already been established by an earlier artifact that first demonstrated the class of solution was possible.
That earlier artifact is the prime object. And the distinction between the prime object and the replica is the most consequential distinction in Kubler's framework — the distinction that, applied to the age of AI, separates what machines can do from what they have not yet demonstrated the capacity to do.
---
The concept requires careful handling, because it is easily mistaken for something it is not. The prime object is not the "best" object in a sequence. It is not the most refined, the most technically accomplished, the most beautiful. It is the first — the artifact that opens a class of possibilities that did not previously exist. The first flying buttress was almost certainly cruder than the buttresses at Chartres or Reims that followed it. The first steel produced by the Bessemer process was almost certainly inferior in quality to the steel produced by later refinements of the process. The first packet-switched message sent over ARPANET was four characters long — "LO," because the system crashed before it could transmit the full word "LOGIN" — and yet it was the prime object in the sequence of networked digital communication.
The prime object's significance lies not in its quality but in its demonstration. It demonstrates that a new class of solutions exists. Before the first flying buttress, no one knew that a stone wall could be supported from outside itself, freeing the wall to become a membrane of glass rather than a load-bearing mass. After the first flying buttress, everyone knew. The knowledge was irreversible. The sequence was open.
Kubler insisted that the prime object is not a product of gradual refinement within an existing sequence. This is the point where his framework diverges most sharply from the incrementalist story that technology companies prefer to tell about their own products. The incrementalist story — version 1.0, version 2.0, each a smooth improvement on the last — describes the filling of a sequence. It describes replicas. The prime object emerges from a different process entirely: the recognition that the existing sequences are insufficient, that the problems they address do not encompass a problem that has become pressing, and that a new class of solutions is required.
Kubler identified two modes of this recognition. The first involves the confluence of previously separate formal sequences — what happens when knowledge from one domain encounters a problem in another, and the encounter generates a solution that neither domain could have produced alone. The second mode is what Kubler called pure invention, in which the maker creates "solely by means of his own engagement with his milieu," producing a solution that is "experientially and theoretically untied to earlier thinking." The first mode is combinatorial. The second is discontinuous — a rupture in the formal landscape that cannot be derived from what preceded it.
Both modes share a common prerequisite: the capacity to see that a new problem exists. Not a new variation of an old problem, not a refinement of an existing question, but a genuinely new question — one that the existing sequences were not built to address and cannot answer within their own terms.
---
AI's relationship to the prime object is the central tension of the current cultural moment, whether or not the participants in the debate use Kubler's vocabulary.
Consider what a large language model does when it generates text, code, or design. It processes a training corpus — the accumulated output of formal sequences across every domain the corpus contains — and produces outputs that are statistically consistent with the patterns in that corpus. The outputs are often excellent. They can be surprising. They can combine elements from separate sequences in ways that produce results no individual human maker would have produced, because no individual human maker has access to the full breadth of sequences that the model has processed.
But the outputs are, in a precise Kublerian sense, replicas. They occupy positions within existing formal sequences. The statistical engine that produces them is a sequence-filling engine of extraordinary power. It can generate a thousand Gothic cathedral variations in an afternoon. Each will be structurally sound, aesthetically coherent, and formally competent. None will open a new sequence, because the model's generative mechanism — statistical inference from patterns in existing sequences — is structurally oriented toward the production of artifacts that are consistent with what already exists.
This is not a limitation of current models that future models will overcome, at least not obviously. It is a structural feature of the generative mechanism itself. Statistical inference produces outputs that belong to the distribution defined by the training data. The prime object, by definition, does not belong to the distribution defined by what preceded it. It opens a new distribution. It creates a class of possibilities that could not have been predicted from the existing landscape of solutions.
The counterclaim is familiar: human creativity is also recombinatorial. Dylan's "Like a Rolling Stone" drew on Guthrie, Johnson, the Beats, the British Invasion. The flying buttress drew on existing knowledge of stone construction, arch geometry, and structural loading. No prime object emerges from a vacuum. The combinatorial character of AI generation is, on this argument, no different in kind from the combinatorial character of human invention. It differs only in scale.
Kubler's framework provides the tools to evaluate this counterclaim with precision. The recombination that produces a replica — a new Gothic cathedral that combines elements from Chartres and Amiens in a previously unrealized configuration — operates within the sequence. The elements being combined belong to the same formal family. The output, however surprising, is a variation on a demonstrated theme.
The recombination that produces a prime object — the flying buttress itself, which combined knowledge of arches, structural loads, and exterior support into a class of solution that had no precedent — operates across sequences. And the critical step is not the combination itself but the recognition that the combination is needed: the perception that the existing sequence of solutions to the problem of spanning interior space has reached a point where it cannot proceed without something that the sequence, by its own internal logic, cannot generate.
That perception — the perception of insufficiency, of a gap in the formal landscape that no existing sequence can fill — is what Kubler called entrance at its most consequential. It is the act of seeing that a new sequence is available before the first object in that sequence exists. It requires, in a sense that Kubler left largely implicit but that the AI age makes explicit, a mind that experiences the insufficiency as a condition: not a statistical anomaly in a pattern, but a problem that presses on consciousness with the weight of lived experience.
---
The examples that test this distinction most severely are the ones drawn from domains where AI's combinatorial power is most impressive.
In drug discovery, AI systems have identified molecular candidates that no human researcher had proposed — combinations of structural features from different chemical families that address biological targets in novel ways. Some of these candidates have entered clinical trials. Are they prime objects? In Kubler's strict sense, the answer depends on whether they open new formal sequences — new classes of solutions to problems that existing chemical sequences could not address — or whether they are sophisticated replicas: novel combinations within the established sequence of small-molecule pharmacology. The distinction is not academic. A truly new class of drug — one that operates by a mechanism no existing drug employs — opens a sequence. A new molecule that operates by a known mechanism, however cleverly optimized, fills one.
In mathematics, AI systems have discovered proofs and conjectures that surprised expert mathematicians. The question, again, is structural. A proof that resolves a known conjecture within an established mathematical framework is a replica — a solution within the sequence that the conjecture defined. A proof that introduces a genuinely new method, one that opens a class of problems previously inaccessible, would be a prime object. The reports from mathematicians who have worked with these systems suggest that most AI contributions fall into the first category: powerful, accelerative, but operating within sequences that human mathematicians opened.
In music, AI systems have composed pieces that audiences cannot reliably distinguish from human compositions. The compositions are competent. Some are moving. All of them, by the structure of the generative process, belong to sequences defined by the training data. A piece composed in the style of Bach is a replica within the Bachian sequence. A piece that combines elements of Bach and Coltrane is a replica within a hybrid sequence that a human listener could, in principle, have imagined. The question that no AI music system has yet answered is whether it can open a sequence that no human composer could have imagined — not a new combination of existing elements, but a new class of musical possibility.
The asymptotic quality of AI's relationship to the prime object is visible in each of these examples. The outputs approach the boundary of the existing sequence with increasing closeness. They combine, extrapolate, interpolate with a fluency that can make the boundary seem negligible. But the boundary persists: the distinction between operating within a sequence and opening a new one is not a matter of degree. It is a matter of kind. And the act of opening — the recognition that a new sequence is needed, the production of the first artifact that demonstrates its possibility — remains, as of this writing, a human act.
---
The implications extend beyond the question of whether machines are creative. That question, absorbing as it is, is secondary to the structural question Kubler's framework poses.
If the prime object is the artifact that changes the landscape — the thing that, once it exists, makes a new class of things possible — then the age of AI is an age in which the landscape is being filled at unprecedented speed while the rate of landscape-changing events remains, at best, unchanged. The density of replicas increases exponentially. The frequency of prime objects does not.
This produces a specific cultural condition: a landscape so saturated with competent, formally sophisticated artifacts that the prime objects become harder to see, not because they are rarer but because the noise floor has risen. When every possible variation within a sequence has been generated, the variation that opens a new sequence — the artifact that does not belong to the existing distribution — is statistically anomalous. It looks, from within the sequence, like an error.
Kubler observed that society "dislikes change to a degree that militates against invention," that the public "recognizes only what exists, unlike the inventors and artists whose minds turn more upon future possibilities." The observation acquires new force in an age when the capacity to produce what already exists has become infinite. The cultural pressure toward convention — the gravitational pull of the existing sequence — is now amplified by a technology that generates convention at industrial scale.
The prime object has always been rare. Kubler understood this. What has changed is not its rarity but the density of the field in which it must be recognized. The needle has not gotten smaller. The haystack has become infinite.
The question this poses for the human beings who still possess the capacity to open sequences — the question the remaining chapters will attempt to address from multiple angles — is not how to compete with AI at filling sequences. That competition is over. The question is how to cultivate, protect, and exercise the specific capacity that AI has made maximally valuable: the capacity to see that the existing sequences are insufficient, and to produce the first artifact that demonstrates what comes next.
The sculptor born in Florence in 1402 faced a different formal landscape than the sculptor born in Florence in 1602. Both possessed hands, stone, training. Both understood the problem of rendering the human figure in three dimensions. But the sculptor of 1402 entered a sequence in its early phase — the formal possibilities of Renaissance naturalism were wide open, every solution generated new problems, and the space for genuinely novel work was vast. The sculptor of 1602 entered the same sequence two centuries later, when the major formal innovations had been achieved, the canonical solutions had been established, and the remaining moves were refinements, elaborations, mannerist complications of possibilities already demonstrated.
The later sculptor may have been more technically skilled. The accumulation of solutions within the sequence provided better tools, better anatomical knowledge, better access to classical models. But the formal space available for genuinely new work had contracted. The sequence was approaching what Kubler called exhaustion — not the absence of things to do, but the absence of things to do that the sequence had not already implied.
This is what Kubler meant by entrance: the state of the formal sequence at the moment an individual maker begins participating in it determines more about what that maker can accomplish than personal talent, training, or intention. Entrance is not biography. It is not circumstances, though circumstances affect which sequences are accessible. It is a structural fact about the relationship between an individual and the history of the problem that individual has chosen to address.
The concept is uncomfortable because it diminishes the mythology of sovereign creative will. The artist does not stand before a blank canvas. The artist stands before a canvas already marked, invisibly, by every previous solution in the sequence — and the density of those previous solutions determines how much unmarked space remains.
---
AI has performed what may be the most consequential transformation in the history of entrance. It has collapsed the temporal structure that entrance previously depended on.
Before December 2025, entrance was gated by time. A developer who wished to enter the sequence of web application frameworks needed years of training: languages, paradigms, accumulated knowledge of what had been tried and what had failed. A musician who wished to enter the sequence of jazz harmony needed years of listening, practicing, absorbing the canonical solutions that defined the sequence's trajectory. The years were not merely instrumental. They were formative. The time spent inside the sequence, moving through its accumulated solutions, built the understanding of the sequence's structure that made genuine contribution possible.
Claude Code, and the generation of AI tools it represents, dissolved the temporal gate. A developer using Claude Code in 2026 enters every software sequence simultaneously, at every point. The tool has processed the entire history of each sequence it can access. It can produce solutions from any phase — early, middle, late — with equal facility. A musician working with AI composition tools enters every musical sequence simultaneously: Baroque counterpoint, bebop harmony, minimalist repetition, spectral timbre. The concept of entrance, which in Kubler's framework was bound to the accident of historical timing — when you were born, where you trained, which sequences were accessible to you — becomes a matter of choice.
The developer in Lagos and the developer in San Francisco now face the same landscape of formal possibility. The entrance point is no longer gated by institutional access to training, tools, or accumulated knowledge within a particular sequence. The imagination-to-artifact ratio — the distance between what can be conceived and what can be realized — approaches zero for every sequence the AI has been trained on.
This is the democratization that the builders celebrate, and they are right to celebrate it. The expansion of who gets to enter formal sequences is, by any historical standard, an extraordinary event. For the entire history of human making, entrance was constrained by access: access to training, materials, institutions, patronage, markets, geographic proximity to the centers where sequences were actively being developed. The framework knitters of Nottinghamshire could enter the sequence of textile production because they lived where the sequence was. The equally talented person in a village without a loom could not. AI dissolves this constraint with an indifference to geography, class, and institutional affiliation that no previous technology has matched.
But the dissolution of the temporal gate creates a problem that Kubler's framework identifies with precision, a problem the democratization narrative tends to obscure. The problem is selection.
---
When entrance was temporally gated — when entering a sequence required years of immersion — the years performed a filtering function that was invisible because it was experienced as training. The musician who spent a decade absorbing jazz harmony did not merely acquire the technical ability to produce jazz. She developed an understanding of the sequence's structure: where the live formal possibilities remained, where the exhausted regions lay, which directions of exploration were likely to yield new solutions and which had been thoroughly mapped. The years of immersion were years of reading the sequence — learning to see its shape, its momentum, its unfinished edges.
AI compresses the immersion to an afternoon. The developer who describes a web application to Claude Code in natural language receives a working artifact without having spent years reading the sequence of web application development. The artifact may be excellent. It may occupy a meaningful position in the sequence. But the developer does not know where in the sequence it falls, because the developer has not traversed the sequence. The prime objects and the replicas, the exhausted regions and the live edges, the canonical solutions and the unexplored variations are all equally available and equally invisible.
Kubler described this condition with a metaphor drawn from his own field: the difference between a scholar who has spent years in an archive, developing an intuitive sense of what the documents contain and where the gaps lie, and a visitor who has been given a complete index. The index provides access to everything the archive contains. It does not provide the sense of what the archive lacks — the sense of absence that only extended immersion can develop, and that is the prerequisite for recognizing that a new problem exists.
AI provides the index. It does not provide the sense of absence. And the sense of absence is what produces prime objects.
---
The concept of entrance also explains the specific anxiety of the expert in the age of AI — the anxiety that runs through the Luddite chapter of The Orange Pill and through every conversation between experienced practitioners who have watched their hard-won knowledge become instantly available to anyone with a subscription.
The senior engineer who spent twenty years building systems possesses an understanding of formal sequences in software architecture that is, in Kubler's terms, the product of sustained entrance. She has lived inside the sequence long enough to know not just what works but why certain solutions failed — where the sequence reached dead ends, where promising directions turned out to be blind alleys, where the real edges of formal possibility lie. This knowledge is not propositional. It cannot be extracted from her and encoded in a training corpus, because it is the product of the specific path she took through the sequence — the errors she made, the dead ends she followed, the moments of recognition when the structure of the sequence became visible through the experience of having been inside it.
AI cannot replicate this knowledge, because AI does not enter sequences in the way Kubler meant. AI processes the outputs of sequences — the accumulated solutions — without undergoing the process of sequential exploration that produced them. The difference is analogous to the difference between reading a map of a mountain range and having climbed through it on foot. The map contains all the topographical information. It does not contain the understanding that comes from having been caught in a storm on the wrong side of a ridge, from having discovered that the pass marked on the map is impassable in winter, from having found the unmarked trail that leads to a viewpoint the cartographer missed.
The expert's anxiety is, in this light, not irrational. Something real is being lost. The temporal gate that made expertise expensive also made expertise deep — forged through the specific process of sequential exploration that builds understanding of a sequence's structure. When the gate dissolves, the depth does not automatically transfer to the new entrants who bypass it.
But Kubler's framework also provides the answer to the expert's anxiety, and it is not the answer the expert wants to hear. The value of expertise was never the accumulation of solutions. Solutions are now abundant. The value was the structural understanding of the sequence — the capacity to read the sequence's shape, identify its live edges, and recognize where genuine formal possibility remains. That capacity does not disappear when AI arrives. It becomes the scarce resource. And the expert who possesses it is not made redundant by AI. She is made essential — as the reader of sequences in a landscape where the sequences are being filled faster than anyone can comprehend without precisely the kind of structural understanding that only deep entrance produces.
The inversion is complete. The expert's knowledge of what to build was always secondary to the expert's knowledge of what the sequence still needs. AI has revealed this by providing the former in unlimited quantity and leaving the latter as rare as it has always been.
---
The most consequential application of the entrance concept is to education, because education is the institution that society has built to manage entrance into formal sequences.
The university, the apprenticeship, the conservatory, the coding bootcamp — each is a structure designed to take an individual who stands outside a formal sequence and guide them through a structured process of entering it. The curriculum is a map of the sequence: here are the canonical solutions, here is the order in which they should be encountered, here are the problems you will be asked to solve so that the structure of the sequence becomes legible to you through the experience of having worked within it.
AI renders most of this structure obsolete in its current form. The canonical solutions are available instantly. The problems the curriculum poses can be solved by the tool faster and more competently than by the student. The structured process of entrance — years of coursework, examinations, supervised practice — competes with a tool that provides the same formal output in minutes.
But the output is not the point. The point was always the entrance itself — the process of moving through the sequence slowly enough that its structure becomes visible. The curriculum was never primarily a delivery mechanism for solutions. It was a device for producing the experience of sequential exploration that builds the capacity to read a sequence's shape.
If education reconceives its purpose — from delivering solutions to cultivating the capacity to read sequences, to identify live formal possibilities, to recognize where existing sequences are insufficient — then the institution survives the AI transition in a form that is more honest about what it was always for. The teacher's role transforms: no longer the provider of answers that a machine can provide faster, but the guide through the experience of entrance that no machine can replicate, because the experience requires time, frustration, error, and the slow accumulation of structural understanding that only friction produces.
The student who asks five questions about a topic before writing an essay is practicing entrance. She is reading the sequence — its established solutions, its unresolved tensions, its edges — before attempting to occupy a position within it. The student who asks the machine to write the essay has bypassed entrance entirely. The essay exists. The understanding of the sequence does not.
Kubler's entrance concept reveals what the AI-era education debate is actually about. It is not about whether students should use AI tools. It is about whether the process of entering formal sequences — the slow, friction-rich process that builds the capacity to see where those sequences lead and where they fail — can be preserved when the outputs of those sequences are available for free.
The answer is not obvious. But the question, once posed in Kubler's terms, becomes precise enough to address.
---
In the twelve months following December 2025, the volume of AI-generated artifacts across every domain of human making exceeded any previous measure of cultural production. The numbers resist comprehension. Millions of lines of code generated daily. Images produced at a rate that exceeds the cumulative output of every human artist who has ever lived, compressed into a single calendar year. Music compositions, architectural renderings, legal briefs, scientific hypotheses, marketing copy, instructional materials, screenplays — each produced at a scale that makes the word "prolific" meaningless.
The question Kubler's framework forces upon this abundance is specific: What does it mean, structurally, when formal sequences are filled at this density?
Most of what AI produces is, in the precise Kublerian sense, replicas. Variations within existing formal sequences. Competent explorations of possibility spaces that were opened by prime objects produced by human makers in earlier phases of those sequences. An AI system trained on the corpus of Western classical music can produce a thousand pieces in an afternoon that occupy positions within the sequences defined by Bach, Beethoven, Debussy, and Stravinsky. Each piece may be formally coherent, harmonically sophisticated, and aesthetically pleasing. None opens a sequence that those composers did not already imply.
This is not a criticism. Kubler was clear that replicas are not inferior objects. The replicas that follow a prime object are where the sequence's potential is realized — where the initial insight is tested, refined, pushed to its limits, adapted to contexts the prime object's maker could not have anticipated. The Gothic cathedrals that followed the first flying buttress were not lesser works because they were replicas. They were the works that demonstrated the full range of what the flying buttress made possible. Chartres is a replica. Reims is a replica. The Sainte-Chapelle is a replica. Each is magnificent. Each occupies a position in a sequence whose formal parameters were established by an earlier, probably less refined artifact.
Without replicas, prime objects remain isolated insights — brilliant demonstrations of possibility that never develop into traditions, practices, cultures. The sequence of Gothic architecture is its replicas. The prime object opened the door. The replicas built the cathedral.
But replica density changes the landscape in ways that Kubler anticipated in outline and that the AI age realizes in full.
---
When replicas are scarce — when each variation requires months or years of skilled labor — the sequence unfolds with a deliberation that has consequences for its structural development. Each position is occupied by a maker who has entered the sequence in Kubler's sense: someone who has spent time inside the sequence, traversed its earlier solutions, developed an understanding of its structure. The scarcity of replicas means that each new entry represents not just a formal variation but a judgment about which variation is worth the investment of time and labor required to produce it.
This filtering effect is invisible when it is operating. It is experienced not as filtering but as the natural pace of creative work. The architect who spends two years designing a building is not conscious of the thousands of variations she has not produced. She is conscious only of the one she has produced, and of the reasons — structural, aesthetic, contextual, economic — that led her to this particular solution rather than another. The filtering is embedded in the process. The friction of production is the curation.
When replicas become abundant — when AI can produce a thousand variations in the time a human maker would produce one — the filtering disappears. Every variation that the sequence implies can now be generated. The space of formal possibility that was previously explored selectively, guided by the judgment of makers who had entered the sequence deeply enough to know which variations were worth pursuing, is now explored exhaustively, without judgment, without selection, without the embedded curation that scarcity provided.
The result is not a richer sequence. It is a denser one. The distinction matters. A rich sequence is one in which each position is occupied by an artifact that extends the sequence in a meaningful direction — that tests a boundary, reveals a connection, demonstrates a possibility that the previous entries did not fully explore. A dense sequence is one in which every position is occupied, including the positions that contribute nothing to the sequence's development. The density is not a function of quality. It is a function of the cost of production, which has collapsed to near zero.
---
Kubler introduced a concept that he called replication drift — the inevitable variation that accumulates as artifacts are reproduced across time and across makers. No replication is exact. Each copy introduces minute changes — in material, in execution, in the maker's interpretation of the model — and these changes accumulate over generations into what Kubler described as an "inexorable drift" away from the artifact's original form. The drift is not planned. It is a structural consequence of the replication process itself. And it is, in many cases, the mechanism by which sequences evolve: the unintended variation that reveals a possibility the original maker did not see.
AI replication drift operates by a different mechanism but produces a structurally analogous effect. The stochastic element in generative models — the randomness controlled by what engineers call the "temperature" parameter — introduces variation into every output. At low temperatures, the output hews closely to the statistical center of the training distribution. At high temperatures, the output strays further, producing combinations that are more surprising, more distant from the canonical examples in the training corpus.
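The mechanism is easy to make concrete. What follows is a minimal sketch, not any particular model's implementation: a toy list of raw scores ("logits") is scaled by a temperature value before being converted to probabilities. The function name and example values are illustrative only.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample an index from raw logits after temperature scaling.

    A low temperature sharpens the distribution toward the most likely
    option (the statistical center of the training distribution); a high
    temperature flattens it, letting rarer combinations through.
    """
    # Dividing by a small temperature exaggerates the gaps between
    # scores; a large temperature shrinks them toward uniformity.
    scaled = [x / temperature for x in logits]
    # Softmax, with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

Run at a temperature near zero, the sampler returns the canonical choice almost every time; run hot, it drifts across the whole space of options. The drift, as in Kubler's account, is structural: it is built into the replication process itself, not chosen by the replicator.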
The parallel to Kubler's replication drift is suggestive. In both cases, variation is a structural feature of the replication process, not an intentional act of the replicator. In both cases, the variation occasionally produces artifacts that occupy positions the original sequence had not anticipated — positions that, if recognized, could point toward the opening of a new sequence. The question is whether the recognition is possible without the structural understanding that comes from deep entrance into the sequence.
The human maker who produces an unintended variation and recognizes it as significant — who sees, in the accidental, the outline of a new formal possibility — possesses the structural understanding of the sequence that makes recognition possible. She knows the sequence well enough to know that the variation does not belong to it. She sees the edges of the sequence because she has traced them through years of work. The variation stands out because it stands outside.
AI produces variations at a rate that would require millennia of human making to match. Among those variations, there are almost certainly artifacts that occupy positions outside existing sequences — formal anomalies that, if recognized, could serve as the seeds of genuinely new formal possibilities. The problem is that the recognition requires the structural understanding that only deep entrance provides, and the density of variations makes the recognition harder, not easier. The anomaly does not announce itself. It sits among millions of other variations, indistinguishable from noise unless the viewer possesses the capacity to read the sequence well enough to see where its boundaries lie.
---
There is a further consequence of replica density that Kubler's framework predicts and that the AI age is beginning to demonstrate. Call it premature sequence exhaustion.
Every formal sequence has a natural pace of exploration. In its early phase, the pace is rapid — each solution opens multiple new possibilities, and the formal landscape changes with each entry. As the sequence matures, the pace decelerates. The major variations have been explored. The remaining moves are increasingly subtle, requiring deeper knowledge of the sequence's structure to perceive and execute. The late phase of a sequence is not devoid of interest. Some of the most refined, most sophisticated artifacts in any sequence are produced in its late phase, when the makers have internalized the sequence so completely that their variations operate at a level of subtlety that would be invisible to a casual observer.
The natural pace of exploration allows for what might be called latent discovery: the recognition of formal possibilities that become visible only through the sustained attention that slow exploration requires. A maker working slowly through a sequence encounters dead ends that turn out to be side channels. She follows a variation that leads nowhere, backtracks, and in the backtracking discovers a connection to a different part of the sequence that the direct path would never have revealed. The detours are not inefficiencies. They are the mechanism by which sequences reveal their full structure.
AI fills sequences at a pace that eliminates the detours. The exhaustive exploration of the formal space, guided by statistical inference rather than sequential immersion, maps the territory without traversing it. Every position that the training data implies is occupied. The occupation is comprehensive but shallow — each position is filled without the process of arrival that, in human making, is where the deepest understanding of the sequence develops.
The danger is that the sequence appears exhausted before its genuine possibilities have been explored. The positions are all filled. Every variation the statistical model can infer has been produced. The sequence looks complete. But the completeness is an artifact of the exploration method, not a property of the sequence itself. The human maker who would have spent years inside the sequence — who would have followed the dead ends, discovered the side channels, recognized the latent possibilities that only slow exploration reveals — never arrives. The sequence is closed before it is finished.
This is not a speculative concern. It is already observable in domains where AI-generated content has saturated the formal space. In commercial music, where AI composition tools are most widely deployed, producers report a specific form of creative paralysis: the sense that every variation has been tried, that the formal space of a particular genre or style has been mapped to exhaustion, that there is nothing left to do within the sequence. The paralysis is not a response to actual exhaustion. It is a response to apparent exhaustion — the sensation produced by a formal space that has been filled to density without being explored to depth.
The distinction between density and depth is the distinction between a sequence that has been mapped and a sequence that has been understood. Mapping is what AI does. Understanding is what entrance produces. And the risk of premature sequence exhaustion is that the mapping forecloses the understanding — that the density of replicas convinces makers and audiences alike that the sequence is finished, when in fact the most generative possibilities have not yet been discovered because they lie in the places the statistical model could not reach.
---
Kubler observed that the public "recognizes only what exists, unlike the inventors and artists whose minds turn more upon future possibilities." The observation, made in 1973, describes with uncanny accuracy the condition that replica density produces. When the existing variations are abundant — when every position within a sequence has been occupied by a competent artifact — the pressure to produce more within the sequence intensifies, and the incentive to step outside the sequence diminishes. The existing sequence is where the audience is. It is where the metrics are legible, where engagement can be measured, where success can be defined in terms the market understands. The new sequence, by definition, has no audience, no metrics, no proven market. It is a field of possibility that does not yet exist as a field of production.
AI amplifies this asymmetry. It can fill existing sequences at industrial scale, producing artifacts that satisfy the market's demand for recognizable variation with a reliability that no human maker can match. The market rewards this production. The metrics rise. The sequence gets denser.
What the market cannot reward, because it cannot see, is the opening of a new sequence — the prime object that will, in retrospect, be recognized as the artifact that changed the landscape. The prime object has no audience at the moment of its production. It belongs to no existing distribution. It satisfies no established demand. It is, by every metric the market can apply, an anomaly.
The density of replicas does not prevent the production of prime objects. But it raises the noise floor — the level of ambient production through which the prime object must be recognized. And it redirects the incentive structure away from sequence-opening and toward sequence-filling, because filling is what the tools are optimized for, what the market rewards, and what the metrics can measure.
Kubler would recognize this condition. He described it in terms of the "replica mass" — the accumulated weight of variations that surrounds every prime object and that, over time, obscures the structural innovation the prime object introduced. In the pre-AI world, the replica mass accumulated slowly, over decades or centuries. The prime object could be identified in retrospect by scholars who could trace the sequence backward to its point of origin.
In the AI age, the replica mass accumulates in months. The identification of the prime object — the act of seeing, amid the density, which artifact actually opened a new sequence — becomes correspondingly harder. Not because the prime object is rarer. Because the haystack has become infinite, and the needle looks, from any distance, exactly like the hay.
---
The sequence of "software as subscription product" followed a trajectory that Kubler could have plotted in 1962 without knowing what software was. An early phase of rapid formal invention: the first companies that demonstrated it was possible to deliver applications over the internet rather than shipping them on discs, that subscription revenue could replace license fees, that updates could be continuous rather than annual. A middle phase of proliferation: the model applied to every conceivable business function, from customer management to human resources to accounting to design to project coordination, each application a variation within the formal parameters the early entrants had established. A late phase of diminishing formal returns: the remaining variations increasingly incremental, the products converging on a set of canonical features, the differentiation between competitors shrinking to the width of a marketing claim.
By 2024, the sequence was approaching what Kubler would have recognized as exhaustion. Not the absence of products to build. The absence of products to build that the sequence had not already implied. Every significant variation of the subscription software model had been explored. The late entrants were producing refinements — better interfaces, smoother onboarding, marginal feature additions — within a formal space whose boundaries had been established a decade earlier.
Then, in the winter of 2025, the sequence became visible. AI made it trivially easy to produce new entries — to generate, in a weekend, the core functionality that a SaaS company had spent years building and millions of dollars refining. The sequence did not collapse because AI attacked it. The sequence became legible as a sequence, its structure exposed by a technology that could fill every remaining position in it overnight.
A trillion dollars of market value vanished in the first eight weeks of 2026. Workday fell thirty-five percent. Adobe lost a quarter of its valuation. Salesforce dropped twenty-five percent. When Anthropic published a demonstration of Claude's capacity to modernize legacy COBOL systems, IBM suffered its largest single-day stock decline in more than a quarter century. The market called it the SaaS Apocalypse. Kubler would have called it the recognition of sequence exhaustion — the moment when the density of possible entries within the sequence became so apparent that the market could no longer pretend the sequence was still in its generative phase.
---
The distinction between what died and what survived in the Death Cross is the distinction between the sequence and the ecosystem — a distinction Kubler's framework clarifies with a precision that financial analysis alone cannot provide.
The code was the sequence. The subscription software model organized itself around a formal problem — deliver a business function over the internet, charge a recurring fee, update continuously — and the code that implemented each variation was the artifact that occupied a position within that sequence. When AI could produce the code for any variation in hours, the sequence was revealed as exhausted: every position could now be filled by anyone with access to the tool and the capacity to describe what they wanted.
But the ecosystem was not the sequence. The ecosystem — the accumulated data, the integrations with other systems, the workflow patterns embedded in institutional muscle memory, the compliance certifications, the audit trails, the security guarantees that took years of sustained engineering to build — belonged to a different formal sequence entirely. The sequence of institutional infrastructure. The sequence of trust.
This second sequence was in its early phase. The formal possibilities of how institutions organize around data, coordinate across systems, and build the structures of accountability that complex operations require were not exhausted. They were barely explored. The companies whose value resided in this second sequence — the ones that had built genuine ecosystems, not merely code — were repriced, not destroyed. The companies whose value resided entirely in the first sequence, the code sequence, were the ones that collapsed.
Kubler's framework provides the analytical tool that separates the structural correction from the structural catastrophe. The Death Cross was not the death of software. It was the exhaustion of one formal sequence — software-as-code-product — and the revelation of another — software-as-institutional-infrastructure — that was still in its early, generative phase. The value migration that followed was not a market anomaly. It was the structural consequence of sequence succession: the replacement of an exhausted sequence by one whose formal possibilities remain open.
---
The pattern extends beyond software. In every domain where AI has achieved production-level competence, the same structural analysis applies: the sequence of making the thing approaches exhaustion, and the sequence of deciding what thing to make, for whom, and within what institutional context is revealed as the sequence that matters.
Commercial music provides a particularly legible example. The formal sequences of popular music — the harmonic vocabularies, rhythmic patterns, structural conventions, production techniques that define genres and subgenres — have been explored with increasing density over the past century. AI composition tools, trained on the accumulated output of these sequences, can now produce competent entries in any genre at industrial scale. A producer who needs a thirty-second piece of background music in the style of lo-fi hip-hop can generate a hundred variations in an hour. Each will be formally competent. Each will occupy a position within the sequence that human producers established over decades.
The sequence of producing music within established genres is approaching exhaustion — not because AI has replaced human musicians, but because AI has made it possible to fill every remaining position in the sequence so rapidly that the sequence's exhaustion becomes visible. What remains valuable is not the capacity to produce another entry in the sequence. It is the capacity to recognize that the sequence is insufficient — that the formal possibilities it contains do not address a problem that has become pressing — and to open a new one.
The musician who achieves this — who produces the prime object that opens a new sequence of musical possibility — will not be competing with AI. She will be operating at the level of the sequence that AI cannot reach: not the filling of formal positions, but the recognition that a new class of positions is needed.
Visual art, architecture, legal analysis, medical diagnosis, scientific research — each domain exhibits the same structural pattern. The sequences that can be formalized — that can be reduced to a problem-and-solution structure amenable to statistical inference — are being filled by AI at a pace that reveals their approaching exhaustion. The sequences that cannot be formalized — the ones that require embedded judgment, institutional knowledge, the capacity to perceive the insufficiency of existing sequences — remain in their early phases, with vast formal possibility still unexplored.
---
Kubler observed that sequence transitions are rarely smooth. The exhaustion of one sequence and the opening of another are typically accompanied by a period of disorientation — one in which the old sequence's practitioners resist the evidence of exhaustion, the new sequence's early entrants struggle to articulate what they are doing in terms the existing institutional structures can recognize, and the broader culture oscillates between nostalgia for the old sequence and anxiety about the new.
The disorientation is not a failure of understanding. It is a structural feature of the transition itself. The old sequence provided not only a body of solutions but a vocabulary, a set of evaluative criteria, a shared understanding of what constituted good work and what constituted bad. When the sequence exhausts, the vocabulary exhausts with it. The criteria no longer apply. The practitioners who were masters of the old sequence find that their mastery — genuine, hard-won, the product of deep entrance — is mastery of a set of formal possibilities that no longer define the frontier.
The senior engineer whose expertise in software architecture was built through decades of entrance into the code-product sequence faces precisely this disorientation. Her knowledge of systems design, dependency management, scaling strategies, and performance optimization was the product of deep engagement with a formal sequence that AI has now revealed as approaching exhaustion. The knowledge is not false. It is not irrelevant in some absolute sense. But its relationship to the frontier has changed. The frontier has moved to the sequence above — the sequence of institutional infrastructure, of judgment about what systems should exist and for whom — and the knowledge that was built for the old frontier provides leverage in the new one only if the practitioner can recognize that the frontier has shifted.
Kubler would note that this recognition is itself a form of entrance — the entrance into a new sequence, at a point early enough that the formal possibilities are still open. The practitioners who make this transition are the ones who transfer their structural understanding of the old sequence to the new one: who recognize that the judgment, taste, and architectural instinct developed through years of building systems are precisely the capacities the new sequence requires, even though the new sequence does not look like the old one and cannot be evaluated by the old criteria.
The practitioners who do not make the transition — who insist that the old sequence still defines the frontier, that the new tools are producing inferior work, that the depth of the old entrance cannot be replaced — are Kubler's late entrants in an exhausted sequence, producing increasingly refined variations within a formal space that has ceased to generate new possibilities.
---
The Death Cross, understood through Kubler's framework, is not a catastrophe. It is a revelation. The crossing of the curves — the moment the AI market overtakes the SaaS market in aggregate value — marks the point at which the exhaustion of one formal sequence and the generative potential of another become legible to the market in financial terms.
The market, in this reading, is not wise. It is not foolish. It is a mechanism that detects sequence exhaustion with a particular kind of sensitivity: the sensitivity to the rate of formal innovation, expressed as willingness to pay a premium for future possibility. When the rate of innovation within a sequence declines — when the remaining positions are refinements rather than departures — the market withdraws the premium. When a new sequence opens, and the rate of innovation in its early phase is visibly high, the market assigns the premium there.
The trillion dollars that vanished from software valuations did not disappear. It migrated — to the companies and individuals positioned at the early phase of the new sequences that AI makes possible. The migration is not complete. It is not orderly. It is accompanied by the disorientation, the resistance, and the human cost that every sequence transition produces.
But the structural logic is clear. The sequence of software-as-code-product is approaching exhaustion. The sequences that open above it — institutional infrastructure, judgment-driven design, the curation and direction of AI-generated abundance — are in their early phases. The formal possibilities are vast. The prime objects have not yet been produced. The makers who will produce them are, in many cases, the same people who built their expertise in the exhausting sequence below — practitioners whose deep entrance gives them the structural understanding to recognize where the new sequences lead.
The Death Cross is not the end of making. It is the moment when the formal landscape shifts, and the question of where to enter — which sequence to join, at which point, with which capacities — becomes the question that determines everything.
Kubler would recognize the moment. He described it, in 1962, with the precision of a crystallographer examining a phase transition. The crystal structure changes. The atoms are the same. The arrangement is new. And the properties of the new arrangement cannot be predicted from the old one, only discovered through the specific process of entering the new sequence and exploring what it contains.
The exploration has barely begun.

---
A crystallographer who grows a crystal slowly — controlling the temperature, managing the saturation of the solution, allowing the lattice to assemble one molecular layer at a time — produces a structure of extraordinary internal order. The crystal is transparent because its regularity permits light to pass through without scattering. Its faces are flat because each layer was deposited in alignment with the layers beneath it. Its strength is a function of its perfection: every bond in its place, every plane continuous, every defect minimized by the patience of the process.
Force the crystallization — supersaturate the solution, drop the temperature rapidly, accelerate the process by any of the means available — and the crystal forms faster. But the internal structure degrades. The lattice contains voids, dislocations, boundaries where one region of order meets another at a misaligned angle. The crystal is opaque where it should be transparent. Its faces are rough. Its strength is compromised by the defects that rapid formation produces.
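The speed-order relationship at the heart of the analogy can be made concrete with a toy deposition model — an illustrative sketch only, not real crystallography. Each new layer copies the one beneath it, except that a faster growth rate introduces a higher probability of misalignment, and each misalignment leaves a defect in the lattice:

```python
import random

def grow_crystal(n_sites, error_rate, seed=0):
    """Toy model of crystal growth: deposit sites one at a time.
    Each new site aligns with its predecessor, except with
    probability `error_rate` (faster growth -> more errors)."""
    rng = random.Random(seed)
    lattice = [1]
    for _ in range(n_sites - 1):
        prev = lattice[-1]
        # A misaligned deposition leaves a permanent lattice defect.
        lattice.append(-prev if rng.random() < error_rate else prev)
    return lattice

def count_defects(lattice):
    """A defect is a boundary where adjacent sites are misaligned."""
    return sum(1 for a, b in zip(lattice, lattice[1:]) if a != b)

slow = grow_crystal(1000, error_rate=0.01)  # patient, controlled growth
fast = grow_crystal(1000, error_rate=0.30)  # forced, rapid growth

print(count_defects(slow), count_defects(fast))
```

Under any reasonable seed, the slowly grown lattice carries an order of magnitude fewer defects than the forced one — the structural point of the analogy, stripped to a few lines.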
The analogy is imperfect. Kubler, whose preference for analogies drawn from the physical sciences was a defining feature of his intellectual style, would have appreciated both its precision and its limits. The precision lies in the relationship between speed and structural order. The limits lie in the fact that crystals do not learn, do not accumulate understanding, do not develop the capacity to see their own structure and modify their growth accordingly. But the relationship between speed and order — between the pace at which a formal sequence is filled and the structural depth of the artifacts that fill it — is the question that connects Kubler most directly to the critique of smoothness that Byung-Chul Han has mounted against the digital age.
---
Han's argument, stripped to its structural core, is that the removal of friction from human experience produces artifacts and experiences that are formally competent but structurally shallow. The AI-generated code that works without struggle. The AI-drafted brief that cites the right cases. The AI-composed music that follows the genre's conventions. Each occupies a position in a formal sequence. Each is a replica — a variation within an established set of formal parameters. And each has been produced without the process of sequential exploration that, in Kubler's framework, is where the structural understanding of the sequence develops.
The smoothness Han describes is the surface quality of rapid crystallization. The artifact appears complete. Its faces are regular. Its formal properties are coherent. But the internal structure — the depth of understanding that the artifact embodies, the connection between this position in the sequence and the positions that preceded it, the sense of why this variation rather than another — is compromised by the speed of its production.
Kubler provides the analytical vocabulary that Han's critique lacks. Han can describe the phenomenology of smoothness — what it feels like to encounter artifacts produced without friction — but he cannot specify what is lost, structurally, when the friction disappears. Kubler can. What is lost is the process of entrance: the slow traversal of the sequence that builds the maker's understanding of its structure. The maker who produces a solution after years of engagement with the sequence understands not only what the solution is but where it falls — which problems it addresses, which it leaves open, how it relates to the solutions that preceded it and the possibilities it creates for the solutions that follow.
The maker who produces the same solution through AI assistance in an afternoon has the solution. She does not have the structural understanding. The artifact is the same. The maker is different. And the difference matters, not because the artifact is worse — it may be identical — but because the maker's capacity to produce the next artifact, the one that extends the sequence in a direction only structural understanding can perceive, has not been developed.
---
Kubler's framework complicates Han's conclusion, however, in a way that prevents the argument from settling into a comfortable nostalgia for friction.
Not all friction is formative. Kubler was a scholar of pre-Columbian art — a field in which the barriers to entrance were not primarily intellectual but institutional, geographical, economic. The Mesoamerican potter whose work Kubler studied did not face a deficit of productive struggle. She faced a landscape of constraints: limited materials, limited tools, limited access to the formal solutions developed in workshops she could not visit, limited time between the demands of subsistence and the demands of her craft. These constraints were real. They shaped her work. But they were not the constraints that built structural understanding of the formal sequence. They were the constraints of access — the barriers that prevented entrance into the sequence altogether.
The removal of access friction is not a loss. It is a liberation. The developer in Lagos who could not enter the sequence of web application development because she lacked the institutional infrastructure — the training programs, the mentors, the development environments, the financial runway to spend years in unpaid apprenticeship — is not made shallower by the removal of that barrier. She is made possible. Her entrance into the sequence, previously foreclosed, is now open. The question of whether her engagement with the sequence will produce structural understanding depends on what she does after the barrier falls — not on whether the barrier itself was somehow beneficial.
The distinction between formative friction and access friction is the distinction that separates a rigorous critique of AI's effect on depth from a romanticized defense of difficulty for its own sake. Han's argument is powerful when it addresses formative friction — the resistance that builds understanding through the specific process of sequential exploration. It weakens when it fails to distinguish formative friction from access friction — when it treats all difficulty as productive and all ease as corrosive.
Kubler's framework enforces the distinction. The formal sequence is the unit of analysis. Friction that advances one's understanding of the sequence's structure — that builds the capacity to read its shape, identify its live edges, recognize where genuine formal possibility remains — is formative. Friction that prevents entrance into the sequence — that keeps potential makers outside the formal landscape entirely — is access friction, and its removal is an unqualified expansion of the field of human making.
AI removes both kinds simultaneously, and that simultaneity is the source of the dilemma. The same tool that opens the sequence to the developer in Lagos also allows the developer in San Francisco to bypass the formative process that would have built her structural understanding. The same collapse of the imagination-to-artifact ratio that democratizes entrance also accelerates production to a pace that degrades the depth of engagement. The crystal grows faster. Its internal order is compromised. And the compromise cannot be separated from the liberation, because both are produced by the same mechanism.
---
The crystallographer's dilemma is, finally, a question about what kind of structure the age of AI will produce. Kubler's formal sequences are cultural crystals — structures of linked solutions whose internal order determines their capacity to support further growth. A sequence with deep internal order — one in which each position has been occupied by a maker who understands the sequence's structure and extends it in a direction that the structure itself suggests — can support a vast range of further development. The Gothic sequence, grown slowly over two centuries of linked solutions, produced a structural vocabulary capable of generating cathedrals of extraordinary complexity and beauty, each one building on the structural understanding accumulated by its predecessors.
A sequence with shallow internal order — one in which positions have been filled rapidly, without the process of entrance that builds structural understanding — may appear complete without being deep. It may contain a thousand variations that are formally competent and structurally unconnected — artifacts that occupy positions without understanding why those positions exist or what they make possible. The sequence looks full. It is, in a structural sense, empty.
The question is whether the ascending friction that builders describe — the relocation of difficulty from the mechanical level to the level of judgment, vision, and structural understanding — produces a new kind of internal order that compensates for the order that rapid crystallization degrades. If the removal of mechanical friction frees the maker to engage with the sequence at a higher structural level — to think about the shape of the sequence rather than the execution of the next variation — then the crystal may grow differently, not slowly in the old sense but with a different kind of depth, organized at the level of structural understanding rather than at the level of manual execution.
This is the possibility that Han cannot see from his garden and that the triumphalists cannot see from their dashboards. The old depth was real. Its loss is real. But the question of whether a new depth is forming — organized differently, located at a higher structural level, produced by a different kind of engagement with the formal sequence — is genuinely open. Kubler's framework identifies the question with precision. It does not answer it. The answer depends on what the makers do — whether they use the liberation from mechanical friction to engage more deeply with the structural logic of the sequences they inhabit, or whether they allow the speed of production to substitute for the understanding that slow production once enforced.
The crystal is still growing. Its final structure is not yet determined. The molecular layers are being deposited faster than at any point in the history of human making. Whether the resulting crystal will be transparent or opaque — whether the formal sequences of the AI age will possess the internal order that supports further growth or the shallow density that forecloses it — depends on choices that are being made now, in every workshop and studio and engineering office and classroom where the tools are in use.
The crystallographer's dilemma is not resolved by choosing speed or choosing patience. It is resolved by understanding the relationship between the two — by recognizing that the depth of a sequence is not determined by the pace of its filling but by the quality of attention brought to each position that is filled. That quality of attention is what entrance produces, what structural understanding enables, and what the age of AI places at the greatest risk and at the greatest premium simultaneously.
---

Every previous participant in a formal sequence was a biological organism operating under constraints that shaped the sequence as decisively as the formal possibilities it explored.
The Gothic builder was cold. He worked in stone because stone was available and wood was scarce or insufficient for the spans the sequence demanded. He was answerable to a bishop whose theological requirements constrained the symbolic program of the cathedral. He was constrained by the guild system that regulated who could practice which craft and under what conditions. He was mortal — a fact that imposed a limit on the complexity of the projects he could undertake and required the development of systems for transmitting knowledge across generations of builders who would never meet.
Each of these constraints shaped the formal sequence of Gothic architecture. The availability of stone determined which structural solutions were attempted. The bishop's requirements determined which symbolic programs were explored. The guild system determined the rate at which innovations could propagate from one workshop to another. Mortality determined the temporal scale of the sequence's development: centuries rather than decades, because each generation had to reconstruct the accumulated knowledge of its predecessors before it could extend the sequence.
The constraints were not obstacles to be overcome. They were constitutive elements of the sequence itself. The formal possibilities of Gothic architecture are inseparable from the material, institutional, and biological constraints under which they were explored. Remove the constraints — give the Gothic builders unlimited materials, unlimited labor, unlimited lifespans — and the sequence that results would be a different sequence, producing different artifacts, organized by a different internal logic.
---
AI participates in formal sequences without these constraints. It does not experience cold, scarcity, institutional pressure, or mortality. It does not experience the sequence as a lived condition — a landscape of problems that press on consciousness with the weight of real stakes. It processes the sequence as a formal structure — a pattern of linked solutions that can be analyzed, extended, and varied without the embodied understanding that comes from having been shaped by the sequence's constraints.
This is not a deficit in the conventional sense. The absence of constraints is precisely what makes AI so powerful as a sequence-filling mechanism. Unconstrained by material scarcity, institutional friction, or biological limitation, AI can explore formal spaces with a speed and thoroughness that no constrained participant can match. It can produce solutions that constrained participants would never attempt — not because the solutions are beyond their formal capacity, but because the constraints of their situation make certain explorations impractical, uneconomical, or invisible.
But the absence of constraints also changes what the sequence produces. When all participants are biological organisms embedded in a world of material, institutional, and mortal constraints, the sequence is shaped by the interaction between formal possibility and lived resistance. The solutions that survive are the solutions that work — not in the abstract formal sense that they occupy a coherent position in the sequence, but in the material sense that they address a problem that a constrained organism actually faces. The flying buttress survives as a solution not because it is formally elegant, which it is, but because it solves a problem that stone builders actually have: how to span large interior spaces when the walls must be thin enough to admit light and the stone must support its own weight against gravity.
When one participant is unconstrained, the formal possibilities are explored more comprehensively, but the constraint channel weakens. The filtering function that lived resistance provides — the function that selects, from among all formally possible solutions, the ones that address problems actual organisms face — is no longer built into the process of production. It must be supplied externally, by human judgment that evaluates the AI's output against criteria the AI does not possess: Does this solution address a real problem? Does it work in the material conditions where it will be deployed? Does it account for the constraints — physical, institutional, ethical, emotional — that the organisms who will use it actually face?
---
The domains where AI participation is most transformative are the domains where the constraint channel is narrowest — where the formal problem can be specified with sufficient precision that the absence of lived constraints does not degrade the quality of the solution.
Software development is the paradigmatic case. The formal problem of writing code that performs a specified function can be described with enough precision that an unconstrained participant — one that does not experience the frustration of debugging, the institutional politics of a development team, the pressure of a deadline — can produce solutions of extraordinary quality. The constraints that matter in software are formal constraints: Does the code compile? Does it pass the tests? Does it perform within the specified parameters? These constraints are internal to the formal sequence. They do not require the participant to have lived through the sequence's history to respect them.
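The claim that software's constraints are internal to the formal sequence can be illustrated with a minimal sketch: a specification expressed entirely as formal checks, applicable to any candidate without knowing anything about who, or what, produced it. The function names here are hypothetical, chosen for the example:

```python
import time

def satisfies_spec(sort_fn, cases, time_budget_s=1.0):
    """Evaluate a candidate against purely formal constraints:
    does it produce correct output (pass the tests), and does it
    finish within the specified performance parameters?"""
    start = time.perf_counter()
    for case in cases:
        if sort_fn(list(case)) != sorted(case):
            return False  # fails the tests: position not occupied
    # correctness passed; now check the performance parameter
    return time.perf_counter() - start <= time_budget_s

cases = [[3, 1, 2], [], [7], [5, 5, 1], list(range(100, 0, -1))]

print(satisfies_spec(sorted, cases))         # → True
print(satisfies_spec(lambda xs: xs, cases))  # → False
```

Nothing in the check requires the candidate to have lived through the sequence's history; the spec either holds or it does not. That is the narrowness of the constraint channel the paragraph above describes.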
Medicine provides a contrasting case. The formal sequence of diagnostic reasoning — the chain of linked solutions to the problem of identifying disease from symptoms, history, and test results — can be modeled with impressive accuracy. AI systems that process medical data produce diagnoses that match or exceed the accuracy of experienced clinicians across a wide range of conditions. But the constraint channel in medicine extends far beyond formal accuracy. The clinician's judgment is shaped by constraints that the formal sequence cannot capture: the patient's emotional state, the institutional context of the diagnosis, the ethical implications of uncertainty, the difference between a diagnosis that is statistically correct and a diagnosis that is appropriate for this particular patient in this particular circumstance.
These constraints are not noise. They are constitutive elements of the medical sequence, in the same way that the bishop's theological requirements were constitutive elements of the Gothic sequence. The AI system that diagnoses without them produces a formally correct artifact that may be contextually wrong — a position in the sequence that is formally occupied but substantively empty.
Art presents the most contested case. The formal sequences of artistic production — the chains of linked solutions to problems of representation, expression, composition, material manipulation — can be modeled, and AI systems produce artifacts that occupy positions within these sequences with remarkable competence. An image generated in the style of Vermeer occupies a position in the Vermeer sequence. A poem generated in the style of Emily Dickinson occupies a position in the Dickinson sequence.
But artistic production is, among all forms of human making, the one most deeply shaped by the constraint channel. Vermeer painted as he painted not because the formal possibilities of oil painting dictated his approach but because the material conditions of seventeenth-century Delft — the quality of the light, the cost of ultramarine pigment, the domestic interiors that were his available subjects, the optical instruments that were newly available to him — constrained and enabled his formal choices simultaneously. Dickinson wrote as she wrote not because the formal possibilities of English verse dictated her approach but because the conditions of her life — the isolation, the intellectual intensity, the specific quality of attention that her circumstances produced — shaped her formal choices in ways inseparable from the biographical facts.
AI produces Vermeer-like images without the light of Delft. It produces Dickinson-like poems without the isolation of Amherst. The formal positions are occupied. The constraint channel is absent. And the question of whether the absence matters — whether the artifacts that result are formally equivalent to the constrained originals or structurally different in ways that formal analysis alone cannot detect — is the question that divides practitioners, critics, and audiences in every domain where AI-generated artifacts now circulate.
---
Kubler's framework does not resolve this question by fiat. It identifies the structural change: a new kind of participant, with a new relationship to the constraints that have always shaped formal sequences, producing artifacts that are formally positioned but experientially untethered. Whether the tethering matters depends on what one believes the constraint channel contributes to the sequence's development.
If the constraint channel is merely a filter — if its only function is to select, from among formally possible solutions, the ones that happen to be feasible under real-world conditions — then AI's unconstrained participation is an improvement. Remove the filter, and the full formal space becomes available. The best solutions can be identified and then filtered afterward, through human judgment, for real-world applicability.
If the constraint channel is constitutive — if the lived experience of working under material, institutional, and mortal constraints is part of what gives the sequence its structure, its direction, its capacity to generate artifacts that resonate with the organisms who encounter them — then AI's unconstrained participation produces a different kind of sequence. One that is formally comprehensive and experientially hollow. One that fills every position without understanding why any position matters.
Kubler's own practice suggests he held the second view, though he never stated it in terms that anticipated AI. His decades of work with pre-Columbian artifacts — objects produced by anonymous makers under severe material constraints, in cultures where the individual maker's biography was irrecoverable — had taught him that the constraints were legible in the objects themselves. The Mayan corbel arch carries within its form the evidence of the stone it was made from, the tools that shaped it, the structural limitations the builders faced and the ingenuity with which they addressed them. Remove those constraints, and the arch becomes a different object — not better, not worse, but disconnected from the material history that gives it structural meaning within the sequence.
AI-generated artifacts carry no such material history. They are positions in sequences without the evidence of passage through those sequences. They are, in a metaphor Kubler might have used, signals without noise — transmissions so clean that they lack the artifacts of the channel through which they traveled, and those channel artifacts are, paradoxically, part of the information.
The noise is part of the signal. The constraint is part of the content. The lived resistance through which the human maker passes on the way to occupying a position in a formal sequence is not an obstacle to be optimized away. It is the process by which the position acquires the structural depth that connects it to the positions before and after it in the chain.
AI enters the sequence without passing through it. The position is occupied. The passage is absent. And the sequence itself — the chain of linked solutions whose internal order determines its capacity to support further growth — is changed by the absence, in ways that are only beginning to become visible.
---

Imagine a museum that contains every artifact ever made and every artifact that could be made. The museum is infinite. Its corridors extend in every direction. Every wall is covered, every surface occupied, every possible variation of every formal sequence instantiated in physical form.
The museum already exists, in a sense. It is the aggregate output of every generative AI system operating in 2026. Not housed in a single building but distributed across servers, feeds, platforms, and devices, the collection grows at a rate that renders any catalog obsolete before it is compiled. Images produced at a scale that exceeds human art history in total volume, compressed into months. Code generated at a rate that surpasses the cumulative output of every programmer who has ever lived. Music, text, architectural renderings, molecular structures, legal arguments, educational materials — each produced at a density that makes the word "collection" inadequate.
The question the museum poses is the question that every previous expansion of cultural production has posed, amplified to a degree that transforms it from a curatorial problem into an existential one: How do you walk through it? What do you stop in front of? What criteria guide your attention in a landscape of infinite abundance?
---
Every previous technology that reduced the cost of cultural production generated a version of this question, and every previous generation answered it by building institutions of curation — structures whose function was to apply judgment to abundance.
The printing press produced the library, the bookshop, the literary review, the university reading list. Each was a mechanism for filtering the flood of printed material into something a human mind could navigate. The criteria varied — scholarly significance, literary merit, commercial appeal, pedagogical utility — but the function was consistent: reduce the space of available artifacts to a space of attended artifacts, guided by human judgment about what merited attention.
Photography produced the gallery, the museum of modern art, the photography magazine, the curated exhibition. The medium's capacity to produce images at a rate that painting could never match required a corresponding capacity to select, from among the millions of images produced, the ones that advanced the formal sequences of the medium.
Recorded music produced the record label, the radio programmer, the music critic, the curated playlist. Each was a filter — a mechanism for reducing the space of recorded music to a navigable subset, guided by judgment about which recordings extended the formal sequences of their genres and which merely occupied positions.
In each case, the curatorial institution emerged in response to a specific condition: the cost of production fell faster than the cost of attention. When anyone can produce, the question of what deserves attention becomes the structurally important question. The answer is always supplied by human judgment, institutionalized in structures that translate individual discernment into collective focus.
AI has produced the most extreme version of this condition in cultural history. The cost of producing a formally competent artifact in nearly any domain has approached zero. The cost of attention has not changed. The human mind can still hold only a finite number of artifacts in focus at any given moment. The disproportion between production capacity and attentional capacity has never been greater.
---
Kubler's framework provides the theory of curation that this condition requires. The theory is simple in structure, radical in implication: the artifact that merits attention is the one that occupies a structurally significant position in a formal sequence. Not the most beautiful artifact, not the most technically accomplished, not the most emotionally resonant — though it may be any or all of these — but the one whose position in the sequence either opens new possibilities or extends existing ones in directions the sequence had not previously explored.
The prime object merits attention because it opens a sequence. The replica that extends a sequence in a genuinely new direction merits attention because it reveals possibilities the prime object only implied. The replica that occupies a position already filled — however competently, however beautifully — does not advance the sequence and therefore does not, by this criterion, merit the same order of attention.
The criterion is structural, not subjective. It does not depend on the viewer's preferences, the maker's reputation, or the institution that presents the artifact. It depends on the artifact's position — where it falls in the chain of linked solutions, whether it advances or merely occupies.
This is a severe criterion. Applied rigorously, it would empty most galleries, most playlists, most bookshops. The vast majority of artifacts in any domain — human-made or AI-generated — occupy positions that have already been filled. They are competent. They may be beautiful. They do not change the shape of the sequence.
The severity is the point. In a museum of everything, a criterion that admits everything is no criterion at all. The value of Kubler's structural approach is precisely its capacity to discriminate — to separate, from among the infinite artifacts the museum contains, the finite number that change the landscape of formal possibility.
---
The practical application of this criterion requires a capacity that no algorithm currently possesses: the capacity to read a formal sequence well enough to identify where its live edges lie.
The live edge of a sequence is the boundary between what has been explored and what remains possible. It is not the boundary of the formally conceivable — AI can compute that boundary more efficiently than any human mind. It is the boundary of the structurally significant — the region where the next position occupied would change the sequence's direction, open a new line of development, or reveal a connection to another sequence that had not previously been visible.
Identifying the live edge requires the kind of structural understanding that only deep entrance produces. The critic who has spent years inside a formal sequence — who has traversed its canonical solutions, followed its dead ends, internalized its internal logic — can perceive the live edge as a felt quality, a sense of where the sequence is straining against its own limits. The perception is not algorithmic. It is not the product of scanning the sequence's contents and computing which positions remain unoccupied. It is the product of having been inside the sequence long enough that its structure has become legible as a landscape — a landscape in which certain directions feel open and others feel closed, in which certain variations feel urgent and others feel exhausted.
This perception is what distinguishes the great curator from the competent cataloger. The cataloger can organize the museum's contents. The curator can walk through the museum and stop — instinctively, reliably, without being able to articulate the full basis for the choice — in front of the artifact that changes something. The artifact that occupies a position at the live edge of a sequence. The one that, once seen, makes the viewer see the sequence differently.
AI can catalog. It can organize, classify, recommend based on similarity, predict engagement based on historical patterns. What it cannot do — what the absence of deep entrance prevents — is perceive the live edge. The live edge is visible only to a mind that has been shaped by the sequence, that carries the sequence's structure as an internalized map, and that can therefore perceive where the map ends and the unmapped territory begins.
---
The age of AI has made this perception the most valuable cognitive capacity a human being can possess. Not the most marketable, necessarily — markets reward many things before they reward structural perception. But the most consequential. Because in a museum of everything, where every formally possible artifact exists or can be generated on demand, the only act that changes the landscape is the act of seeing — of perceiving, amid the density of replicas, the position that opens a new sequence, the artifact that the existing sequences could not have produced.
The curator in the museum of everything is not selecting for beauty, though beauty may be a correlate of structural significance. She is not selecting for novelty, though novelty may accompany the opening of a new sequence. She is selecting for position — for the structural relationship between the artifact and the sequences it belongs to or opens.
This is the skill that education must develop if it is to remain relevant in the age of AI. Not the skill of producing artifacts — production is now cheap. Not the skill of analyzing artifacts by established criteria — analysis can be automated. The skill of perceiving the structural significance of an artifact within the landscape of formal sequences. The skill of reading a sequence well enough to see its live edges. The skill of walking into a room filled with a thousand competent variations and knowing, without a checklist, which one changes the direction of the sequence.
The skill, in Kubler's terms, of distinguishing the prime object from the replica in a landscape where the density of replicas approaches infinity.
This skill cannot be taught by instruction alone. It is the product of entrance — of the slow, friction-rich process of moving through a formal sequence, encountering its solutions, internalizing its structure, and developing the felt sense of where it leads and where it ends. It is the skill of the crystallographer who has watched crystals grow slowly enough to understand the principles that govern their formation — and who can therefore recognize, when confronted with a rapidly grown crystal, where the structural defects lie and what they cost.
The museum of everything awaits its curators. They will not be algorithms, though algorithms will assist them. They will be minds shaped by deep entrance into formal sequences — minds that carry the structure of the sequences they have traversed and that can perceive, amid the infinite abundance of the museum, the finite number of artifacts that change the landscape.
The museum does not lack for contents. It lacks for the judgment that gives contents meaning. That judgment is human. It is built by the same process — entrance, immersion, the slow accumulation of structural understanding — that has always produced it. The tools have changed. The capacity they demand has not. It has only become more visible, more necessary, and more difficult to develop in a world that offers every shortcut to every destination except the one that matters: the deep understanding of where you are, where you have been, and where the sequence has not yet gone.
The question that has organized this book — where does this thing fall in the sequence? — leads, in its final implication, to a question about the human beings who make things and the specific capacity that the age of AI has rendered most consequential.
The capacity is not production. Production is now abundant. The capacity is not even judgment in the general sense, though judgment matters. The capacity is something more specific, more structurally defined, and more resistant to automation than any other cognitive act: the capacity to open a new formal sequence.
To open a sequence is to recognize that a new problem exists — or that an old problem can be reconceived in terms that make a genuinely new class of solutions possible — and to produce the first artifact that demonstrates the new class. The flying buttress. The Bessemer process. TCP/IP. The twelve-year-old's question, "What am I for?" Each is a prime object. Each created a space of formal possibility that did not previously exist. Each changed the landscape downstream in ways that the landscape, by its own internal logic, could not have generated.
The act of opening a sequence is distinct from the act of filling one. Filling a sequence — producing variations within an established set of formal parameters — is the work that AI performs with extraordinary competence. The statistical engine that powers generative models is, in Kubler's terms, a replica-production engine of unprecedented scale. It processes the accumulated output of formal sequences and generates new entries that are consistent with the patterns it has learned. The entries may be surprising. They may combine elements from separate sequences in configurations no human maker would have produced. But they are, structurally, entries within sequences that already exist. They occupy positions that the existing formal landscape implies.
Opening a sequence requires something that the existing landscape does not imply. It requires the perception of an absence — a gap in the formal landscape that no existing sequence addresses, a problem that presses on consciousness with a specificity that statistical inference cannot generate because the problem is not yet represented in the data.
---
The perception of absence is the cognitive act that most clearly distinguishes the human maker from the AI system in 2026. It is not the only distinguishing act, and it may not remain distinguishing indefinitely — the question of whether future AI systems will develop the capacity to perceive structural absences is genuinely open, and this book does not pretend to resolve it. But as of this writing, the perception of absence is the human act, and its structural properties explain why.
An absence is, by definition, not present in the data. It is the thing that the existing sequences do not contain, the problem they do not address, the formal possibility they do not imply. A system trained on the outputs of existing sequences can interpolate within those sequences, extrapolate along their established trajectories, and combine elements across sequences in novel configurations. What it cannot do is perceive that the sequences themselves are insufficient — that the landscape of formal possibilities they define does not encompass a possibility that matters.
The perception of insufficiency requires a relationship to the formal landscape that is different in kind from the relationship a generative model maintains. The model processes the landscape as data — as a distribution of patterns from which new patterns can be inferred. The human maker inhabits the landscape as a condition — as a set of possibilities and impossibilities that bear on her with the weight of lived experience. She knows the landscape not because she has processed it but because she lives in it, and the things the landscape cannot do are things she cannot do, and the problems the landscape does not address are problems she faces.
This is what Kubler meant, though he did not state it in these terms, when he described the two modes of invention. The first mode — the confluence of previously separate sequences — produces prime objects through combination: knowledge from one domain encountering a problem in another. AI can, in principle, perform this combinatorial operation, and there is evidence that it does so productively in domains like drug discovery and materials science, where the formal sequences are well-defined and the combinatorial space is computationally tractable.
The second mode — pure invention, what Kubler described as creation "solely by means of his own engagement with his milieu" — is the mode that resists automation most stubbornly. It requires not the combination of existing formal elements but the recognition that the existing elements are insufficient. That recognition is not a computation. It is an experience — the experience of inhabiting a formal landscape and finding it inadequate to the problem at hand.
Einstein's thought experiment about riding a beam of light was not a combination of existing physical concepts. It was the recognition that the existing concepts — Newtonian mechanics, the ether theory, the established formal sequence of classical physics — could not accommodate an experience that Einstein could vividly imagine but that the sequence could not contain. The prime object that followed — special relativity — was not a position within the sequence of classical physics. It was the opening of a new sequence, one whose formal parameters were incompatible with the sequence it replaced.
Darwin's question about the Galápagos finches was not a variation within the existing sequence of natural theology. It was the recognition that the existing sequence — the formal framework in which species were understood as fixed creations — could not account for the variation he had observed. The prime object — the theory of natural selection — opened a sequence that the previous sequence could not have generated, because the previous sequence was organized around assumptions that excluded the possibility.
In each case, the prime object emerged not from the processing of existing data but from the perception that the existing data were insufficient to account for an experience that the maker could not ignore.
---
The implications for education, for organizational design, for the cultivation of human capability in the age of AI are direct and specific.
If the capacity to open new sequences is the irreducible human contribution — and if that capacity depends on the perception of structural absence, which in turn depends on deep entrance into formal sequences and on the lived experience of inhabiting a landscape whose insufficiencies are felt rather than computed — then the institutions that develop this capacity are the institutions that matter most.
Education that teaches students to fill sequences — to produce artifacts within established formal parameters, to demonstrate competence in the canonical solutions of a given domain — is education that AI will make redundant. Not because the competence is unimportant, but because the competence is no longer scarce. AI provides it in unlimited quantity. The student who can write a competent legal brief, produce a competent piece of code, compose a competent analysis of a literary text has acquired a capacity that is now available to anyone with access to a generative model.
Education that teaches students to read sequences — to perceive their structure, identify their live edges, and recognize where they fail — is education that AI makes more valuable. The capacity to perceive structural absence cannot be acquired from the output of a generative model, because the model's output is, by construction, consistent with what already exists. The capacity is acquired through the slow process of entrance: the traversal of a sequence's history, the encounter with its canonical solutions, the internalization of its internal logic, and the gradual development of the felt sense of where the sequence leads and where it ends.
Organizations that reward sequence-filling — that measure productivity by the volume of artifacts produced within established parameters — will find their metrics satisfied by AI at a fraction of the cost of human labor. The sequence will be filled. The positions will be occupied. The organization will be efficient and, in a structural sense, static. It will produce more of what already exists without producing anything that changes the landscape.
Organizations that reward sequence-opening — that identify and cultivate the individuals capable of perceiving structural absences, that create the conditions under which the perception of insufficiency can develop into the production of prime objects — will possess the capacity that determines the direction of formal sequences. They will be less efficient in the narrow sense. They will produce fewer artifacts per unit of input. But the artifacts they produce will be the ones that change the landscape, and in a world where the landscape is being filled at industrial scale, the capacity to change it is the only capacity that creates durable value.
---
The argument, compressed to its structural core, is this: AI fills sequences. It fills them with a speed, a fluency, and a formal competence that no biological maker can match. The filling is valuable. It is where the potential of formal sequences is realized, where prime objects are tested and refined and adapted to the full range of contexts they can serve. Without the filling, prime objects remain isolated insights. With the filling, they become traditions, practices, cultures.
But the filling is not the opening. The opening is the act that creates the sequence in the first place — the recognition that a new problem exists, the production of the first artifact that demonstrates that a new class of solutions is possible. That act has always been rare. Kubler understood this. The prime object was always the exception, surrounded by a replica mass that outnumbered it by orders of magnitude. What has changed is not the rarity of the prime object but the density of the replica mass that surrounds it. The needle has not gotten smaller. The haystack has become infinite.
The human being who can produce a prime object — who can perceive the structural absence, recognize that the existing sequences are insufficient, and produce the first artifact that demonstrates what comes next — is not competing with AI. She is doing the thing that AI makes maximally valuable by making everything else maximally abundant.
This is not a consolation prize. It is not the diminished remainder left over after the machines have taken everything else. It is the most powerful capacity any species has ever possessed: the capacity to change the direction of formal sequences by introducing problems the sequences were not built to address. Every scientific revolution, every artistic movement, every political transformation, every technological disruption began with the opening of a new sequence. The opening was always the rarest act. It is now the most consequential one.
The age of AI has not diminished the human capacity to open sequences. It has revealed that capacity as the thing it always was — the rarest, most valuable, and most structurally significant act a mind can perform. The revelation is not comfortable. It demands a reorientation of education, of organizational design, of the criteria by which societies evaluate human contribution. But the revelation is also, in a sense of which Kubler's austere framework permits only the most restrained expression, cause for a particular kind of hope: the hope that the capacity most needed in the age of thinking machines is the capacity that only thinking beings have demonstrated.
The formal sequences continue. The things accumulate. The landscape fills. And somewhere, in a workshop or a laboratory or a classroom or a quiet room, a mind shaped by deep entrance into the sequences of its time perceives an absence — a gap in the formal landscape, a problem the existing sequences cannot address — and begins the work of producing the first artifact that demonstrates what comes next.
That work is the work that matters. It has always been the work that matters. The age of AI has simply made it impossible to pretend otherwise.
In 1982, twenty years after The Shape of Time appeared, Kubler delivered a lecture titled "The Shape of Time Reconsidered." The occasion was retrospective. The book had by then achieved the peculiar status of a work that is more cited than read, more invoked as a gesture toward structural thinking than engaged with as a system of specific claims. Kubler used the lecture to refine certain concepts, to acknowledge limitations, and to restate, with the precision of a scholar who had lived with his own framework long enough to see where it bent, the core proposition that had organized his intellectual life.
The proposition was this: the history of things is more useful than the history of people for understanding the shape of cultural change. Not because people do not matter. Because the things persist when the people are gone, and the structure of the things — their arrangement in sequences, their positions relative to what came before and what came after — reveals patterns that the biographical tradition obscures.
The proposition was offered in 1962 as a corrective to art history's biographical habit. It was offered in 1982 as a corrective to the cult of the individual that pervaded the art world of that era — the inflation of the artist's persona into a brand, the conflation of the maker's biography with the work's significance. In both cases, the proposition performed the same function: it redirected attention from the maker to the made, from the biography to the structure, from the person to the position.
In 2026, the proposition requires no corrective framing. It has become the default condition. AI generates artifacts without biographies. The things arrive without makers in any sense that the biographical tradition can accommodate. There is no life to narrate, no intention to decode, no cultural context that shaped the maker's sensibility in ways the critic can trace. There is only the thing, and its position in the sequence.
Kubler's framework, written for a world of human makers, turns out to be the framework best suited to a world where the makers are no longer exclusively human. Not because he anticipated AI — he could not have — but because he built his analytical structure on the one foundation that the arrival of AI does not dissolve: the structure of things across time.
---
But the framework fails in one respect that the age of AI makes visible, and the failure is instructive.
Kubler assumed that the opening of new formal sequences required the kind of intelligence that only biological organisms possess. He assumed this not as an explicit axiom but as an implicit condition of his entire analytical system. Every prime object in his framework was produced by a human maker. Every act of entrance was performed by a biological organism embedded in a world of material constraints. The formal sequences he described were sequences of human solutions to human problems, and the capacity to open new sequences — to recognize that a new problem existed and to produce the first artifact that demonstrated a new class of solutions — was a capacity he attributed, implicitly and without argument, to embodied, constrained, mortal intelligence.
The assumption was reasonable in 1962. No other kind of intelligence existed. It was reasonable in 1982. No other kind was imminent. It is no longer reasonable in 2026 — not because AI has demonstrated the capacity to open new sequences, but because the assumption's reasonableness can no longer be taken for granted. The question of whether AI can open sequences, or will in the future, is now a question that must be addressed rather than assumed away.
The evidence, as of this writing, supports a cautious version of Kubler's original assumption. AI systems generate artifacts within existing formal sequences with extraordinary competence. They combine elements across sequences in ways that produce surprising and sometimes valuable outputs. But the production of a genuine prime object — an artifact that opens a formal sequence not implied by the existing landscape — has not been convincingly demonstrated. The drug candidates, mathematical conjectures, and architectural designs that AI systems have produced are, on careful analysis, positions within sequences that human researchers opened. They are powerful replicas. They are not, or not yet, prime objects.
But the caveat matters. The history of claims about what machines cannot do is a history of revised claims. The claim that machines cannot play chess, cannot compose music, cannot generate natural language, cannot produce images indistinguishable from photographs — each was reasonable at the time of its making and incorrect within decades. The claim that machines cannot open new formal sequences may follow the same trajectory. It may not. The structural argument — that the perception of absence requires embedded, constrained, mortal intelligence — is more robust than the earlier claims, which were typically claims about task performance rather than claims about structural cognition. But robustness is not certainty, and the history of overconfident claims about machine limitation counsels humility.
Kubler's framework accommodates this uncertainty. If AI demonstrates the capacity to open sequences, the framework does not collapse. It expands. The formal sequences continue to be the unit of analysis. The distinction between prime objects and replicas continues to organize the landscape. What changes is the range of agents capable of producing prime objects — a change that would be historically momentous but structurally continuous with the framework's existing logic.
If AI does not demonstrate this capacity — if the opening of new sequences remains, for reasons that may be structural rather than contingent, a property of embedded biological intelligence — then Kubler's framework holds in its original form, and the human capacity to open sequences is confirmed as the irreducible contribution that no tool can automate.
The framework does not require the resolution of this question to remain useful. It requires only that the question be asked — and asked with the structural precision that Kubler's vocabulary provides. Not "Can AI be creative?" — a question too vague to answer. But "Can AI open a formal sequence that the existing landscape does not imply?" — a question that specifies what would count as an answer and that directs attention to the structural evidence rather than the subjective impression.
---
The book began with a developer generating forty prototypes in a weekend. The question posed in the prologue — which of these, if any, opens a sequence that did not exist before? — can now be answered with the full weight of ten chapters behind it.
The forty prototypes are replicas. Each occupies a position within formal sequences that human makers opened: the sequence of web applications, of mobile interfaces, of data visualization, of whatever specific domain the prototypes address. Each is competent. Some may be excellent. All belong to distributions defined by what already exists.
The question the developer cannot answer from inside the productivity rush — the question that requires stepping back from the sequence-filling and asking whether the sequences themselves are sufficient — is the question that Kubler's framework identifies as the structurally important one. Not "Can I produce more?" but "Is what I am producing within the right sequence?" Not "How fast can I fill this space?" but "Does this space need to be filled, or does a different space need to be opened?"
The answer, when it comes, will not be generated by the tool. It will be generated by the mind that has entered the sequences deeply enough to perceive their limits — the mind that knows, from the specific experience of having been inside the formal landscape, where the landscape fails. Where the existing solutions do not address the problem that matters. Where the sequences, however dense, however thoroughly filled, leave a structural absence that only a new sequence can address.
That perception — that recognition of absence, that capacity to see what the landscape lacks rather than what it contains — is the shape of time in the age of AI. It is the shape that Kubler described, in 1962, with the precision of a scholar who understood that the history of things is the history of the problems things address, and that the most consequential moments in that history are the moments when someone perceives a problem the existing things cannot solve.
The things have new makers now. The sequences fill faster than at any point in the history of human making. The landscape grows denser by the hour. And the question that organizes the landscape — not what can be made, but what needs to exist that does not yet exist — remains the question that only a mind embedded in the world can ask. A mind that experiences the landscape not as data but as a condition. A mind that knows what is missing because the absence presses on it with the weight of a problem that will not resolve until someone builds the first artifact that demonstrates what comes next.
The shape of time has not changed. The rate at which it fills has changed beyond recognition. The capacity to perceive its shape — to read its sequences, to identify its live edges, to see where the prime objects are needed — is the capacity the age of AI places at the center of human value.
Kubler spent his life studying the shapes that things make as they accumulate across time. The shapes continue to form. The things continue to accumulate. The time continues to flow, faster now, through channels wider than any Kubler could have imagined. And the question he asked — not who made this, but where does it fall? — remains, sixty-four years after he first posed it, the most precise and consequential question available for understanding what the age of thinking machines is building, and what it has not yet begun to build.
The position I kept misidentifying was my own.
For months, while building Napster Station and writing The Orange Pill, I thought I understood what I was doing: filling sequences. Shipping features. Generating artifacts. The productivity rush was intoxicating precisely because the artifacts accumulated so fast — each one a position occupied, each one evidence that the tools worked, that the vision was translating into reality at a speed I had never experienced in three decades of building.
Kubler stopped me. Not with a prohibition — he was never that kind of thinker — but with a question so structurally precise that it cut through every self-congratulatory metric I had been running: Where does this fall in the sequence?
The question is merciless. It does not care how many hours you worked. It does not care how many prototypes you shipped. It does not care whether your Slack is full or your dashboards are green. It asks only whether the thing you made advances the formal landscape or merely occupies a position the landscape had already implied.
I realized, reading Kubler through the lens of everything that happened in the winter of 2025, that most of what I had been building — most of what everyone had been building, in that first euphoric rush after the orange pill — was replicas. Brilliant replicas, sometimes. Replicas that worked, that shipped, that served real users. But replicas within sequences that human makers had opened decades earlier. The tools had made us impossibly fast at filling. They had not made us better at seeing what was missing.
That distinction — between filling and opening, between the replica and the prime object, between the position that the sequence already implies and the position that creates a new sequence — is the distinction I needed and did not have until Kubler provided it. The river metaphor tells me intelligence flows. The beaver metaphor tells me to build dams. Kubler tells me where the dam matters: at the point where a new sequence opens. Not where the existing current runs strongest — that is where the replicas accumulate — but where the landscape reveals an absence that no existing sequence can address.
What unsettles me most is the crystallographer's dilemma. The recognition that the speed at which we are filling sequences may be degrading the conditions under which new sequences become visible. That the density of replicas is not just a cataloging problem but a perceptual one — that the noise floor of competent, AI-generated artifacts may be rising faster than our capacity to perceive the prime objects hiding within it. The haystack is not just larger. It is actively making the needle harder to see.
And yet the hope in Kubler's framework is real, because it locates human value at the one point the machines have not reached. Not at production, which is now abundant. Not at combination, which AI performs with a breadth no human can match. At the perception of structural absence — the felt recognition that the formal landscape, however dense, does not contain what is needed. That recognition requires the thing Kubler assumed without argument and that the age of AI has made explicit: a mind that inhabits the landscape as a condition, not as data. A mind with stakes. A mind that knows what is missing because the missing thing is something it needs.
My children will inherit a landscape denser with artifacts than any generation has ever navigated. The question I carry for them — the question this book has helped me formulate more precisely than I could have without Kubler — is not whether they will be able to produce. They will produce more than I ever could, with tools more powerful than anything I have used. The question is whether they will learn to see — to perceive, amid the infinite abundance, the absences that matter. To read the sequences deeply enough to know where they end. To feel the structural insufficiency that signals where a new sequence needs to open.
That capacity is built by entrance — by the slow, friction-rich process of moving through formal sequences with enough patience to internalize their structure. It cannot be downloaded. It cannot be prompted. It can only be earned, the way Kubler earned his understanding of Mesoamerican art: by spending decades inside the sequences, following the dead ends, discovering the side channels, developing the felt sense of where the landscape leads and where it fails.
I do not know the shape of the time we are entering. Kubler would have said the same, and meant it as a statement of intellectual honesty rather than defeat. The shape will be determined by the prime objects that have not yet been produced — by the artifacts that will open sequences we cannot currently imagine, in response to problems we have not yet recognized.
What I know is that the capacity to produce those artifacts is the capacity this moment demands. And that capacity is ours — not because machines cannot have it, but because, as of now, they have not demonstrated it. The window may close. It may not. But while it is open, the most important thing a human being can do is enter the sequences deeply enough to see where they end, and begin the work of building what comes next.
The shape of time continues. The question of where we fall in it has never been more consequential, or more precisely posed.
In 1962, George Kubler proposed that every made thing occupies a position in a formal sequence — a chain of linked solutions to a persistent problem. Some things fill positions already implied. A rare few open entirely new sequences. That distinction, between filling and opening, is now the most consequential question in technology.
AI fills sequences with breathtaking speed. It generates a thousand competent variations of anything that already exists. But the act that changes the landscape — perceiving that the existing sequences are insufficient, that a new class of solutions is needed — requires something the machines have not demonstrated: the felt recognition of structural absence, built through deep immersion in the problems a sequence addresses.
This book applies Kubler's framework to the age of generative AI, examining what happens when the cost of producing replicas approaches zero while the capacity to open new sequences remains as rare as ever. The haystack has become infinite. The needle has not changed. The question is whether we can still see it.
— George Kubler, The Shape of Time

A reading-companion catalog of the 17 Orange Pill Wiki entries linked from George Kubler — On AI: the people, ideas, works, and events this book uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →