By Edo Segal
The crash is the part nobody wants to talk about.
Every conversation I have about AI goes the same way. The exhilaration comes first — the productivity numbers, the capability expansion, the things a single person can build now that used to require a team of twenty. Then someone mentions the trillion dollars that vanished from software companies in eight weeks, and the room gets quiet. Not because the number is unfamiliar. Because nobody has a framework for what it means.
I didn't either. I had the vertigo — the falling and flying I describe throughout The Orange Pill. I had the pattern recognition of a builder who has watched technology cycles for thirty years. I had the gut sense that we were inside something historic. What I did not have was the structural map.
Carlota Perez drew that map.
She spent four decades studying what happens when a revolutionary technology collides with the financial system and the institutions that are supposed to govern both. Not one collision. Five of them, stretching from Arkwright's cotton mill in 1771 to the microprocessor in 1971. And the pattern she extracted is uncomfortable in the most useful way: the frenzy is not a bug. It is the mechanism by which the infrastructure gets built. The crash is not a failure. It is the turning point that opens the only window in which the institutions that determine whether a golden age follows can be constructed.
That reframing changed something fundamental in how I think about this moment. The SaaSpocalypse, the Death Cross, the trillion-dollar repricing — these are not signs that AI has gone wrong. They are signs that we are inside the turning point. The question is not whether the correction will come. It is what gets built while the window is open.
Perez also forced me to confront the most dangerous comfort available to a builder at the frontier: the assumption that the technology will sort it out. It will not. The technology installs the infrastructure. The institutions determine who benefits. Every golden age in history was engineered — through factory legislation, through universal education, through social compacts that did not build themselves. The canals survived the canal mania. The railways survived the railway mania. The internet survived the dot-com crash. But the golden ages that followed were built by people, not by the infrastructure.
This book applies Perez's framework to the AI moment with the rigor it demands. It asks where we actually are in the cycle, what the turning point requires, and whether the institutions will be built in time. These are not academic questions. They are the questions that determine what your children inherit.
The technology is there to be shaped. Perez can show you the shape of the choice.
— Edo Segal × Opus 4.6
Carlota Perez (b. 1939) is a Venezuelan-British economist whose work on the recurring patterns of technological revolutions has become foundational to the study of innovation, finance, and long-wave economic change. Born in Caracas, she trained as an industrial engineer and began her academic career studying technology policy in Latin America before joining the Science Policy Research Unit (SPRU) at the University of Sussex, where she developed the framework for which she is best known. Her landmark book Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (2002) identified five great surges of technological development since the late eighteenth century, each following a two-phase structure — an installation period driven by speculative financial capital, followed by a deployment period in which institutional reforms redistribute the technology's gains broadly. Perez holds honorary professorships at University College London and the Tallinn University of Technology, among other institutions. Her work has deeply influenced technology investors, policymakers, and economists seeking to understand why speculative booms and crashes are structurally necessary features of capitalist innovation, and why the institutional response during the turning point between installation and deployment determines whether a society achieves a golden age or a prolonged period of inequality and stagnation.
In 1771, Richard Arkwright opened a cotton mill in Cromford, Derbyshire, powered by a waterwheel on the River Derwent. The mill was not the first to mechanize spinning — James Hargreaves had patented the spinning jenny six years earlier — but Arkwright's innovation was organizational rather than purely mechanical. He combined water power, continuous operation, and a disciplined labor force into a system that could produce cotton thread at a quality and cost that cottage industry could never match. Within a decade, his system had been copied across the English Midlands. Within a generation, it had restructured the British economy. Within a century, it had reshaped civilization.
That mill is the first data point in a pattern that has repeated five times in two hundred and fifty years — a pattern so structurally consistent across radically different technologies, geographies, and political contexts that its recurrence demands explanation rather than dismissal. The economist Carlota Perez has spent four decades extracting this pattern from the historical record, testing it, refining it, and defending it against critics who insist that each revolution is unique and that history does not rhyme. Her framework, articulated most fully in Technological Revolutions and Financial Capital (2002), identifies five great surges of development since the late eighteenth century, each following the same sequence: a new technology irrupts into the economy, financial capital floods toward it in speculative frenzy, the frenzy produces a crisis, institutions adapt — or fail to adapt — and the resolution determines whether a golden age of broadly shared prosperity follows or whether a prolonged period of instability and inequality takes its place.
The five surges, in Perez's cataloguing: the age of the Industrial Revolution and its factory system (irrupting around 1771); the age of steam and railways (around 1829); the age of steel, electricity, and heavy engineering (around 1875); the age of oil, the automobile, and mass production (around 1908); and the age of information and telecommunications (around 1971, with the Intel microprocessor). Each surge followed the same two-phase structure. An installation phase, driven by financial capital, during which the new technology's infrastructure is built, early adopters experience dramatic capability expansion, speculative excess inflates asset prices, and the social costs of creative destruction fall disproportionately on the workers and communities tied to the previous paradigm. Then a turning point — typically a financial crisis or a period of institutional reckoning — followed by a deployment phase, during which production capital displaces financial capital, institutional reforms redistribute the gains, and the technology's benefits reach the broader population.
The installation phase is where fortunes are made and myths are born. The deployment phase is where civilizations are built. The turning point is where the outcome is decided.
Now consider where artificial intelligence sits in this pattern, because the answer is more contested — and more consequential — than the technology press acknowledges.
The instinct of most observers in 2026 is to treat AI as the sixth great surge: a new technological revolution following the same sequence, with its own irruption (the large language model breakthrough of 2022-2025), its own frenzy (the hundreds of billions flooding into AI companies), its own coming crash, and its own eventual golden age. This framing is clean. It is intuitive. It maps the breathtaking speed of AI adoption — ChatGPT reaching one hundred million users in two months, Claude Code crossing $2.5 billion in annualized revenue — onto the familiar pattern of installation-phase dynamics. And it offers the comfort of historical precedent: the frenzy will end, the crash will come, institutions will adapt, and a golden age will follow, because that is what the pattern does.
Perez herself rejects this framing. In her most direct published statement on AI, a March 2024 essay titled "What Is AI's Place in History?", she argued that artificial intelligence "must be understood as belonging to a larger, more mature technological revolution that began a half-century ago." AI is not the sixth revolution. It is a powerful development within the fifth — the information and communications technology revolution that irrupted with the microprocessor in the early 1970s and whose deployment phase has yet to fully arrive. As early as 1986, in a paper on the trajectory of new technologies, Perez described how computers follow paths "towards increasing processing power" and other directions that "widen into the future with the target of 'artificial intelligence.'" She saw AI coming — not as a separate revolution but as the natural destination of the ICT paradigm, the place where five decades of accelerating computational power, networking, and software development were always heading.
This distinction matters enormously, because the two framings produce radically different diagnoses of the present moment and radically different prescriptions for what must be built next.
If AI is a new revolution, then the current frenzy is an installation phase in its early stages, the crash is years away, and the institutional response can afford to be deliberate. The turning point lies in the future. There is time.
If AI is part of the ICT revolution — its most powerful expression, arriving during what should have been the deployment phase — then the situation is far more urgent and far more strange. The ICT revolution's installation phase peaked with the dot-com bubble. The turning point came with the crash of 2000 and the financial crisis of 2008. The deployment phase should have followed: the period when institutional reforms redirect the gains of the revolution from the speculative few to the productive many, when the technology's benefits are broadly distributed through reformed education systems, updated labor protections, modernized social insurance, and governance structures adequate to the new paradigm.
That deployment phase never fully arrived.
The gains of the ICT revolution remained concentrated among platform companies and their shareholders. The institutional innovations that every previous golden age required — the factory legislation, the universal education, the social compact between capital and labor — were delayed, diluted, or blocked by political dynamics that Perez had been warning about since the early 2000s. In a 2019 tweet captured by the technology analyst Ben Thompson, Perez wrote: "I see the present as the 1930s, the turning point of the IT surge. We have had 2 frenzies and we have not yet had a golden age. The power of AI, IoT, 3D, robots, blockchain is there to be shaped."
There to be shaped. Not celebrated, not feared, not accepted as inevitable — shaped. By institutions, by political will, by the deliberate construction of frameworks that redirect the technology's power from concentration toward distribution. The golden age is not a gift the technology bestows. It is a structure that societies build, using the technology as raw material, during the brief window of institutional opportunity that the turning point opens.
This reading places the AI moment in a more precarious and more consequential position than the "sixth revolution" narrative suggests. AI has not arrived at the beginning of a new cycle, with decades of installation ahead and a turning point safely in the distance. It has arrived into the institutional vacuum left by the incomplete deployment of the previous revolution — a society that went through the speculative frenzy of the dot-com era, endured the financial crises that should have been the turning point, and then failed to build the deployment-phase institutions that every previous golden age required. AI is the most powerful technology in the ICT revolution's portfolio, arriving at the worst possible institutional moment: too late for the installation phase, which has already produced its speculative excess and its crises, and too early for the deployment phase, whose institutional foundations have not been laid.
Edo Segal, in The Orange Pill, arrived at a structurally equivalent observation from inside the revolution itself. Writing about the gap between AI capability and institutional readiness, he concluded that "the dams are not adequate" — that the institutional structures needed to channel AI's power toward broadly shared benefit did not exist and were not being built at the speed the moment demanded. His metaphor was ecological: the river of intelligence flowing faster than the structures built to direct it. Perez's framework provides the historical scaffolding that explains why the dams are not adequate: because the previous revolution's turning point was never fully resolved, and the institutional infrastructure that should have been built in the 2010s was never constructed.
The consequences of this institutional deficit are visible in every dimension of the AI transition. Educational institutions designed for the information age are training students in skills that AI commoditizes faster than the students can acquire them. Labor market protections designed for the industrial employment relationship do not accommodate the AI-augmented work that is rapidly becoming the norm. Social insurance systems designed for cyclical layoffs cannot address the structural displacement of knowledge workers whose professional identities are being reconfigured. Governance frameworks designed for the regulation of physical products and localized services cannot keep pace with a technology that evolves faster than legislative processes and operates across every national boundary simultaneously.
The Perez framework does not resolve the debate about whether AI constitutes a sixth revolution or the culminating technology of the fifth. What it does — and what makes it indispensable for understanding the current moment — is reveal the structural dynamics that operate regardless of which framing proves correct. Whether AI is a new revolution or the most powerful expression of an existing one, the same institutional requirements apply. The gains must be distributed. The workers must be supported. The educational systems must be restructured. The governance frameworks must be built. The turning point must be navigated with institutional imagination and political will commensurate to the scale of the technology.
The pattern that has repeated five times is not a prophecy. It does not guarantee that a golden age will follow the current frenzy, any more than it guaranteed that the Great Depression would end in the post-war prosperity of the 1950s. What it guarantees is the structure: that the frenzy will produce a crisis, that the crisis will open a window of institutional opportunity, and that the quality of the institutional response during that window will determine the trajectory of the revolution for decades. The pattern is a map, not a destination. It shows where the society is — in the gap between installation and deployment, at the moment of maximum institutional consequence — and it shows what has historically determined whether that gap produces a golden age or a lost generation.
Paul Kedrosky, in an incisive 2025 essay, challenged the application of Perez's framework to AI on the grounds that the model is "descriptive, not prescriptive" — that it becomes "a kind of secular theodicy" in which every crash is retroactively justified as necessary and every loss is sanctified as the price of progress. The critique has force. There is a version of the Perez framework that functions as a lullaby: the pattern says golden ages follow turning points, so relax, the institutions will be built, history will take care of itself. That version is dangerous precisely because it is comforting. The historical record shows that golden ages are not automatic. They are constructed, through political struggle and institutional imagination, by people who understand what the moment demands and choose to act accordingly. The Victorians built factory legislation. The New Dealers built the welfare state. Neither was inevitable. Both were the product of deliberate choices made during the turning point by people who understood that the technology alone would not deliver the broadly shared benefits that social stability requires.
The question, then, is not whether the pattern will repeat. The question is whether the people living through the turning point — the builders, the educators, the regulators, the citizens — will build the institutions that the deployment phase demands, at a speed commensurate with the technology that is already reshaping every dimension of economic and social life. The pattern says they can. The pattern cannot tell them whether they will.
That choice is being made now. It is being made in every boardroom where a leader decides whether to convert AI productivity gains into headcount reduction or capability expansion. In every classroom where a teacher decides whether to ban AI tools or restructure her pedagogy around the judgment and questioning that AI cannot perform. In every legislature where a policymaker decides whether to regulate AI to protect incumbents or to build the demand-side institutions that citizens need to navigate the transition. In every household where a parent decides what to tell a child who has just watched a machine do her homework better than she can.
The pattern hides in plain sight because it operates at a scale most people never see from inside the fishbowl of their daily lives. Perez sees it because she has spent four decades looking at two and a half centuries of evidence from a vantage point that encompasses the full arc of industrial capitalism. What she sees is not a prediction but a structural possibility — the possibility that the AI moment, for all its disruption, for all its creative destruction, for all the genuine suffering it is producing during the transition, could be the foundation of the most broadly shared golden age in the history of technological revolution. If the institutions are built. If the political will materializes. If the turning point is navigated with the same institutional imagination that produced the Victorian reforms, the New Deal, and the post-war social compact.
If. The smallest and most consequential word in the English language. The word on which golden ages turn.
Every speculative frenzy in the history of capitalism has been simultaneously irrational and indispensable. This is the paradox that Perez's framework resolves, and it is the paradox that most observers of the AI investment surge — both the enthusiasts and the skeptics — fail to hold in both hands at once.
The canal mania of the 1790s poured capital into waterway schemes that could never have generated returns sufficient to justify their cost. Investors bid up shares in canals that would never be completed, connecting towns that did not need connecting, serving traffic that did not exist. The mania ended in the predictable way: prices collapsed, investors were ruined, and the canal companies that had been the darlings of the London stock exchange became cautionary tales. And yet — the canal network survived. The speculative capital that funded the mania built an infrastructure of inland waterways that the British economy used for decades, an infrastructure that would not have been built by patient, rational investment because patient, rational investors would never have funded it at the speed or the scale that the mania produced. The frenzy was irrational. The infrastructure was real. Both were true at the same time.
The railway mania of the 1840s followed the same script with different scenery. George Hudson, the Railway King, built an empire on leverage, political influence, and the willingness to promise returns that physics and economics could not deliver. When the bubble burst in 1847, Hudson was ruined, thousands of investors lost their savings, and the parliamentary investigations that followed revealed a pattern of fraud, self-dealing, and speculative excess that made the canal mania look quaint. And yet — the railway network survived. Britain emerged from the mania with thousands of miles of track, hundreds of stations, and a transportation infrastructure that would underpin the Victorian golden age for the next three decades. The speculators were destroyed. The infrastructure endured.
The dot-com bubble followed the same structural logic at internet speed. Pets.com, Webvan, eToys, and hundreds of other companies burned through billions of dollars building businesses that could never have justified their valuations. The crash of 2000 destroyed $5 trillion in market value and produced a generation of chastened investors who swore they would never again confuse a business plan with a business. And yet — the fiber optic cables survived. The server farms survived. The software platforms survived. The internet infrastructure that the frenzy funded became the foundation of the information economy's partial deployment: Google, Amazon, Facebook, and the platform ecosystem that, for all its distributional failures, transformed the material conditions of life for billions of people.
Perez's framework explains why the frenzy is structurally necessary even as it is economically destructive. Financial capital during the installation phase serves a function that production capital cannot: it funds infrastructure at a speed and scale that rational investment analysis would never justify. The frenzy over-invests, and the over-investment is wasteful in its specifics — many companies fail, much capital is lost — but the infrastructure that survives the crash becomes the foundation on which the deployment phase builds. The frenzy is the mechanism by which capitalism installs the infrastructure of a new technological paradigm ahead of the institutional capacity to use it. The waste is the price of the speed. And the speed matters, because the infrastructure must exist before the deployment-phase institutions can redirect it toward broad social use.
Now consider the AI frenzy through this lens, and the picture clarifies considerably.
The hundreds of billions of dollars flowing into AI companies in 2025 and 2026 are building the infrastructure of the new paradigm: the large language models, the training clusters, the inference infrastructure, the organizational knowledge about how to integrate AI into human workflows, the developer tools, the fine-tuning techniques, the safety research, and the institutional understanding of what these systems can and cannot do. Much of this investment will produce losses. Many of the companies being funded will fail. The valuations that characterize the current moment — AI startups valued at multiples that make the dot-com era look conservative — will not survive the correction that every installation phase produces. Paul Kedrosky, applying Perez's own framework in a 2025 analysis, described the current moment as "a late-stage installation surge" in which "financial capital overshoots, infrastructure is bought and installed at great speed and well ahead of actual demand, and most of it goes bust."
But the infrastructure will survive the bust, just as the canals survived the canal mania, the railways survived the railway mania, and the internet infrastructure survived the dot-com crash. The large language models being trained today, the techniques being developed, the organizational practices being refined — these will constitute the installed base on which the deployment phase builds. The question is not whether the frenzy will end in a correction. The historical pattern makes that virtually certain. The question is what will remain after the correction, and whether the institutional infrastructure exists to redirect what remains toward broadly shared benefit.
The investment firm Baillie Gifford, applying the Perez framework in a November 2025 analysis, observed that "the surplus from technological shifts often leaks from infrastructure builders, which become commoditised, to users and society at large." This is the mechanism of the turning point: the companies that built the infrastructure during the installation phase see their margins compressed as the technology matures and competition intensifies, and the value migrates from the builders of the infrastructure to the users of it. The canal companies were eventually regulated into utilities. The railway companies saw their monopoly profits eroded by competition and government intervention. The platform companies of the information age have, so far, resisted this migration of value — which is precisely why the deployment phase of the ICT revolution has been incomplete. The value remains concentrated among the builders. The institutional mechanisms that should have redirected it to the users have not been constructed.
The AI frenzy exhibits features that both conform to and deviate from the historical pattern in ways that the Perez framework illuminates. The conforming features are obvious: the speculative excess, the detachment of valuations from current revenue, the concentration of investment among a small number of companies and geographies, the breathless optimism of early adopters, and the mounting anxiety of workers and communities who can see the creative destruction arriving but cannot yet see what will grow in the space it clears.
The deviations are more interesting, and more consequential.
The first deviation is speed. The AI installation phase is compressing the dynamics that previous revolutions spread across decades into a period of years or months. Segal's account of the Trivandpur training — twenty engineers achieving twenty-fold productivity multipliers in a single week — illustrates the compression vividly. The capability expansion that the factory system delivered over decades, the railway over years, and the internet over months, AI is delivering over days. This compression means that the turning point, when it arrives, will arrive faster and with less warning than any previous turning point. The institutions that need to adapt will have less time to adapt. The workers who need to retrain will have less time to retrain. The political systems that need to build deployment-phase infrastructure will have less time to build it.
The second deviation is the nature of the infrastructure being built. Previous frenzies built physical infrastructure — canals, railways, electrical grids, highway systems, fiber optic networks — that was expensive to build, slow to depreciate, and difficult to repurpose. The AI frenzy is building a different kind of infrastructure: computational and intellectual rather than physical. Large language models, training methodologies, fine-tuning techniques, prompt engineering practices, safety frameworks, and the organizational knowledge of how to integrate AI into human workflows. This infrastructure is cheaper to replicate, faster to evolve, and easier to distribute than any previous revolution's installed base. It is also more fragile, in the sense that it depends on continued investment in computation and energy in a way that a railway track, once laid, does not.
The third deviation is the most significant, and it connects directly to the debate about whether AI constitutes a new revolution or a culminating technology within the ICT paradigm. If Perez is correct that AI belongs to the ICT revolution, then the current frenzy is not an installation-phase frenzy in the classical sense. It is something stranger: a second frenzy within the same revolution, arriving after the turning point that should have transitioned the ICT revolution from installation to deployment. Perez herself acknowledged this strangeness in her 2019 statement: "We have had 2 frenzies and we have not yet had a golden age." The dot-com bubble was the first frenzy. The AI investment surge is the second. And the golden age — the deployment phase, the period of broad institutional reform and distributed prosperity — remains unrealized.
This reading produces a diagnosis of the current moment that is more alarming than either the "sixth revolution" optimists or the "bubble will burst" skeptics appreciate. The alarming feature is not the frenzy itself — frenzies build infrastructure, and the infrastructure is necessary. The alarming feature is that the institutional capacity to navigate the turning point has been weakened by two decades of failed deployment. The progressive institutional innovations that should have been built after the dot-com crash — the educational reforms, the labor market restructuring, the social insurance modernization, the governance frameworks — were not built. The political energy that the turning point should have channeled toward institutional construction was dissipated by the 2008 financial crisis, the populist backlash that followed, and the polarization that made progressive institutional innovation politically impossible.
Segal captured the consequence of this institutional deficit in The Orange Pill when he described a world where "AI governance frameworks arrive eighteen months after the tools they were meant to govern had already reshaped the workforce." The gap between the speed of capability and the speed of institutional response is not closing. It is widening. And the people in the gap — the workers adapting in real time without guidance, the students navigating an educational system designed for the previous paradigm, the parents who cannot answer their children's questions about what the future holds — are bearing the cost of the transition without the structures that the deployment phase should have provided.
The frenzy builds infrastructure. That is its function, and it is performing that function with characteristic vigor. The hundreds of billions flowing into AI are installing the computational and intellectual infrastructure of the new paradigm at a speed that patient investment could never have achieved. Many of the companies being funded will fail. Much of the capital being invested will be lost. The correction, when it comes, will be painful for the investors who funded the excess and the workers who depend on the companies that collapse.
But the infrastructure will remain. The models, the techniques, the organizational knowledge, the developer tools — these will survive the correction and become the installed base on which whatever comes next is built. The question that the turning point must answer is not whether the infrastructure was worth building. It was. The question is who will benefit from it, and what institutional structures will determine the distribution of those benefits. The historical pattern says that the distribution is not determined by the technology. It is determined by the institutions built around the technology. And the institutions, as of this writing, are not adequate to the task.
The Perez framework offers one more insight about the frenzy that is worth holding onto as the correction approaches. The frenzy is not just a financial event. It is a cultural event — a period during which the society's relationship to the new technology is forged in the heat of speculative excitement and existential anxiety. The discourse that Segal described in The Orange Pill — the triumphalists, the elegists, the silent middle — is the cultural expression of the installation phase's emotional dynamics. The positions that harden during the frenzy, the assumptions that crystallize, the narratives that take hold — these shape the cultural context within which the turning point's institutional debates occur. A society that emerges from the frenzy believing that AI is purely a tool for productivity gains will build different institutions than one that emerges understanding that AI is a paradigm shift requiring fundamental restructuring of education, labor, and governance.
The frenzy will end. The infrastructure will remain. The institutional question — who benefits, and through what structures — will be answered during the turning point that follows. The quality of that answer depends on whether the society arrives at the turning point with the institutional imagination and political will to build what every previous golden age required: the deployment-phase infrastructure that transforms technological capability from a source of concentrated wealth into a foundation for broadly shared prosperity.
The canals are in the ground. The question is what flows through them.
On February 23, 2026, Anthropic published a blog post about Claude's ability to modernize COBOL — the ancient programming language that still runs the core systems of major banks, insurance companies, and government agencies. IBM, the company most associated with the COBOL ecosystem, suffered its largest single-day stock decline in more than a quarter century. The market had drawn a conclusion that IBM's executives had been avoiding: the moat around their legacy business had been drained overnight by a tool that could read, understand, and rewrite code that human programmers found nearly impenetrable.
That single day was a microcosm of a larger repricing that the technology press had taken to calling the SaaSpocalypse. In the first eight weeks of 2026, Workday fell thirty-five percent. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. Autodesk twenty-one. Figma nineteen. A trillion dollars of market capitalization vanished from software companies in less than two months. The market had identified a structural shift and was repricing accordingly, with the speed and brutality that financial markets bring to moments of paradigm transition.
Segal, in The Orange Pill, named the phenomenon the Death Cross — the moment when the rising curve of AI market value intersects the falling curve of SaaS valuations. The metaphor is borrowed from technical analysis, where a death cross occurs when a short-term moving average falls below a long-term one, signaling that momentum has shifted from bullish to bearish. Applied to the software industry, the signal was vivid: the thing that was rising was now falling, and the thing that was marginal was now dominant. The lines had crossed, and the old order was on the wrong side.
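For readers unfamiliar with the charting term, the mechanics can be sketched in a few lines of code. This is purely illustrative: the function names and the synthetic price series below are my own, not Segal's, and practitioners typically use 50-day and 200-day windows on real market data rather than the short windows used here.

```python
# Illustrative sketch of a "death cross" in the technical-analysis sense:
# the day a short-term moving average falls below a long-term one.
# All numbers are synthetic; nothing here is real market data.

def moving_average(prices, window):
    """Trailing simple moving average; None until enough data exists."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def death_cross_day(prices, short=5, long=20):
    """Return the first index where the short MA crosses below the long MA."""
    s, l = moving_average(prices, short), moving_average(prices, long)
    for i in range(1, len(prices)):
        if None in (s[i - 1], l[i - 1], s[i], l[i]):
            continue  # not enough history yet for both averages
        if s[i - 1] >= l[i - 1] and s[i] < l[i]:
            return i
    return None

# Synthetic series: a long steady rise followed by a sharp decline.
prices = [100 + d for d in range(30)] + [130 - 3 * d for d in range(30)]
print(death_cross_day(prices))
```

The point the sketch makes is the one Segal's metaphor relies on: the cross is a lagging signal. The averages smooth out noise, so by the time the short line drops below the long one, the underlying decline is already well underway — which is exactly how the SaaS repricing felt to the companies living through it.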
Perez's framework explains the Death Cross as a specific instance of a dynamic that has accompanied every turning point in the history of technological revolution: the migration of value from the installation-phase infrastructure to the deployment-phase ecosystem. During the installation phase, value resides in the technology itself — in the capacity to build the new thing. The companies that capture this value are the ones that own the technology, the platforms, the tools. During the deployment phase, value migrates to the institutional layer above the technology — to the judgment about what the technology should do, for whom, and in what context. The companies that capture deployment-phase value are not necessarily the ones that built the technology. They are the ones that built the ecosystems, the institutional relationships, and the domain expertise that turn raw technological capability into useful services.
The canal companies of the 1790s owned the installation-phase infrastructure: the waterways, the locks, the warehouses. Their value was in the physical infrastructure itself. When the railway arrived and made the canals less relevant as transportation corridors, the canal companies that survived were the ones that had built something beyond the infrastructure — commercial relationships, warehousing expertise, knowledge of local markets. The ones that had only built canals were destroyed. The railway companies experienced the same dynamic a generation later: the ones that survived the turning point were the ones whose value extended beyond track and rolling stock into integrated transportation systems with scheduling expertise, signaling networks, and commercial relationships that constituted genuine institutional infrastructure.
The SaaS companies facing the Death Cross in 2026 were experiencing the same migration of value at digital speed. Segal's analysis was precise: nobody uses Salesforce for the software. They use Salesforce for the data layer that twenty years of enterprise deployment have built, for the integrations that connect their sales pipeline to their marketing automation, for the workflow assumptions baked into the muscle memory of every sales organization trained on the platform, for the compliance certifications and audit trails that took a decade to accumulate. The code — the thing AI could reproduce in an afternoon — was the installation-phase infrastructure. The ecosystem — the data, the integrations, the institutional trust, the accumulated organizational knowledge — was the deployment-phase value.
This distinction between code-as-value and ecosystem-as-value maps directly onto Perez's distinction between financial capital and production capital. Financial capital values code because code is the scarce, proprietary asset that generates installation-phase returns. Code can be owned, licensed, protected by patents and trade secrets, and sold at margins that reflect its scarcity. When AI makes code abundant — when any competent person can describe what they want and receive working software in hours — financial capital's valuation model breaks. The scarcity that justified the valuation has been destroyed. The Death Cross is the market's recognition that the scarcity has moved.
Production capital values ecosystems because ecosystems are the durable, institutional assets that generate deployment-phase returns. Ecosystems cannot be reproduced in an afternoon. They are built over years through the patient accumulation of customer relationships, data, integrations, domain expertise, regulatory compliance, and the organizational knowledge that turns raw capability into reliable service. When the code becomes abundant, the ecosystem becomes the scarce and therefore valuable thing. The companies that own ecosystems are the deployment-phase survivors. The companies that only own code are the installation-phase casualties.
The financial repricing of the Death Cross is therefore not a sign that the software industry is dying. It is a sign that the software industry is being repriced according to a new theory of value — a theory in which code is a commodity and ecosystems are assets. The repricing is painful for the companies and workers who bet on the old theory. It is also structurally necessary, because it redirects capital from the installation-phase infrastructure (code) to the deployment-phase infrastructure (ecosystems, institutional knowledge, domain expertise, judgment) that the next period requires.
But the repricing is happening with a speed that has no historical precedent, and the speed itself creates risks that the Perez framework illuminates. Previous turning-point repricings unfolded over years or decades, giving institutions, workers, and communities time to adapt. The canal companies' value eroded over decades as railways displaced waterway transportation. The railway companies' monopoly profits were compressed over years as regulation and competition took hold. The dot-com repricing, for all its drama, played out over two and a half years, from the market's peak in March 2000 to its trough in October 2002.
The AI-driven repricing of SaaS happened in weeks. A trillion dollars of value vanished in eight weeks. This speed is itself a consequence of the technology — AI accelerates not just the installation of the new paradigm but the destruction of the old one. And the speed has implications for the turning point that the historical pattern alone cannot fully address. If the repricing happens faster than institutions can adapt, then the social costs of creative destruction — the displaced workers, the disrupted communities, the eroded professional identities — concentrate in a shorter period, producing a sharper and more politically destabilizing shock.
Kedrosky's warning about the scale of AI capital expenditure is relevant here. In conversation with the economist Paul Krugman, Kedrosky observed that "the scale of the spending is now on a sovereign level" — that the capital flowing into AI infrastructure exceeds what most national governments spend on any single category of public investment. When spending reaches sovereign scale, the consequences of the correction also reach sovereign scale. The turning point, when it arrives, will not be a stock market correction that affects primarily the investors who funded the excess. It will be an economic event that affects the societies in which the infrastructure was installed, the workers whose labor it displaced, and the institutions that must respond to the displacement.
Perez's framework suggests that the migration of value from code to ecosystem is not the end of the story but the beginning of the deployment-phase restructuring. The SaaS companies that survive the Death Cross will be the ones that adapt their business models from selling code to selling judgment — from licensing software to providing the institutional layer that AI-augmented organizations need but cannot build for themselves. The deployment-phase SaaS company is not a software vendor. It is an institutional infrastructure provider: a company whose value lies in the data, the domain expertise, the compliance frameworks, the integration networks, and the organizational knowledge that turn raw AI capability into reliable, governed, trustworthy service.
This transformation has implications beyond the software industry. Every knowledge-work sector is experiencing a version of the Death Cross — a repricing of value from the capacity to execute to the capacity to direct, evaluate, and govern execution. The law firm whose value was in the capacity to draft briefs is being repriced downward; the law firm whose value is in the judgment about which briefs to draft, and the client relationships that make that judgment valuable, is being repriced upward. The consulting firm whose value was in the capacity to build analytical models is being repriced downward; the firm whose value is in the domain expertise that determines which models are worth building is being repriced upward. The pattern is consistent across sectors: execution value is migrating to judgment value, code value is migrating to ecosystem value, and the companies, workers, and institutions that positioned themselves on the wrong side of the migration are experiencing the installation-phase creative destruction that every turning point produces.
The policy implications of the Death Cross are immediate and practical. If the repricing is happening at sovereign speed and sovereign scale, then the institutional response must also operate at sovereign speed and sovereign scale. The retraining programs that displaced knowledge workers need cannot be developed over five-year planning horizons. The educational reforms that the next generation requires cannot wait for the completion of multi-decade curriculum review processes. The governance frameworks that AI-augmented industries require cannot be built through the normal pace of legislative deliberation.
Segal's injunction in The Orange Pill — stop doing 2026 planning based on pre-December 2025 assumptions, throw the plan away, start from the world that actually exists — applies with equal force to the institutional response. The world that actually exists is one in which a trillion dollars of software industry value has been repriced in eight weeks, in which the migration of value from code to ecosystem is accelerating, and in which the institutional infrastructure that would help workers, students, and communities navigate this migration does not yet exist at scale.
The Death Cross is a financial event. It is also a structural signal — an early expression of the turning point that Perez's framework predicts and that the historical pattern confirms. The lines have crossed. The old theory of value is on the wrong side. And the institutional response that will determine whether the turning point produces a golden age or a lost generation is, as of this writing, operating at a speed and a scale that are not commensurate with the magnitude of the repricing.
The canals were repriced when the railways arrived. The railways were repriced when the automobile arrived. The question was never whether the repricing would happen. The question was what institutions would be built to manage the transition, and whether the people who bore the cost of the repricing would be supported through it or abandoned to it. That question is being answered now, in real time, at a speed that the historical pattern did not prepare us for.
The turning point between installation and deployment is not a single event. It is a period — sometimes lasting several years, sometimes compressed into months — during which the speculative excess of the installation phase collides with the institutional deficit of the society, and the collision produces a crisis that opens a window of institutional opportunity. What gets built during that window determines the trajectory of the revolution for decades. The Victorian factory legislation, the New Deal financial regulation, the post-war social compact — each was built during a turning point's window of institutional opportunity, and each determined whether the revolution it accompanied produced a golden age or a prolonged period of instability.
The window opens because the crisis concentrates political attention. In normal times, institutional reform is slow, incremental, and politically difficult. The constituencies for reform are diffuse. The constituencies against reform — the incumbents who benefit from the existing institutional arrangement — are concentrated, organized, and politically powerful. The crisis changes the calculus. The suffering produced by the gap between technological capability and institutional readiness becomes visible, politically salient, and impossible to ignore. The constituencies for reform coalesce. The political cost of inaction exceeds the political cost of reform. And the window opens — briefly, precariously, subject to closing at any moment.
The current turning point has features that distinguish it from every previous one in the Perez cycle, even as it follows the same structural logic. Three features in particular deserve sustained attention, because they determine what the turning point demands and what the window of institutional opportunity must produce.
The first feature is the speed of the technology. Previous turning points could be navigated at the pace of politics because the underlying technology evolved at a pace that, while faster than institutions, still allowed institutional adaptation to approximate technological change within a generation. The factory system's capabilities expanded over decades. The railway network's reach expanded over years. Even the information technology revolution, which felt blindingly fast to those living through it, evolved on a timeline measured in years and decades rather than months and weeks.
AI operates on a different clock. The capability expansion that Segal documented in The Orange Pill — the twenty-fold productivity multiplier in a single week, the product built from concept to functioning prototype in thirty days — illustrates a technology that evolves faster than any institutional process designed to govern it. Between the time a regulatory framework is proposed and the time it is enacted, the technology it was designed to regulate has already evolved beyond the framework's assumptions. Between the time an educational curriculum is redesigned and the time the first cohort of students graduates, the skills the curriculum teaches have already been commoditized. The gap between the speed of technological capability and the speed of institutional response is not closing. It is widening with each iteration of the technology.
This speed asymmetry means that the institutional innovations built during the current turning point cannot be designed the way previous turning-point institutions were designed — as static frameworks that regulate a stable technology. They must be designed as adaptive frameworks that evolve as the technology evolves. The factory legislation of the Victorian era could specify maximum working hours and minimum age requirements because the factory system's demands on workers were relatively stable and predictable. AI-age institutions cannot specify the equivalent stable requirements because the technology's demands on workers, students, and citizens change faster than any static framework can accommodate.
The Perez framework does not prescribe specific institutional forms. It identifies the functions that deployment-phase institutions must perform: redirecting gains from the few to the many, protecting workers during the transition, preparing citizens for the new paradigm, and ensuring democratic governance of the technology's trajectory. How those functions are performed — through what specific institutional mechanisms — must be invented for each revolution, because each revolution's technology creates different challenges and different opportunities.
The second distinguishing feature is the nature of the work being disrupted. Previous revolutions primarily disrupted physical labor in their early phases. The factory system displaced hand production. The railway displaced horse-drawn transportation. The automobile displaced the horse-drawn carriage and the industries that supported it. The disruption of physical labor was socially devastating but did not threaten the fundamental categories through which people understood their professional identities. A hand-loom weaver displaced by a power loom lost his livelihood, but the concept of "weaving" as a category of productive activity persisted. The activity was industrialized, not eliminated.
AI disrupts knowledge work, and it does so in a way that threatens the categories themselves. When a senior software architect watches AI perform the implementation work that consumed eighty percent of his career, the challenge is not merely economic. It is ontological. The category of "software engineering" — the set of activities, skills, and professional norms that defined the identity — is being restructured in real time. What remains after AI absorbs the implementation layer is something that does not yet have a stable name or a stable institutional home: the judgment, the architectural intuition, the capacity to decide what should be built and for whom. This capacity is valuable — arguably more valuable than the implementation it replaces — but it is not recognized by existing credentialing systems, not accommodated by existing employment categories, and not supported by existing professional development infrastructure.
Segal described this identity disruption with the empathy of someone who had witnessed it firsthand: the senior engineer who spent his first two days in Trivandpur oscillating between excitement and terror, arriving by Friday at the realization that the twenty percent of his work that could not be automated — the judgment, the taste, the instinct for what would break — was the part that mattered. The tool had not made him redundant. It had revealed what he was actually good at. But the revelation was accompanied by grief, because the eighty percent that had been automated was not merely labor. It was identity. It was the thing he had spent twenty-five years learning to do, the thing his professional community recognized and rewarded, the thing that made him who he was.
The turning point must address this identity disruption alongside the economic disruption, because the two are intertwined. Workers who lose their professional identities do not simply retrain and move on. They experience a crisis of meaning that affects their capacity to adapt, their willingness to invest in new skills, and their engagement with the political process through which turning-point institutions are built. A turning point that addresses the economic displacement without addressing the identity displacement will fail, because the people it needs to mobilize will be too demoralized to participate.
The third distinguishing feature is the institutional deficit inherited from the incomplete deployment of the previous revolution. This is, in Perez's analysis, the most dangerous feature of the current turning point, because it means that the society is entering the crisis with weaker institutional infrastructure than any society has brought to a turning point since the early industrial era.
The information age's deployment phase should have produced the educational reforms, the labor market restructuring, the social insurance modernization, and the governance innovations that would have prepared the society for the next wave of technological disruption. It did not. The gains of the digital revolution were captured by platform companies and their shareholders. The institutional innovations were blocked by political dynamics — the capture of regulatory processes by incumbents, the polarization that made progressive reform politically impossible, the erosion of the social movements that had historically driven turning-point institutional construction. The result is a society entering the AI turning point with educational institutions designed for the previous paradigm, labor market protections designed for the industrial employment relationship, social insurance systems designed for cyclical rather than structural displacement, and governance frameworks that cannot keep pace with a technology that evolves faster than legislative processes.
Perez argued in her 2024 essay that the technologies of the ICT revolution — AI, the Internet of Things, robotics, blockchain — are "there to be shaped." The verb is important. Shaped, not accepted. Shaped, not resisted. The technology is not a natural force that humans must simply endure. It is a tool — the most powerful tool in the history of the species — and the direction it takes is determined by the institutional structures that channel its power. The turning point is the moment when those structures are built, and the quality of the structures determines whether the technology produces broadly shared prosperity or concentrated wealth and distributed suffering.
What must the turning point produce? The Perez framework points to four domains of institutional innovation, each corresponding to a function that every previous golden age required.
The first is education. Every golden age was built on an educational foundation designed for its technological paradigm. Universal primary education for the factory system. Universal secondary education for the mass-production economy. Expanded higher education for the information age. The AI paradigm requires something different from all of its predecessors: not more training in specific skills that AI will commoditize, but the development of the judgment, questioning capacity, ethical reasoning, and integrative thinking that AI cannot perform. The educational restructuring is not a marginal adjustment. It is a paradigm shift within education itself — from training people to produce toward developing people to direct, evaluate, and govern AI-augmented processes. As Segal observed in The Orange Pill, educational institutions that fail to make this shift will see their demand evaporate as young people calculate that years of expensive education in skills already being commoditized is an investment with negative returns.
The second is labor market restructuring. Existing employment categories, compensation structures, and professional development frameworks are designed for a world in which human execution is the scarce resource. In a world where AI can execute competently across an expanding range of knowledge work, the scarce resource is the judgment that directs execution — the capacity to decide what should be built, for whom, in what institutional context, and according to what ethical constraints. New labor market institutions are needed that recognize this shift: portable benefits not tied to a single employer, professional development frameworks that maintain judgment and evaluation skills throughout a career, standards for AI-augmented work that ensure human oversight and accountability, and mechanisms for workers to share in the productivity gains that AI augmentation produces.
The third is social insurance. The displacement produced by AI is faster, more structurally complex, and more psychologically devastating than any previous revolution's creative destruction. The traditional unemployment insurance model — designed for cyclical layoffs from manufacturing jobs — is inadequate for the structural displacement of knowledge workers whose professional identities are being reconfigured. New forms of transition support are needed that address not just income replacement but the identity disruption, the retraining needs, and the community-level impacts of displacement.
The fourth is governance — and here the AI turning point poses a challenge that no previous turning point has faced. AI is not just the subject of governance. It is a tool that can be used to perform governance functions — to analyze policy options, model outcomes, draft legislation, evaluate regulatory effectiveness. This recursive quality — governing a technology that can itself govern — creates risks of capture and concentration that previous revolutions did not produce. The governance institutions built during the turning point must be designed not just to regulate AI but to ensure that AI does not capture the regulatory process itself — that democratic deliberation, with all its friction and imperfection, remains the mechanism through which the society determines the technology's trajectory.
Perez's emphasis on the state's role at the turning point bears repeating. It is not enough to wait for the market to deliver broad diffusion. Public institutions must actively shape the trajectory of the new paradigm. The market is extraordinarily good at allocating resources efficiently within a given institutional framework. It is not good at building the institutional framework itself. The factory legislation was not a market outcome. The New Deal was not a market outcome. The post-war social compact was not a market outcome. Each was the product of political will, institutional imagination, and the deliberate construction of frameworks that the market alone would never have produced.
The turning point is a window. It opens when the crisis concentrates attention. It closes when the crisis passes — either because the institutional innovations are built and the deployment phase begins, or because the political will dissipates and the installation phase's distribution of gains crystallizes into a durable pattern of inequality. The historical record shows both outcomes. The progressive resolution of the Victorian turning point produced a golden age. The failure to resolve the turning point of the 1920s produced the Great Depression. The partial resolution of the ICT turning point produced the institutional deficit into which AI has arrived.
The current window is open. The crisis is visible — in the trillion-dollar repricing, in the displacement of knowledge workers, in the educational institutions that are failing to prepare students for the world that actually exists, in the governance frameworks that are trailing the technology by years rather than months. The question is not whether the window will close. It will. The question is what gets built before it does.
The Perez framework cannot answer that question, because the answer depends on choices that have not yet been made — by builders, by educators, by regulators, by citizens. What the framework can do is identify the structural requirements of the moment: the institutional domains where innovation is needed, the historical patterns that illuminate what works and what fails, and the urgency of building during the window before it closes.
Kedrosky's critique — that the Perez framework risks becoming a secular theodicy that retroactively justifies every crash as necessary — carries additional weight here. The turning point is not a stage in an automatic process. It is a contest between the interests that benefit from the installation phase's concentration of gains and the interests that require the deployment phase's institutional redistribution. The outcome of that contest is not determined by the pattern. It is determined by the people who show up to fight it.
The window is open. What gets built inside it is the only question that matters.
The framework knitters of Nottingham knew exactly what was coming. They could see the wide stocking frames arriving in the workshops along the River Leen. They could calculate, with the arithmetic precision of people whose livelihoods depended on such calculations, what mechanized production would do to the price of stockings, the demand for hand-knitted goods, and the wages of the men and women who produced them. They were not ignorant. They were not irrational. They were skilled workers performing a sophisticated cost-benefit analysis and arriving at the correct conclusion: the new technology would destroy their economic position, their professional communities, and the way of life they had built around a specific, hard-won expertise.
They were right about the diagnosis. They were catastrophically wrong about the prescription.
Joseph Schumpeter gave creative destruction its theoretical dignity in 1942, describing it as the "essential fact about capitalism" — the process by which new technologies, new firms, and new organizational forms displace the old, not as an unfortunate side effect of progress but as the mechanism through which progress occurs. The concept had an elegant brutality. The destruction was not incidental to the creation. It was constitutive of it. The old had to die for the new to live. The framework knitters had to be displaced for the factory system to deliver cheaper cloth, broader employment, and eventually — eventually, after decades of suffering that Schumpeter's theory acknowledged but did not dwell on — rising living standards for the population as a whole.
Perez's framework accepts Schumpeter's description but adds the dimension that his theory fatally lacks: timing. Creative destruction does not operate uniformly across the cycle of a technological revolution. It operates differently in the installation phase and the deployment phase, and the institutional context in which it occurs determines whether it produces manageable transition or catastrophic dislocation. During the installation phase, creative destruction is at its most intense and its most unevenly distributed. The new paradigm displaces the old with maximum speed and minimum institutional cushioning. The gains concentrate among early adopters. The costs fall on the workers and communities most closely tied to the previous paradigm. There is no retraining infrastructure, because the new skills have not yet been codified. There is no social insurance adequate to structural displacement, because the insurance systems were designed for the old paradigm's risks. There is no educational pathway from the old expertise to the new, because the new expertise has not yet stabilized enough to be taught.
During the deployment phase, creative destruction continues — it never stops, in any phase of any revolution — but it operates within an institutional framework that manages the transition. Retraining programs exist. Social insurance covers structural displacement. Educational systems have been redesigned for the new paradigm. The destruction is real, but the institutions absorb the shock, redirect the displaced workers toward new opportunities, and distribute the gains of the new technology broadly enough to maintain social cohesion.
The framework knitters lived in the installation phase without deployment-phase institutions. That is why their experience was catastrophic. Not because the power loom was uniquely destructive — every revolution's signature technology destroys the previous paradigm's skill base — but because the institutional infrastructure that would have managed the transition did not yet exist. The factory legislation, the universal primary education, the labor protections that eventually redirected the gains of industrialization from the factory owners to the broader population — these were deployment-phase institutions built decades after the framework knitters had been destroyed. The golden age was constructed on top of their suffering, not despite it.
Now consider what distinguishes AI-driven creative destruction from every previous version. The distinction offers no comfort.
Previous revolutions destroyed skills that were embodied in physical processes. The hand-loom weaver's expertise was in the coordination of hands, feet, and eyes in the operation of a specific machine. The stagecoach driver's expertise was in the management of horses and the navigation of roads. The telegraph operator's expertise was in the encoding and decoding of Morse. In each case, the expertise was real, hard-won, and economically valuable — until the new technology made the physical process it depended on obsolete. But the destruction, while devastating for the individuals who experienced it, was bounded by the physicality of the skills involved. The displaced workers could, in principle, learn new physical processes. The retraining was difficult, but the category of productive activity — physical skill applied to physical materials — persisted across the transition.
AI destroys skills that are embodied in cognitive processes. The software architect's expertise is in the mental model of how systems fit together. The lawyer's expertise is in the interpretation of precedent and the construction of argument. The analyst's expertise is in the extraction of meaning from data. These are not physical processes that can be observed, decomposed, and retaught. They are patterns of thought built through years of immersive practice — the kind of deep, embodied understanding that Segal described in The Orange Pill as geological: thin layers of comprehension deposited through thousands of hours of patient engagement with problems that did not yield their structure easily.
When AI performs these cognitive tasks competently — not perfectly, not with the depth that decades of human practice produce, but competently enough for most commercial purposes — the displacement is qualitatively different from anything the framework knitters experienced. The category of productive activity itself is under pressure. It is not that the knowledge worker must learn a new cognitive process. It is that the cognitive process itself has been automated to a degree sufficient to destroy its market value, even as the deeper understanding that the process produced remains valuable in absolute terms.
This is what Perez's framework identifies as the most dangerous feature of AI-driven creative destruction: the gap between the absolute value of deep expertise and its market value. Deep expertise — the kind that takes years to build, that operates through embodied intuition rather than explicit rules, that allows a senior engineer to feel that something is wrong before she can articulate what — remains genuinely valuable. It is the foundation of the judgment that AI cannot yet replicate. But the market does not price absolute value. It prices scarcity relative to demand. And when AI makes competent performance across a wide range of cognitive tasks abundantly available, the market value of deep expertise in any single cognitive domain declines — not because the expertise is less real, but because the market has discovered that breadth is good enough for most purposes.
Segal named this dynamic with painful precision: depth was losing its market value, not because depth was less real or less valuable in absolute terms, but because the market had stopped rewarding the journey to the bottom now that the surface was good enough for most practical applications. The framework knitters would have recognized this formulation immediately. Their mastery of the stocking frame was deep, genuine, and economically worthless.
The speed at which this repricing occurs is unprecedented. Rita McGrath, applying the Perez framework to AI's impact on expert professions, argued that "systemic changes in technologies always lead to systemic changes in society. The winners in an old regime become the losers in a new one." The observation is historically uncontroversial. What is new is the velocity. The stocking frame's displacement unfolded over decades. The telegraph operator's obsolescence played out over years. The knowledge worker's repricing is happening in months — sometimes weeks. An engineer who was the most senior person on her team in November 2025 found herself, by February 2026, performing work that a junior colleague with an AI tool could approximate in a fraction of the time.
The psychological consequences of this velocity deserve more attention than they typically receive in economic analyses of technological disruption. Perez's framework identifies the turning point as a period of institutional reckoning, but the reckoning is not merely institutional. It is personal, existential, experienced by millions of individuals simultaneously. The senior engineer does not merely lose market value. She loses the narrative that organized her professional life — the story that said: you invested years in this expertise, and the investment was rational, and the expertise is valuable, and the career built on it is secure. When that narrative collapses in weeks rather than decades, the psychological shock is qualitatively different from what previous generations of displaced workers experienced. There is no time to grieve, adapt, and rebuild. The ground shifts under your feet while you are still standing on it, and the view, as Segal observed, gets better at the same time — which is the cruelest part, because you can see the opportunities that the new paradigm creates even as you feel your capacity to seize them eroding.
This compound experience — seeing the opportunity and feeling the ground give way simultaneously — is the emotional signature of creative destruction at AI speed. It maps onto the fight-or-flight dichotomy that Segal identified in the developer community: some leaning in, investing in the new tools, reconstructing their professional identities around judgment and direction rather than implementation; others retreating, lowering their cost of living, preparing for what they perceive as the end of their professional relevance. Neither response is irrational. Both are rational adaptations to a situation in which the individual has been asked to bear the full cost of a structural transformation that the society has not yet built institutions to manage.
And that is the critical structural point. The individual is bearing the cost because the institutions are not there. Perez's framework makes this explicit: the suffering of creative destruction during the installation phase is not a function of the technology's power. It is a function of the institutional vacuum in which the technology operates. The same technology, operating within deployment-phase institutions — retraining programs, transition support, educational pathways, professional development infrastructure — would produce the same creative destruction with radically different human consequences. The displacement would still occur. The repricing would still happen. But the people experiencing it would have institutional support for navigating the transition rather than being left to navigate it alone, in real time, with whatever resources they happen to have.
The Luddites broke machines because no one had built institutions. Segal made this point in The Orange Pill, and Perez's framework explains why the point is structural rather than merely historical. The Luddites' violence was not stupidity. It was the rational response of people who correctly identified the source of their suffering and had no institutional mechanism for addressing it. The machines were the proximate cause. The absence of institutions was the structural cause. When there is no retraining infrastructure, no social insurance adequate to structural displacement, no educational pathway from the old skills to the new — when the society has failed to build the deployment-phase institutions that every previous golden age required — then the people who bear the cost of creative destruction have no recourse except resistance. And resistance, as the Luddites demonstrated, is strategically catastrophic: it accelerates the political hostility toward the displaced, justifies the deployment of force against them, and produces a legal and cultural framework that criminalizes the expression of legitimate suffering.
The contemporary equivalents of machine-breaking are quieter but structurally analogous. The senior engineer who insists that AI-generated code is fundamentally inferior, the lawyer who argues that AI-drafted briefs lack the depth that decades of legal training produce, the professor who bans AI tools from the classroom — each is performing a version of the Luddites' response: resisting the technology because the institutions that would make the transition manageable do not exist. The resistance is not irrational. The diagnosis is accurate. Something genuinely valuable is being lost, and the people who have invested years in building that value have legitimate grounds for anger. But the resistance, like the machine-breaking, will not alter the structural dynamics of the transition. It will only ensure that the people who resist are excluded from the conversation about what gets built next.
The conversation about what gets built next is the turning point. And the turning point demands institutions that do not yet exist — institutions designed for the specific dynamics of AI-driven creative destruction: the speed of the displacement, the cognitive rather than physical nature of the skills being destroyed, the identity crisis that accompanies the repricing of deep expertise, and the psychological shock of watching the ground shift under your feet in weeks rather than decades.
What would such institutions look like? Not the static retraining programs designed for industrial displacement — six-month courses in a new skill that will itself be commoditized before the certificate is issued. Something more radical: institutional structures that support continuous adaptation rather than one-time retraining. Professional development frameworks that maintain and develop the judgment, integration, and direction capacities that AI cannot replicate. Transition support that addresses the identity crisis alongside the income crisis, recognizing that a knowledge worker who has lost her professional narrative needs more than a new skill — she needs a new story about what she is for.
The framework knitters never got those institutions. Their children did — eventually, imperfectly, after decades of suffering that could have been mitigated if the institutions had been built in time. The question for the current turning point is whether the institutions will be built at the speed the technology demands, or whether a generation of knowledge workers will bear the cost of the transition the way the framework knitters bore theirs: alone, without support, and without recourse.
The pattern says both outcomes are possible. The pattern cannot say which one will occur. That depends on what gets built during the window that the turning point has opened — a window that, given the speed of the technology, may be narrower than any previous turning point has offered.
Every golden age in the history of industrial capitalism was engineered. Not in the sense that a single architect drew blueprints and a single contractor poured the foundation — the process was messier, more contested, and more politically fraught than any architectural metaphor can capture. But engineered in the sense that the broadly shared prosperity that characterized each golden age was the product of specific institutional innovations, deliberately constructed during the turning point, that redirected the gains of the technological revolution from the few who had captured them during the installation phase to the many who would sustain them during the deployment.
The Victorian golden age required factory legislation that limited working hours, prohibited child labor, and established minimum safety standards. It required public education that produced a literate, numerate workforce capable of participating in the industrial economy. It required sanitation infrastructure that made the rapidly growing cities livable. It required the gradual expansion of the franchise that gave working people a political voice. None of these institutions existed before the turning point. All of them were built through political struggle — resisted by the factory owners who bore their immediate costs, demanded by the workers and reformers who understood that the technology alone would not produce the broadly shared benefits that social stability required.
The post-war golden age required a more elaborate institutional architecture: the Bretton Woods system that stabilized international finance, the welfare state that provided social insurance, the G.I. Bill and the expansion of higher education that created the human capital base, and the social compact between capital and labor that exchanged higher productivity for higher wages. Together, these institutions produced the most sustained period of broadly distributed economic growth in the history of capitalism — the golden age of the 1950s and 1960s, when living standards rose across all income levels and the middle class expanded to encompass the majority of the population.
The pattern is clear: the institutions were designed for the specific technology they accompanied. Victorian factory legislation addressed the specific hazards and exploitation patterns of the steam-powered factory. The New Deal financial regulation addressed the specific speculative excesses of the mass-production era's financial system. The post-war educational expansion addressed the specific human capital requirements of the consumer economy. Each institutional innovation was a response to a specific diagnosis of what the turning point's crisis had revealed about the gap between technological capability and social infrastructure.
What, then, does the AI turning point demand? Perez's framework identifies the functions that deployment-phase institutions must perform — redistributing gains, protecting workers, preparing citizens, governing the technology's trajectory — but it does not prescribe specific institutional forms, because each revolution's technology creates different challenges. The specific institutional forms must be invented for the AI age, informed by the pattern but not constrained by it.
The first domain is education, and it is the domain where the institutional deficit is most acute and the consequences of inaction most severe.
Every previous golden age was built on an educational foundation that matched the paradigm it accompanied. Universal primary education for the factory system: reading, writing, arithmetic, the capacity to follow instructions and keep records. Universal secondary education for the mass-production economy: the ability to operate complex machinery, manage organizational processes, and participate in the consumer economy. Expanded higher education for the information age: programming, data analysis, systems thinking.
The AI paradigm requires something different from all of these — not a higher level of the same kind of education, but a different kind entirely. When AI can execute competently across an expanding range of knowledge work, the educational system's task is no longer to produce competent executors. The executors are being produced, far more cheaply, by the technology itself. The educational system's task is to produce people who can direct execution wisely — who can ask the questions that determine what gets executed, evaluate the output that execution produces, and exercise the judgment that decides whether the output serves human needs or merely satisfies algorithmic metrics.
This is the shift from training to development: from teaching students to perform specific cognitive tasks to developing the capacities — judgment, questioning, ethical reasoning, integration across domains, tolerance for ambiguity, the ability to evaluate AI output critically — that AI cannot replicate and that the deployment-phase economy will value above all else. Segal described a teacher who had begun this shift by grading questions rather than essays — requiring students to demonstrate not what they knew but what they understood about what they did not know. The move was small. The principle was enormous: the educational objective had shifted from the production of answers to the cultivation of the capacity to ask.
Scaling this shift is the educational challenge of the turning point. It requires not merely new curricula but new assessment frameworks, new teacher training, new institutional cultures, and new relationships between educational institutions and the economy they serve. The university that continues to grant four-year degrees in skills that AI will commoditize before graduation is not just inefficient. It is actively harmful, burdening students with debt in exchange for credentials whose market value is declining faster than the institutional processes that produce them.
The second domain is labor market architecture, and here the challenge is structural rather than incremental.
Existing labor market institutions — employment law, compensation structures, benefits systems, professional credentialing — were designed for a world in which the employment relationship was the primary mechanism for distributing economic participation. A person worked for an employer, performed specified tasks, received wages and benefits, and built a career through progressive specialization within a professional category. The institutions that supported this model — minimum wage laws, overtime regulation, employer-provided health insurance, professional licensing — were designed for its specific contours.
AI is restructuring those contours. The work that creates the most value in an AI-augmented economy is not task performance within a professional category. It is direction across categories — the capacity to see connections between domains, to integrate insights from different fields, to exercise judgment about what should be built rather than executing the building itself. This work does not fit neatly into existing employment categories. The person who directs AI across multiple domains may not have a job title that existing classification systems recognize. The compensation structures designed for task-based work do not capture the value of judgment-based direction. The benefits systems tied to single-employer relationships do not serve workers whose contributions span multiple organizations and projects.
New labor market institutions must be designed for this reality. Portable benefits that follow the worker rather than the employer. Professional development accounts that maintain judgment and direction capabilities throughout a career. Standards for AI-augmented work that ensure human oversight, quality accountability, and mechanisms for workers to share in the productivity gains their judgment and direction produce. These are not utopian proposals. They are the structural equivalents of the factory legislation and collective bargaining frameworks that previous turning points produced — institutional innovations designed for the specific dynamics of the new paradigm.
The third domain is social insurance, and here the speed of AI-driven displacement creates a requirement that existing systems cannot meet.
Traditional social insurance addresses specific, identifiable events: job loss, workplace injury, retirement. The systems are designed for cyclical displacement — workers laid off during downturns who will be rehired when the economy recovers. AI produces structural displacement: workers whose skills are being permanently repriced, whose professional categories are being restructured, whose career trajectories have been altered by a technology that did not exist five years ago. The displacement is not cyclical. It will not reverse when the economy recovers. And the psychological dimension of the displacement — the identity crisis that accompanies the loss of a professional narrative built over decades — is not addressed by income replacement alone.
Transition support designed for AI-driven displacement must go beyond traditional unemployment insurance. It must address the retraining needs of workers whose next career may require fundamentally different capacities than their previous one. It must address the psychological dimension of displacement — the grief, the identity disruption, the loss of professional community that Segal documented in The Orange Pill. And it must operate at the speed of the displacement, which means that the traditional model of designing a program, piloting it, evaluating the results, and scaling it over five to ten years is inadequate. By the time such a program reaches scale, the workers it was designed to serve will have already navigated the transition — or failed to navigate it — without support.
The fourth domain is governance, and here the AI turning point poses a challenge unlike any that previous revolutions have presented.
Every previous revolution's governance institutions were designed by humans to regulate a technology operated by humans. The regulatory process — legislation, rulemaking, enforcement, adjudication — proceeded at human speed and addressed human-scale challenges. AI introduces a recursive element: the technology being governed is itself capable of performing governance functions. AI can analyze regulatory proposals, model policy outcomes, draft legislation, evaluate enforcement effectiveness, and identify regulatory gaps — all faster and, in some cases, more competently than the human processes that currently perform these functions.
This recursive quality creates risks of capture and concentration that previous revolutions did not pose. If AI is used to optimize regulatory processes, who controls the optimization? If AI analyzes policy options, whose values determine the objective function? If AI drafts legislation, whose interests does the draft serve? The governance institutions built during the turning point must ensure that democratic deliberation — slow, messy, imperfect, and irreplaceable — remains the mechanism through which societies determine the technology's direction. The alternative — governance optimized by the technology it governs, serving the interests of the parties that control the technology — is a form of capture more complete than any previous revolution has made possible.
The regulatory approach that Segal advocated in The Orange Pill — demand-side regulation that empowers citizens rather than supply-side regulation that constrains companies — aligns with Perez's framework in a crucial way. Supply-side regulation is the form most susceptible to capture, because it directly affects the competitive dynamics of the AI industry and therefore attracts the most intense lobbying from the parties with the greatest financial stake. Demand-side regulation — investment in citizen capability, educational reform, transition support, information infrastructure that enables informed democratic participation — is both more resistant to capture and more consequential for the deployment phase, because it builds the human infrastructure on which the golden age depends.
These four institutional domains — education, labor markets, social insurance, governance — constitute the deployment-phase blueprint for the AI age. Each requires innovations as fundamental as the factory legislation, the social compact, and the welfare state that previous golden ages demanded. And each must be built at a speed no previous turning point has required, because the technology is moving faster than any before it, and because the institutional deficit inherited from the incomplete deployment of the information age means this turning point starts from further behind than any of its predecessors.
The investment firm Baillie Gifford observed that "the surplus from technological shifts often leaks from infrastructure builders, which become commoditised, to users and society at large." The verb is important: leaks. Not flows. Not is directed. Leaks — the way water finds cracks in a dam, not because the dam was designed to let it through but because no dam is perfect. The deployment-phase institutions are the mechanisms that turn leakage into flow — that redirect the surplus deliberately, through institutional channels, from the concentrated gains of the installation phase to the broad distribution of the deployment phase.
Without those institutions, the surplus does not flow. It pools. It concentrates in the hands of the parties that captured it during the installation phase, and the golden age that the technology makes possible remains permanently unrealized — a structural possibility that the society chose, through institutional failure, not to build.
The blueprints exist in outline. The historical pattern provides the structural guidance. What remains is the construction — and the construction requires political will, institutional imagination, and a speed of institutional innovation that the turning point's window demands but that the political systems of most advanced economies are not currently equipped to deliver.
In every technological revolution, the regulatory response to the turning point has been shaped by a contest between two forces that the Perez framework identifies with structural clarity. On one side: the imperative to build deployment-phase institutions that redirect the gains of the revolution toward broad social benefit. On the other: the incentive of installation-phase incumbents to capture the regulatory process and shape rules that protect their existing positions against the restructuring that the deployment phase requires. The outcome of this contest has determined, more than any other single factor, whether the turning point resolves progressively — producing a golden age — or regressively — producing a prolonged period of institutional stagnation and concentrated inequality.
The contest is not fought between good and evil. It is fought between two rational responses to the same structural situation. The incumbents are not villains. They are organizations and individuals who invested enormous resources in the installation-phase paradigm and have legitimate interests in protecting those investments. The reformers are not saints. They are political actors with their own constituencies, their own blind spots, and their own capacity for regulatory overreach that stifles the innovation the deployment phase requires. The contest is messy, contingent, and decided by the specific political dynamics of the society in which it occurs.
The canal companies lobbied against railway legislation. The railway companies lobbied against road-building subsidies. The telegraph monopolies lobbied against telephone regulation. The broadcast media lobbied against internet deregulation. In each case, the incumbents used their accumulated political influence — their lobbying capacity, their regulatory relationships, their public credibility, their campaign contributions — to shape the regulatory environment in ways that protected their existing business models against the new paradigm's creative destruction. In each case, the degree to which they succeeded determined the speed and completeness of the deployment phase. Where the incumbents captured the regulatory process most completely, the deployment phase was delayed, the gains remained concentrated, and the golden age was diminished. Where the reformers prevailed, the regulatory framework enabled broad deployment and the golden age that followed was correspondingly more expansive.
The AI turning point is producing a regulatory contest of extraordinary complexity, because the technology being regulated is unlike anything previous regulatory frameworks were designed to address.
Consider the landscape as of mid-2026. The European Union's AI Act — the most comprehensive regulatory framework yet enacted — addresses AI through a risk-classification system that categorizes AI applications by their potential for harm and imposes regulatory requirements proportional to the risk category. The approach has the virtue of specificity: it identifies particular applications (biometric surveillance, critical infrastructure management, employment decisions) as high-risk and imposes particular requirements (human oversight, transparency, accuracy standards) on their deployment. It has the corresponding vice of all classification-based regulation: the technology evolves faster than the classification system. An AI capability that does not exist when the classification is written may emerge, find widespread adoption, and produce social consequences before the regulatory process can evaluate, classify, and regulate it.
The American approach, expressed through executive orders and agency guidance rather than comprehensive legislation, has been more adaptive but less coherent. Different agencies regulate AI within their existing mandates — the FDA for medical AI, the SEC for financial AI, the FTC for consumer AI — producing a patchwork of regulatory requirements that varies by sector, by agency, and by the political priorities of the administration in power. The approach has the virtue of flexibility: agencies can respond to new developments without waiting for legislative action. It has the corresponding vice of fragmentation: no single institution has responsibility for the systemic effects of AI that cross sectoral boundaries, and the gaps between agency mandates create spaces where significant AI impacts go unaddressed.
These regulatory responses are supply-side frameworks. They regulate what AI companies may build, what disclosures they must make, what risks they must assess. Supply-side regulation addresses genuine risks — the potential for AI systems to perpetuate bias, produce harmful outputs, compromise privacy, concentrate market power — and its development is both necessary and inevitable. But it is also the form of regulation most susceptible to capture, because it directly affects the competitive dynamics of the AI industry.
The parties with the largest financial stakes in the outcome of supply-side regulation — the AI companies themselves, the SaaS incumbents facing the Death Cross, the platform companies whose market positions depend on the regulatory treatment of AI — are also the parties with the greatest resources to invest in shaping the regulatory outcome. The lobbying expenditures of technology companies in the United States and Europe have increased dramatically since 2023, and the revolving door between regulatory agencies and the industries they regulate continues to spin. This is not conspiracy. It is the structural dynamic of regulatory capture operating in its characteristic manner: the parties most affected by regulation invest the most in shaping it, and the result is regulation that serves the interests of the regulated as much as the interests of the public.
Segal's call for demand-side regulation in The Orange Pill — regulation that empowers citizens, workers, students, and parents rather than constraining companies — represents a different approach to the same structural problem. Demand-side regulation does not directly affect the competitive dynamics of the AI industry, which means it attracts less lobbying and is less susceptible to capture. But it directly affects the capacity of the broader society to benefit from the deployment phase, which means its impact on the trajectory of the revolution is arguably greater.
What does demand-side regulation look like in practice? Investment in educational infrastructure that prepares citizens for the AI-augmented economy. Funding for transition support programs that help displaced workers navigate the shift from installation-phase skills to deployment-phase capabilities. Information infrastructure that enables citizens to understand AI's capabilities, limitations, and risks well enough to participate meaningfully in democratic deliberation about its trajectory. Standards and frameworks that give workers, consumers, and communities the tools to evaluate AI-augmented services, hold providers accountable, and demand quality that serves human needs rather than algorithmic metrics.
The distinction between smart regulation and captured regulation can be difficult to identify in real time, but Perez's historical analysis suggests several diagnostic principles.
The first: smart regulation is designed for the new paradigm, not the old one. Regulation that preserves existing professional categories against AI-driven restructuring — for example, regulation that requires human performance of cognitive tasks that AI can perform competently — may be presented as worker protection but functions structurally as incumbent protection. It preserves the installation-phase distribution of skills and rewards rather than enabling the deployment-phase restructuring that would produce broader benefits. The historical analogy is the effort to protect hand-loom weavers by restricting the use of power looms: well-intentioned, politically popular among the affected workers, and structurally counterproductive because it sought to delay the transition to the new paradigm without building the institutions that would have made the transition manageable.
The second: smart regulation focuses on outcomes rather than processes. Regulation that specifies how AI systems must be built — which architectures, which training methods, which safety protocols — locks in a snapshot of current technical practice and becomes obsolete as the technology evolves. Regulation that specifies what outcomes AI systems must achieve — accuracy standards, fairness requirements, transparency obligations, accountability mechanisms — can accommodate technological evolution while maintaining the social protections that the deployment phase requires. The distinction is not academic. It is the difference between regulatory frameworks that age gracefully as the technology matures and frameworks that become obstacles to innovation within years of their enactment.
The third: smart regulation empowers rather than restricts. The most effective deployment-phase institutions have historically been the ones that expanded capability rather than constrained it. Universal education empowered workers rather than restricting factory owners. The G.I. Bill expanded access to higher education rather than limiting the industries that required it. The social insurance systems of the post-war era empowered workers to take risks — to change jobs, start businesses, invest in new skills — rather than merely protecting them from the consequences of inaction. The AI-age regulatory framework should follow this pattern: expanding the capacity of citizens, workers, and communities to participate in the AI-augmented economy rather than merely constraining the technology companies that build the tools.
The fourth: smart regulation is informed by genuine understanding of the technology. The most dangerous regulatory proposals are the ones produced by lawmakers who do not understand what they are regulating. Regulation based on fear of AI's capabilities produces frameworks that address imagined risks while ignoring actual ones. Regulation based on misunderstanding of AI's limitations produces frameworks that assume capabilities the technology does not possess or fail to address vulnerabilities that it does. The governance institutions built during the turning point require regulators who understand the technology — not necessarily at the level of technical implementation, but at the level of structural understanding: what these systems can do, how they are likely to evolve, where the genuine risks concentrate, and where the genuine opportunities for broad social benefit reside.
The international dimension of AI regulation adds a layer of complexity that no previous turning point has faced at comparable scale. AI capabilities are being developed simultaneously across competing national systems with different institutional structures, different relationships between state and market, and different approaches to the turning point. The risk is regulatory arbitrage: AI companies migrating to the jurisdictions with the weakest regulatory requirements, producing a race to the bottom that undermines deployment-phase institutions everywhere. The opportunity is regulatory experimentation: different jurisdictions trying different approaches, producing evidence about what works and what fails that can inform the development of more effective frameworks globally.
The history of previous turning points suggests that both dynamics will operate simultaneously. The regulatory innovations of different national contexts — factory legislation in Britain, social insurance in Bismarck's Germany, Progressive Era regulation in the United States — emerged independently and were adapted across borders as their effectiveness became apparent. The same dynamic could operate with AI regulation, if the international community maintains sufficient coordination to learn from each other's experiments while preventing the worst forms of regulatory competition.
Perez's insistence that the state must actively shape the trajectory of the new paradigm — that it is not enough to wait for the market to deliver broad diffusion — carries particular weight in the regulatory domain. The market will not build deployment-phase regulatory institutions, because the market's incentives run in the opposite direction: toward regulatory environments that maximize the freedom of capital and minimize the constraints on profitable deployment. The regulatory institutions that every previous golden age required were built through political processes that overrode market incentives in the interest of broader social welfare. The AI golden age, if it arrives, will require equivalent political commitment to building regulatory frameworks that serve the deployment phase rather than protecting the installation phase's incumbents.
The window is open. The regulatory contest is underway. The outcome is not determined by the historical pattern, which shows both progressive and regressive resolutions. It is determined by the specific political dynamics of the societies in which the contest occurs — by who shows up, who organizes, who lobbies, who votes, and who builds the frameworks that channel the technology's power toward broad social benefit rather than narrow private gain.
The framework knitters had no voice in the regulatory process that governed the machinery displacing them. The knowledge workers facing AI-driven displacement do have a voice — if they use it, and if the institutions through which they use it are functional enough to translate their participation into policy. Whether those conditions are met is, itself, one of the central questions of the turning point.
In 1870, the British Parliament passed the Elementary Education Act, which established for the first time in English history a national system of primary schools. The Act was not a gesture of philanthropic goodwill. It was a structural response to a structural problem: the factory system had created an economy that required literate, numerate workers, and the haphazard collection of charity schools, church schools, and dame schools that constituted the existing educational infrastructure could not produce them at the scale or the quality the economy demanded. The industrialists who lobbied for the Act were not primarily motivated by moral concern for the children of the poor. They were motivated by the practical observation that illiterate workers made more mistakes, required more supervision, and produced lower-quality output than literate ones. The moral arguments were real and were made passionately by reformers who believed that every child deserved an education. But the political momentum came from the economic argument: the deployment phase of the Industrial Revolution could not proceed without an educational foundation designed for its requirements.
The pattern repeated with each subsequent revolution. The mass-production economy required workers who could read technical manuals, manage organizational processes, and participate in the consumer economy as informed purchasers. Universal secondary education — the high school movement in the United States, the grammar school system in Britain — was the institutional response. The information economy required workers who could program computers, manage databases, analyze data, and navigate knowledge-intensive organizations. The expansion of higher education — more universities, more community colleges, more professional training programs — was the institutional response, though, as Perez has argued, an incomplete one whose limitations contributed to the institutional deficit that AI inherited.
In each case, the educational restructuring was not incremental. It was paradigmatic — a fundamental change in what education was for, who it served, and how it operated. The shift from guild apprenticeship to universal primary education was not a marginal improvement in the existing system. It was the replacement of one educational paradigm with another, designed from the ground up for a different economic and social reality. The shift from primary to secondary education was not the addition of a few more years to the same model. It was the creation of a new institutional form — the comprehensive secondary school — with different curricula, different assessment methods, different relationships between school and economy.
The AI paradigm demands an equivalent restructuring, and the restructuring has not yet begun at any scale commensurate with the speed of the transformation it must address.
The structural problem is this: existing educational institutions, from primary schools through graduate programs, are designed to produce people who can perform specific cognitive tasks — write code, analyze data, draft legal arguments, build financial models, conduct research, produce reports. These are precisely the tasks that AI is learning to perform with increasing competence. An educational system that continues to optimize for the production of these skills is training students for a labor market that is being restructured in real time, investing years of their lives and substantial financial resources in acquiring capabilities whose market value is declining faster than the educational system can adjust.
A concrete example illuminates the scale of the mismatch. A student who entered a four-year computer science program in 2024 will graduate in 2028 into a world where the specific programming skills she learned — the languages, the frameworks, the debugging techniques, the software engineering methodologies — have been substantially commoditized by AI tools that did not exist when she enrolled. The institutional knowledge embedded in her degree — the curriculum design, the faculty expertise, the assessment standards — reflects the information-age paradigm, not the AI paradigm. She will graduate with a credential that certifies competence in tasks that AI performs competently, and she will enter a labor market that is already repricing execution downward and judgment upward.
This is not an argument against computer science education. It is an argument that computer science education, like all education, must be restructured for the paradigm it actually serves rather than the one it was designed for. The restructuring must address not what students learn but what capacities they develop — and the distinction between learning and development is the heart of the matter.
Learning, in the traditional educational sense, is the acquisition of knowledge and skills: facts to be remembered, procedures to be followed, techniques to be applied. Learning can be assessed through tests that measure recall, problem sets that measure procedural competence, and projects that measure the ability to apply techniques to specified problems. The educational system is optimized for learning. Its curricula, its assessment methods, its institutional incentives — all are designed to produce people who have learned specific things and can demonstrate that they have learned them.
Development is different. Development is the cultivation of capacities: judgment, questioning, ethical reasoning, integration across domains, tolerance for ambiguity, critical evaluation, the ability to direct and evaluate processes rather than execute them. Development cannot be assessed through tests that measure recall, because the capacities it produces are not facts to be remembered. It cannot be assessed through problem sets that measure procedural competence, because the capacities it produces are not procedures to be followed. It requires different pedagogies — mentorship, dialogue, sustained engagement with ambiguous problems, the kind of friction-rich interaction that builds embodied understanding rather than transferable information.
The educational paradigm shift that the AI turning point demands is the shift from learning to development — from an educational system optimized for the production of competent executors to one designed for the cultivation of capable directors. This shift is paradigmatic rather than incremental because it requires changes at every level of the educational system: the purpose (from skill transmission to capacity development), the pedagogy (from instruction to mentorship and dialogue), the assessment (from testing recall and procedure to evaluating judgment and questioning), the institutional culture (from standardization to individuation), and the relationship between education and the economy (from training for existing jobs to developing capabilities for roles that do not yet exist).
What does this look like in practice? The examples that exist are small-scale, experimental, and illuminating. Segal described one: a teacher who shifted from grading essays to grading the questions students asked before writing them. The shift was small in execution — a single assignment in a single classroom. It was enormous in principle: the educational objective moved from demonstrating knowledge to demonstrating the capacity to identify what one does not know. Students who produced the best questions demonstrated the deepest engagement with the material, because asking a question that genuinely opens inquiry requires a more sophisticated cognitive operation than producing an answer that closes it.
Other experiments point in the same direction. Project-based learning programs that require students to direct AI tools toward solving real-world problems, with assessment focused not on the quality of the output but on the quality of the direction: Did the student identify the right problem? Did she ask the right questions? Did she evaluate the AI's output critically? Did she recognize where the AI was wrong, or shallow, or confidently asserting something that broke under examination? These are the capacities the deployment-phase economy will value, and they can only be developed through educational experiences designed to cultivate them.
But small-scale experiments, however illuminating, do not constitute the systemic restructuring that the turning point demands. The challenge is not demonstrating that the new educational paradigm works. It is implementing it at scale, across thousands of institutions, millions of teachers, and hundreds of millions of students, within the compressed timeline that the speed of AI-driven transformation imposes.
The obstacles to scaling are structural rather than pedagogical. The first is institutional inertia. Educational institutions are among the most conservative institutions in any society, and their conservatism is not irrational. It is the product of institutional structures — tenure systems, accreditation requirements, standardized testing regimes, funding models tied to enrollment and graduation rates — that reward continuity and penalize experimentation. A university that restructures its computer science curriculum around judgment development rather than skill acquisition risks falling afoul of accreditation standards designed for the old paradigm, losing enrollment from students and parents who expect traditional credentials, and alienating faculty whose expertise and professional identities are tied to the subjects they have taught for decades.
The second is the teacher preparation gap. The shift from instruction to mentorship requires teachers who possess the capacities they are asked to develop in their students — judgment, questioning, critical evaluation, integration across domains. Many teachers possess these capacities. But the teacher preparation systems, the professional development infrastructure, and the institutional incentives that shape teaching practice are all designed for the instructional paradigm. Redesigning them for the developmental paradigm requires investment, time, and institutional will that the turning point's compressed timeline makes difficult to generate.
The third is assessment. The educational system's assessment infrastructure — standardized tests, grading rubrics, credentialing examinations — is designed to measure learning, not development. Measuring development is harder: it requires evaluating capacities that manifest differently in different contexts, that evolve over time, and that cannot be reduced to a single score or grade. New assessment frameworks must be invented, validated, and implemented — a process that, under normal institutional conditions, takes years or decades.
Perez's framework illuminates both the urgency and the structural difficulty of the educational restructuring. The urgency comes from the historical pattern: every golden age was built on an educational foundation designed for its paradigm, and the golden age could not arrive until the educational foundation was in place. The AI golden age cannot arrive until the educational system is restructured for the AI paradigm, because the deployment-phase economy requires people with capacities that the current educational system does not reliably produce.
The structural difficulty comes from the institutional deficit inherited from the incomplete deployment of the information age. The educational system that the AI paradigm demands be restructured is the same educational system that the information age never fully reformed. The expansion of higher education that the information age demanded was real but incomplete — accessible in principle, unaffordable in practice for a growing share of the population, and increasingly disconnected from the skills the economy actually demanded. AI is arriving into an educational system that was already straining under the weight of the previous paradigm's unfinished reforms, and it is asking that system to undertake a restructuring more fundamental than any it has faced since the establishment of universal primary education in the nineteenth century.
The risk of inaction is not gradual decline. It is the rapid evaporation of educational relevance. Segal's observation that young people were already making the rational calculation — weighing years of expensive education in commoditizing skills against immediate participation in an AI-augmented economy — describes a dynamic that, if it accelerates, could produce a generation of self-taught, AI-augmented workers with no institutional foundation for the judgment and ethical reasoning the deployment phase demands. That outcome would be worse for everyone: for the workers who lack the developmental support that education provides, for the economy that lacks the judgment-capable workforce the deployment phase requires, and for the society that lacks the critically thinking citizens that education is meant to produce.
The educational restructuring is not one institutional innovation among several. It is the foundation on which all the other deployment-phase institutions depend. Labor market restructuring requires workers with the capacities that restructured education develops. Social insurance innovation requires a population capable of navigating transition with the agency that education cultivates. Governance reform requires citizens equipped to participate in democratic deliberation about complex technological trajectories. Without the educational foundation, the other institutions cannot function — just as the Victorian factory legislation could not function without literate workers, and the post-war social compact could not function without an educated middle class.
The foundation does not yet exist. Building it is the most urgent, most difficult, and most consequential institutional challenge of the AI turning point. The historical pattern says it must be built. The speed of the technology says it must be built now. The institutional deficit inherited from the previous revolution says the starting point is further behind than any previous turning point's educational challenge. And the window of institutional opportunity that the turning point has opened will not remain open indefinitely.
Every previous golden age was built on an educational foundation. The AI golden age will be no different. The question is whether the foundation will be built in time — or whether, like the framework knitters, a generation will navigate the transition without the institutional support that could have made the difference between catastrophe and flourishing.
The quarterly board meeting is the smallest unit of institutional failure.
This is not a criticism of boards, or of the people who sit on them. It is a structural observation about the temporal mismatch between the time horizon on which installation-phase financial capital operates and the time horizon on which deployment-phase institutions must be built. Financial capital thinks in quarters. Deployment-phase institutions are built over years. When the quarterly review asks a builder to justify keeping twenty engineers whose collective output could theoretically be replicated by five engineers with AI tools, the builder faces a choice that is simultaneously personal, organizational, and civilizational — though it rarely feels like all three at once.
Segal described this choice in The Orange Pill with the candor of someone who had lived inside it. The arithmetic was clean and seductive: if each person could do the work of twenty, why not reduce the team to five and convert the productivity gain into margin? The Believer's path was faster, leaner, more immediately profitable. The board conversation would return next quarter. The market rewarded efficiency more reliably than it rewarded vision. And the Beaver, as Segal acknowledged, built on a longer timeline than a quarter.
Perez's framework explains why this choice matters beyond the individual organization — why the aggregate of millions of such choices, made by millions of builders in millions of quarterly reviews, constitutes the institutional landscape of the turning point.
During the installation phase, the dominant logic is extraction: convert the gains of the new technology into returns for the parties that funded its development. This logic is not pathological. It is the mechanism by which financial capital recovers its investment and is incentivized to fund the next round of infrastructure. The canal investors needed returns. The railway promoters needed returns. The venture capitalists who funded the internet infrastructure needed returns. The extraction is the price the economy pays for the speed at which financial capital installs the new paradigm's infrastructure.
But extraction becomes pathological when it persists into the deployment phase — when the gains of the technology continue to flow to the parties that captured them during the installation phase rather than being redirected, through institutional mechanisms, toward the broader population. The post-war golden age was built on the deliberate construction of mechanisms — collective bargaining, progressive taxation, public investment in education and infrastructure — that redirected productivity gains from capital to labor, from the few to the many. When those mechanisms were weakened, beginning in the 1970s, the extraction resumed, the gains reconcentrated, and the institutional foundation of the golden age eroded.
The builder who converts AI productivity gains directly into headcount reduction is operating within the extraction logic. The builder who keeps the team and expands their capability is operating within the deployment logic. Both are rational responses to the incentives they face. The extraction logic is rewarded by the quarterly cycle. The deployment logic requires faith in a future that has not yet arrived — faith that the deployment phase will eventually reward patient investment in human capability over short-term extraction of productivity surplus.
Perez's historical analysis provides grounds for that faith, though not certainty. The deployment phase has arrived in every previous revolution, and the organizations that invested in institutional depth during the turning point were rewarded when it did. The factory owners who invested in worker welfare built the enterprises that survived into the Victorian golden age. The industrial leaders who invested in training and organizational capability built the enterprises that dominated the post-war era. The pattern is consistent: patient investment in human capability during the turning point produces durable competitive advantage during the deployment phase.
But the pattern also shows that many builders who chose the extraction logic during the turning point were rewarded in the short term and destroyed in the long term. The overleveraged railway companies of the 1840s, the speculative conglomerates of the 1920s, the dot-com companies that optimized for growth over sustainability — all chose the installation-phase logic, pursued extraction over investment, and were destroyed when the turning point's repricing rewarded the enterprises that had built institutional depth.
The individual builder's wager is therefore a bet on timing. The builder who invests in team capability is betting that the deployment phase will arrive soon enough to justify the investment. If it does, the team's enhanced capability — their judgment, their integration across domains, their capacity to direct AI wisely — becomes the competitive advantage that the deployment phase rewards. If the deployment phase is delayed — if the turning point resolves regressively, if the institutional innovations are not built, if the extraction logic persists — the investment may be a competitive disadvantage, because competitors who extracted will have lower costs and higher short-term profitability.
This is a genuine risk, and Segal's honesty about feeling it — the quarterly pressure, the board conversation that returns, the market that does not reward patience — is the honesty the moment requires. The builder's wager is not a comfortable choice between right and wrong. It is an uncertain bet on the timing and character of the turning point, made under conditions of genuine ambiguity about whether the deployment phase will arrive in time to reward the investment.
But the individual builder's wager has a collective dimension that transforms its significance. The aggregate of individual builders' choices constitutes the institutional landscape of the turning point. If most builders choose extraction — converting AI productivity gains into headcount reduction, outsourcing judgment to AI without investing in the human capacity to direct it, optimizing for quarterly returns over long-term capability — the turning point's institutional deficit deepens. The workers who would have developed deployment-phase capabilities within organizations are instead displaced. The organizational knowledge that would have informed deployment-phase institutional design is instead lost. The demand for deployment-phase institutions — education reform, labor market restructuring, social insurance — increases at the same time that the capacity to build those institutions decreases, because the people with the deepest understanding of what the deployment phase requires are the people the extraction logic displaces.
If most builders choose investment — keeping teams, expanding capability, developing the judgment and direction capacities that the deployment phase rewards — the turning point's institutional response accelerates. The workers develop deployment-phase capabilities within organizations. The organizational knowledge accumulates. The demand for deployment-phase institutions is met by a workforce that has already begun the transition from installation-phase skills to deployment-phase capacities.
The individual builder cannot control the collective outcome. She can only make her own choice, in her own organization, in the face of her own quarterly pressure. But the choice matters beyond its immediate organizational context, because it contributes to the aggregate institutional landscape that determines whether the turning point resolves progressively or regressively.
This is why the builder's wager is ethical as well as strategic. The builder who chooses extraction bears partial responsibility for the collective consequence — the deepened institutional deficit, the accelerated displacement, the weakened capacity for deployment-phase construction. The builder who chooses investment contributes to the collective capacity for institutional adaptation, even if her individual organization pays a short-term cost for the choice.
The ethical dimension does not make the choice easier. It makes it heavier. The builder knows that her choice matters, that it contributes to something larger than her quarterly results, and that the aggregate of choices like hers will determine whether the turning point produces a golden age or a lost generation. She also knows that the market does not reward ethical choices on a quarterly basis, that the competitive dynamics of the installation phase penalize patience, and that the deployment phase — the period when patient investment is rewarded — may not arrive in time to justify the sacrifice.
Perez's framework provides structural assurance but not temporal certainty. The deployment phase has arrived in every previous revolution. It will arrive in this one. But "will arrive" does not specify when, and the when matters for the individual builder facing the quarterly review. The framework says the bet is structurally sound. It does not say the bet will pay off within the planning horizon of the person making it.
This uncertainty is the condition of the turning point. The builders who built the Victorian golden age did not know they were building a golden age. They were making choices in conditions of genuine uncertainty about whether their investments would be rewarded. The builders who constructed the post-war social compact did not know that the compact would produce three decades of broadly shared prosperity. They were making political and institutional choices in the aftermath of catastrophe, motivated by the determination that the catastrophe should not recur but uncertain about whether their institutional innovations would be adequate to prevent it.
The current generation of builders faces the same uncertainty. The pattern says the deployment phase will arrive. The pattern does not say when, or in what form, or whether the specific choices any individual builder makes will be rewarded within that builder's career. The wager is a bet on the pattern's structural reliability, made in the face of temporal uncertainty — the same bet that every builder who contributed to every previous golden age made, without the benefit of knowing how the story would end.
The story has not ended. The turning point is underway. The window of institutional opportunity is open. And the aggregate of individual builders' choices — extraction or investment, quarterly optimization or long-term capability, installation-phase logic or deployment-phase logic — is, right now, determining the landscape of the turning point and the trajectory of the revolution for decades to come.
The post-war golden age lasted roughly twenty-five years — from the late 1940s to the early 1970s — and it was the most broadly shared period of prosperity in the history of industrial capitalism. Real wages rose across all income levels. The middle class expanded to encompass the majority of the population in the advanced economies. Home ownership, access to higher education, and material living standards improved for hundreds of millions of people. The gains were not distributed equally — racial minorities, women, and the populations of the Global South were systematically excluded from many of the golden age's benefits — but the direction of the distribution was progressive, and the institutional infrastructure that produced it was the most ambitious experiment in managed capitalism that any society had undertaken.
That golden age was not inevitable. It was constructed — through the New Deal, through Bretton Woods, through the welfare state, through the G.I. Bill, through the social compact between capital and labor, through progressive taxation that funded public investment, through the expansion of education that created the human capital base the mass-production economy required. Each of these institutions was contested, debated, and built through political processes that could have produced different outcomes. The Great Depression could have ended in fascism — it did, in several European countries. The post-war settlement could have produced a different distribution of gains — it did, in the Soviet bloc, where the gains were distributed through command rather than managed capitalism, with results that demonstrated the limits of that approach. The golden age was one possibility among several, and it became the actual outcome because specific institutional choices were made during the turning point by people who understood what the moment demanded.
The question that hangs over the AI revolution — whether golden ages still come — is not a question about the technology. The technology is sufficient. AI's capacity to expand human capability, to collapse the distance between imagination and realization, to democratize the tools of creation — all of this is real, demonstrated, and accelerating. The question is whether the institutional infrastructure that turns technological capability into broadly shared prosperity can still be built, given the political, social, and institutional conditions of the early twenty-first century.
The conditions are not favorable. Perez has acknowledged this, most clearly in her 2019 statement identifying the present as structurally analogous to the 1930s — a turning point between installation and deployment, with two frenzies completed and no golden age yet realized. The conditions that made previous golden ages possible — the political coalitions, the social movements, the institutional capacity for ambitious reform — have been weakened by decades of polarization, institutional distrust, regulatory capture, and the erosion of the social compact that the information age's incomplete deployment produced.
Kedrosky's critique — that the Perez framework risks becoming a secular theodicy — points at the real danger. If the historical pattern is read as a guarantee rather than a structural possibility, it becomes a reason for complacency: the golden age will come because it always has, the institutions will be built because they always have been, the turning point will resolve progressively because that is what the pattern does. This reading is both unfaithful to the historical record — which shows regressive resolutions alongside progressive ones — and dangerous, because it excuses the inaction that regressive resolutions require.
The honest reading of the pattern is more demanding and more useful. Golden ages are possible. They have been constructed before, under conditions that were, in their own ways, as challenging as the current moment. The Victorians built a golden age out of the social devastation of early industrialization. The post-war generation built a golden age out of the catastrophe of the Great Depression and two world wars. In each case, the crisis of the turning point was severe enough to concentrate political attention, mobilize constituencies for reform, and create the conditions under which ambitious institutional innovation became politically possible.
The crisis of the current turning point is still forming. The trillion-dollar repricing, the displacement of knowledge workers, the educational mismatch, the governance gap — these are early expressions of a crisis whose full dimensions have not yet been revealed. Whether the crisis, when it fully arrives, produces the political conditions for institutional innovation depends on factors that the Perez framework identifies but cannot control: the strength of the political coalitions that advocate for reform, the quality of the institutional proposals they advance, the capacity of the political system to translate popular demand into effective policy, and the willingness of the parties that benefited from the installation phase to accept the redistribution that the deployment phase requires.
The technology's contribution to the turning point's resolution is both ambiguous and significant. AI is both the source of the crisis and a potential tool for building the institutions that address it. AI can accelerate educational restructuring — personalized learning at scale, adaptive assessment, the democratization of access to mentorship and feedback. AI can inform labor market policy — modeling the dynamics of displacement and retraining, identifying the skills the deployment phase will value, predicting the sectoral impacts of technological change. AI can enhance governance — analyzing regulatory proposals, modeling policy outcomes, enabling more informed democratic deliberation. The recursive quality that makes AI governance so challenging — governing a technology that can itself perform governance functions — is also what makes AI a potential instrument of the institutional innovation the turning point demands.
Whether AI is used to build deployment-phase institutions or to consolidate installation-phase advantages depends on who directs it and toward what ends. The technology amplifies whatever signal it receives — Segal's core argument in The Orange Pill holds here with full force. AI directed toward educational restructuring accelerates the construction of the deployment phase's foundation. AI directed toward optimizing extraction accelerates the concentration of gains that makes the deployment phase impossible. The technology does not choose. The people who wield it choose, and the institutions that channel its use determine the aggregate direction.
Perez's consistent emphasis on the active role of the state at the turning point is not nostalgic statism. It is a structural observation drawn from two and a half centuries of evidence: markets alone do not build deployment-phase institutions, because markets optimize within existing institutional frameworks rather than constructing new ones. Factory legislation was not a market outcome. The welfare state was not a market outcome. The G.I. Bill was not a market outcome. Each required political will to override market incentives in the interest of broader social welfare. The AI golden age, if it arrives, will require equivalent political commitment — commitment that the current political environment, in most advanced economies, appears ill-equipped to produce.
But the current political environment is not the permanent political environment. Turning points change political conditions. The crises they produce concentrate attention, reveal the insufficiency of existing arrangements, and create constituencies for reform that did not exist before the crisis made the insufficiency visible. The New Deal coalition did not exist before the Great Depression. The post-war consensus did not exist before the war. The political conditions for institutional innovation are created by the turning point's crisis, not by the normal political processes that precede it.
This is the ground for what Perez has called "tempered optimism" — not the assurance that the golden age will arrive, but the structural observation that the conditions for its construction are being created by the same crisis that makes it necessary. The crisis that AI is producing — the displacement, the repricing, the institutional mismatch, the educational gap, the governance failure — is also the crisis that will create the political conditions for the institutional innovations the deployment phase requires. Whether those conditions are seized depends on the people who recognize them for what they are and act accordingly.
Hermas Ayi, writing from an African perspective, raised a structural objection to the Perez framework that the framework itself must absorb: the model assumes the global economy advances synchronously through the phases of the cycle, yet "markets are interconnected, but temporalities remain profoundly unequal." The golden ages of the past were golden ages for the advanced economies. They were not golden ages for the colonized world, the developing world, the populations excluded from the institutional infrastructure that produced broadly shared prosperity in Europe and North America. If the AI golden age replicates this pattern — broadly shared within the advanced economies, concentrated and extractive in its relationship to the Global South — it will not be a golden age in any morally defensible sense.
The AI paradigm creates conditions that could, for the first time, genuinely globalize the golden age's distribution — because the technology that collapses the distance between imagination and artifact does so regardless of geography. The developer in Lagos, the entrepreneur in Accra, the teacher in rural India — all can, in principle, access the same capability-enhancing tools as the engineer in San Francisco. But "in principle" is not "in practice," and the gap between principle and practice is filled by precisely the institutional infrastructure — connectivity, education, financial access, governance, regulatory frameworks — that the deployment phase must build. The global dimension of the AI golden age is not an afterthought. It is a test of whether the deployment-phase institutions are genuinely deployment-phase institutions or merely a new form of the advanced economies' self-regarding prosperity.
The pattern says golden ages are possible. The pattern says they must be built. The pattern says the building happens during the turning point, when the crisis creates the political conditions for institutional innovation. The pattern says the outcome depends on choices — choices made by builders, by educators, by regulators, by citizens, by the people who understand what the moment demands and choose to act on that understanding.
The AI revolution has installed the infrastructure of extraordinary capability. The frenzy has funded it. The creative destruction is underway. The repricing has begun. The crisis is forming. The window of institutional opportunity is open.
Whether a golden age emerges on the other side depends entirely on what gets built inside that window. Not by the technology, which is indifferent to the distribution of its benefits. Not by the market, which optimizes within existing frameworks rather than constructing new ones. But by the people and institutions that recognize the structural requirements of the moment and choose — deliberately, urgently, with full awareness of what is at stake — to build the deployment-phase infrastructure that every previous golden age required.
Perez's four decades of historical analysis converge on a single practical conclusion: the technology is ready. The question is whether the society is.
---
There is a sentence in Perez's 2024 essay that functions differently from the rest of her argument. Most of the essay is framework — the five revolutions, the installation-deployment distinction, the structural position of AI within the ICT paradigm. Analytical architecture. Then this: "The power of AI, IoT, 3D, robots, blockchain is there to be shaped."
There to be shaped. Not "there to be celebrated." Not "there to be feared." Not even "there to be governed," which would still place the emphasis on constraint rather than direction. Shaped. The verb implies material in your hands — raw, powerful, waiting for form. It implies that the form is not yet determined. It implies that the determination is yours.
That verb changed something for me.
I wrote The Orange Pill from inside the frenzy. I was building Napster Station on thirty days' notice, training engineers in Trivandpur, crossing oceans to showcase what AI-augmented teams could do, writing chapter drafts on transatlantic flights at three in the morning because the conversation with Claude was more stimulating than sleep. The exhilaration was genuine. So was the vertigo. I described the experience as "falling and flying at the same time," and I meant it — the simultaneous sensation of capability expanding and ground shifting, of seeing further while standing on less.
What Perez's framework gave me was the structural context that the frenzy, by its nature, obscures. When you are inside the installation phase, operating at the frontier, building at speeds that would have been inconceivable five years earlier, the frenzy feels like the whole story. The technology is transforming everything. The adoption curves are vertical. The productivity multipliers are real. The implications are staggering. And they are — all of that is true. But the frenzy is not the whole story. The frenzy is the first act of a drama that has played out five times before, and the first act's ending is not the play's ending.
The play's ending depends on the turning point. On whether the institutions get built. On whether the educational systems are restructured. On whether the workers who bear the cost of creative destruction are supported through the transition or abandoned to it. On whether the governance frameworks are designed to enable broad deployment or captured to protect installation-phase incumbents. On whether the gains of this extraordinary technology are distributed broadly enough to sustain a golden age, or concentrated narrowly enough to produce a lost generation.
I described my twenty engineers in Trivandpur achieving twenty-fold productivity multipliers and feeling both the exhilaration and the terror of what that implied. Perez's framework tells me what the implication actually is: those engineers are living in the gap between installation and deployment, experiencing the capability expansion of the new paradigm without the institutional infrastructure that would ensure the expansion benefits more than the people in that room. The technology works. The institutions do not yet exist to make the technology work for everyone.
That gap is where I build. It is where everyone reading this builds, whether they know it or not. The choices we make inside the gap — quarterly choices about headcount, pedagogical choices about curriculum, policy choices about regulation, personal choices about how we engage with tools that amplify everything we bring to them — these choices aggregate into the institutional landscape of the turning point. They determine whether the gap closes progressively or regressively. They determine whether the golden age arrives.
I don't know if the golden age will arrive. Perez's framework offers structural possibility, not guarantee. The honest reading of two and a half centuries of evidence is that golden ages are built by people who recognize the turning point for what it is and choose to act — knowing that the outcome is uncertain, knowing that the quarterly pressure never relents, knowing that the market rewards extraction more reliably than investment, and choosing anyway to build the institutions that the deployment phase requires.
The technology is there to be shaped.
The shaping is the work.
— Edo Segal
A trillion dollars vanished from software companies in eight weeks. The AI frenzy is building infrastructure at unprecedented speed — and the correction is coming, because it always comes. The question that Carlota Perez's four decades of research makes inescapable is not whether the bubble will burst, but what institutions get built in the window the burst opens. Every golden age in the history of capitalism was constructed during that window. None arrived automatically. This book maps Perez's framework onto the AI revolution — the speculative frenzy, the Death Cross repricing, the institutional vacuum left by the information age's incomplete deployment — and asks the hardest question of the moment: are we building the educational, labor, and governance institutions that the deployment phase demands, or are we assuming the technology will sort it out? History says it won't. From the framework knitters of Nottingham to the knowledge workers of 2026, the pattern holds. The technology installs the capability. The institutions determine who benefits. The turning point is now. — Carlota Perez

A reading-companion catalog of the 15 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Carlota Perez — On AI uses as stepping stones for thinking through the AI revolution.