By Edo Segal
The position I couldn't see was my own.
That's the thing about leverage. Financial leverage, cognitive leverage, emotional leverage — it all works the same way. You take on more than your current reserves can cover, and you sustain the gap by producing more. The production justifies the commitment. The commitment demands more production. The loop closes, and from inside the loop, everything looks like momentum.
I described it in *The Orange Pill* without having the vocabulary for it. The transatlantic flight where I wrote a hundred-and-eighty-seven-page draft. The nights in Trivandrum where I couldn't close the laptop. The mornings I told my wife I'd be done soon, knowing I wouldn't be, knowing the next prompt would lead to the next feature would lead to the next chapter would lead to the next promise I'd make to myself about when I'd stop. I called it productive addiction. I called it flow that had curdled into compulsion. I described the whip and the hand that held it as belonging to the same person.
I did not have the name for the structure underneath.
Hyman Minsky gave me the name. Not for AI. Not for technology. For the dynamic itself — the way success generates the conditions for its own reversal, the way stability breeds instability, the way the longer everything works the more fragile the system becomes, precisely because the working convinces everyone to eliminate the margins of safety that would protect them when it stops.
Minsky was an economist who spent four decades arguing that capitalism's crises are not accidents inflicted from outside. They are generated from within, by the system's own success, through the rational behavior of participants who are responding sensibly to every signal in their environment. He was largely ignored during the calm years. He became inescapable after 2008.
What struck me was how precisely his framework maps onto what I'm watching happen right now. Not just in financial markets, though the trillion-dollar SaaS repricing I described in *The Orange Pill* is a textbook Minsky event. The mapping goes deeper — into organizations restructuring around a productivity multiplier they haven't stress-tested, into careers built on the assumption that AI capability will improve forever at its current rate, into the professional culture that celebrates intensity without asking whether the intensity is sustainable.
The framework maps onto me. Onto my own leverage. Onto the gap between my commitments and my reserves that I was sustaining through sheer output.
This book is the Minsky lens applied to the AI moment. It will not make you pessimistic. It will make you honest about where the margins are — and whether you still have them.
— Edo Segal ^ Opus 4.6
Hyman Minsky (1919–1996) was an American economist whose work on financial instability was largely marginalized during his lifetime and became urgently relevant after the 2008 global financial crisis. Born in Chicago to socialist immigrant parents, Minsky studied mathematics at the University of Chicago and completed his doctorate at Harvard under Joseph Schumpeter and Wassily Leontief. He spent most of his academic career at Washington University in St. Louis and was affiliated with the Levy Economics Institute of Bard College. His central contribution, the Financial Instability Hypothesis, argues that capitalist economies endogenously generate fragility during periods of prosperity — that stability itself breeds instability as rational actors progressively reduce margins of safety during calm periods. His taxonomy of financial positions — hedge, speculative, and Ponzi — provided the analytical vocabulary for understanding the 2008 crisis, and the phrase "Minsky moment," coined by economist Paul McCulley, entered the global financial lexicon. His major works include *John Maynard Keynes* (1975), *Can "It" Happen Again?* (1982), and *Stabilizing an Unstable Economy* (1986).
The most dangerous moment in the life of an economic system is the moment when everything appears to be working.
Hyman Minsky spent four decades making this argument, and for most of those decades the economics profession ignored him. The prevailing orthodoxy held that markets tend toward equilibrium, that rational actors process information efficiently, and that financial crises are caused by external shocks — wars, policy errors, natural disasters — rather than by the internal dynamics of the system itself. Minsky insisted on the opposite. The crisis is not an accident that befalls a healthy system. The crisis is what a healthy system produces, endogenously, through the rational behavior of its participants, precisely because the system is healthy. Stability breeds instability. Success generates the conditions for its own reversal. The longer the calm, the more violent the storm — not despite the calm, but because of it.
Minsky died in October 1996, a full quarter-century before ChatGPT reached a hundred million users in two months, before Claude Code's run-rate revenue crossed $2.5 billion, before a trillion dollars of market value evaporated from software companies in the opening weeks of 2026. He never saw a large language model. He never typed a prompt. He never experienced the vertigo that Edo Segal describes in *The Orange Pill* — the sensation of watching a machine produce in an hour what a team of engineers would have needed months to build. But the framework Minsky left behind is, arguably, the single most powerful analytical instrument available for understanding what the AI boom is doing to the economic structures that surround it. Not because Minsky predicted AI. Because Minsky predicted what happens to any system when a genuine, transformative success arrives and the participants begin to optimize their behavior around the assumption that the success will persist indefinitely.
The mechanism is precise. It operates in stages. And it is worth tracing carefully, because the stages are visible right now in the AI economy, in real organizations, in real careers, in the real decisions being made by real people who are acting rationally given the signals in their environment — and who are, in aggregate, generating the fragility that will determine the character of the next correction.
The first stage is displacement. In Minsky's framework, every speculative cycle begins with a displacement — a genuine event that changes the perceived profit opportunities in a major sector of the economy. The displacement is not illusory. It is real. The canal system that opened new trade routes in eighteenth-century Britain was a real improvement in transportation infrastructure. The railroads that crisscrossed nineteenth-century America created real economic value. The internet protocols that connected computers in the 1990s enabled real communication and commerce at scales previously impossible. The displacement produces genuine gains for early adopters, and those gains are observable, measurable, and rational to pursue.
The AI displacement began in late 2022 with the public release of ChatGPT, but the displacement that concerns this analysis — the one Segal calls the "orange pill" — arrived in the winter of 2025, when coding assistants crossed a capability threshold that made the previous paradigm not merely less efficient but categorically different. The twenty-fold productivity multiplier that Segal documented in Trivandrum was real. The engineer who had never written frontend code but built a complete user-facing feature in two days using Claude was not hallucinating her output. The product existed. It worked. The displacement was genuine.
This is essential to state clearly, because Minsky's framework is often misread as a theory of fraud or delusion. It is neither. The financial instability hypothesis does not require that the initial gains be illusory. It requires only that the initial gains be real — real enough to change behavior, real enough to restructure expectations, real enough to convince rational actors that the world has permanently changed in a direction favorable to increased risk-taking. The more real the gains, the more dangerous the subsequent dynamics, because the reality of the gains is what makes the behavioral shift feel justified rather than reckless.
The second stage is boom. The genuine gains attract attention. Early adopters report their results. Others observe the results and adjust their behavior accordingly. Investment flows toward the new opportunity. Organizations restructure to capture the gains. The restructuring is rational at the level of the individual firm: if AI tools allow five engineers to produce what fifty produced before, the firm that retains fifty engineers is at a competitive disadvantage against the firm that employs five and redirects the savings. The market rewards the restructuring. Stock prices rise for the restructurers. Analysts upgrade their ratings. The signal propagates: this is working. Do more of it.
Segal describes the board conversation that every AI-adopting company is having. The arithmetic is on the table. If the twenty-fold multiplier holds, the headcount reduction writes itself. Segal chose to keep and grow his team — what he calls the Beaver's choice — but he acknowledges that this choice fights against the incentive structure of the market, which rewards quarterly efficiency over long-term ecosystem building. The market does not reward patience. It rewards quarters. And the quarterly arithmetic, in the boom phase, points unambiguously toward acceleration.
The third stage is euphoria. This is where Minsky's analysis becomes most uncomfortable, because euphoria does not feel like euphoria from the inside. From the inside, it feels like clarity. The participants are not drunk on irrational exuberance. They are processing the available information and arriving at conclusions that the information supports. AI capability has improved every quarter for two years. Revenue is climbing. Productivity gains are measurable and replicable. The developer in Lagos who could not previously build a product can now build one in a weekend. The evidence for continued improvement is overwhelming. The rational conclusion is to build on the assumption that improvement will continue.
The problem — and this is Minsky's deepest insight — is that the evidence for continued improvement is produced by the boom itself. The revenue is climbing because organizations are investing in AI. The productivity gains are replicable because the tools are improving because investment is flowing because the gains are replicable. The loop is self-reinforcing. The evidence that the boom will continue is generated endogenously by the behavior of the participants in the boom. It is not independent verification. It is the system testifying to its own health — which is precisely the testimony that should be trusted least.
Consider the specific case of AI infrastructure investment. Amazon, Alphabet, Meta, and Microsoft collectively spent nearly $300 billion on capital expenditures in 2025, with AI accounting for an increasingly dominant share. Their collective capital expenditures represent an estimated 1.3 percent of U.S. GDP, projected to rise to 1.6 percent in 2026. OpenAI committed to spending $1.4 trillion over eight years building data centers, against just $13 billion in annual revenue. These numbers would be alarming in any Minskyan analysis. But the alarm is muted by the boom's own logic: the investment is justified because AI capability is improving, and AI capability is improving partly because the investment is flowing.
Several analysts have identified what they call "circular investing" in the AI sector — the phenomenon in which the hyperscaler companies are investing trillions of dollars in AI infrastructure and are, for the most part, investing in each other. The revenue that justifies the investment is, to a significant degree, generated by the investment itself. This is not fraud. It is the endogenous dynamic of a boom in which the signal and the noise are produced by the same mechanism, and the participants cannot distinguish between them because the mechanism is the market they inhabit.
The fourth stage is profit-taking, in which the most sophisticated participants begin to recognize that the valuations have outrun the fundamentals and quietly reduce their exposure. The fifth stage is panic, in which the recognition spreads and the positions built during the euphoria are liquidated at whatever price the market will bear. Minsky did not claim that every cycle reaches the fifth stage. He claimed that the dynamics of the first three stages generate fragility that makes the system increasingly vulnerable to any disturbance — a policy change, a technical setback, a shift in sentiment — that could trigger the fourth and fifth.
The AI economy in mid-2026 displays the characteristics of a system somewhere between the second and third stages. The displacement is real. The boom is underway. The euphoria has not yet fully materialized, but the behavioral changes that precede euphoria — the restructuring, the acceleration of investment, the optimization of organizational life around the assumption of permanent AI augmentation — are visible in every sector that Segal documents.
The deepest application of Minsky's paradox to the AI economy operates not at the level of financial markets but at the level of organizational and individual behavior. When Segal describes the engineers in Trivandrum rebuilding their workflows around Claude Code, when he describes the backend engineer who started building user interfaces because the tool made it possible, when he describes the dissolution of specialist silos and the emergence of integrative, cross-domain work — he is describing rational adaptation to a genuine capability expansion. Each individual decision is sound. The aggregate effect is a system that has optimized itself for a condition that may not persist.
The optimization is the fragility. The more completely an organization restructures around AI-augmented productivity, the more dependent it becomes on the continued availability and reliability of that augmentation. The specialist knowledge that was dissolved does not reconstitute on demand. The deep expertise that was delegated to machines does not return when the machines degrade. The organizational structures that were dismantled in favor of "vector pods" and cross-functional integration cannot be rebuilt overnight if the tools that made integration possible are disrupted, degraded, or repriced.
Minsky would recognize this dynamic instantly. It is the same dynamic that made the banking system fragile in the years before 2008: the systematic reduction of margins of safety during a period when the margins appeared unnecessary, because the environment was calm and the margins had not been tested. The margins existed for the storm. The calm convinced everyone the storm would not come. The margins were eliminated. And when the storm came, the system discovered that the margins had been the only thing between stability and collapse.
The AI economy's margins of safety are the deep expertise, the specialist knowledge, the institutional memory, the human relationships, and the organizational redundancies that the boom is systematically eliminating because the boom makes them appear unnecessary. They are unnecessary — during the boom. They are essential during the correction. And the correction, in Minsky's framework, is not a possibility to be hedged against. It is a certainty to be prepared for, because the dynamics that generate it are endogenous to the boom itself.
The question, then, is not whether the AI productivity gains are real. They are real, as real as the railroad's ability to move goods faster than a canal, as real as the internet's ability to connect buyers and sellers across continents. The question is whether the structures being built on the assumption of those gains' permanence can withstand the moment when the gains plateau, regress, or simply stop accelerating at the rate the structures require. In Minsky's language: are the positions hedge, speculative, or Ponzi? And if the answer is increasingly speculative, with pockets of Ponzi, then the moment is approaching — not because AI has failed, but because AI has succeeded so spectacularly that the success itself has become the mechanism of fragility.
---
Minsky's taxonomy of financial positions is deceptively simple. Three categories. Three relationships between income and obligation. Three levels of vulnerability to disruption. The simplicity is the power, because the taxonomy does not describe pathological behavior. It describes rational behavior at each level, undertaken by actors who are responding sensibly to the signals in their environment. The pathology is not in any individual position. It is in the aggregate — in the way the distribution of positions across the system shifts, over time, from predominantly hedge to predominantly speculative, with an increasing tail of Ponzi. The shift is invisible from inside, because each actor sees only her own position and the local signals that justify it. The systemic fragility emerges from the composition of individually rational choices.
Applied to the AI economy, Minsky's taxonomy produces a classification that is both diagnostic and predictive. It reveals where the system is strong, where it is stretched, and where it is standing on nothing at all.
A hedge position, in Minsky's original formulation, is one in which the expected income stream from an asset is sufficient to cover all contractual obligations — both interest and principal — in every period. The hedge borrower does not need to refinance. She does not need asset prices to appreciate. She does not need the market to remain accommodating. Her position is self-sustaining under all foreseeable conditions. She has margins of safety. If income falls short of expectations, the margins absorb the shortfall. The hedge position is the foundation of financial stability.
In the AI economy, a hedge position is an organization or individual whose productivity gains from AI are genuine, documented, and sustainable — and, critically, whose continued operation does not depend on those gains increasing at their current rate. The hedge organization uses AI to do existing work more efficiently. It has maintained the skills, the relationships, the institutional knowledge, and the organizational redundancies that would sustain its operation if the AI tools degraded, plateaued, or became significantly more expensive. It has captured the gains of the boom without betting its survival on the boom's continuation.
The builder who uses Claude Code to accelerate her workflow while continuing to understand the code the tool produces, who maintains the debugging skills and architectural intuition that would allow her to work without the tool if necessary, who treats the productivity multiplier as a welcome enhancement rather than a structural dependency — this builder is in a hedge position. Her career can withstand a correction. The margin of safety is the expertise she has not delegated.
Segal's description of his own engineering team at Napster after the Trivandrum training suggests an organization attempting to maintain a hedge position. The team was kept. The capabilities were expanded. The productivity gains were captured, but the human capital was retained as a reserve against future disruption. Segal describes this as fighting against the incentive structure of the market, which is precisely Minsky's point: the market rewards the speculative position during the boom, because the speculative position produces higher short-term returns. The hedge position sacrifices margin for resilience, and resilience is unrewarded until the moment it is the only thing that matters.
A speculative position, in Minsky's taxonomy, is one in which the expected income stream covers interest payments but not principal repayment. The speculative borrower must refinance — must roll over the debt, must find a new lender willing to extend credit, must keep the game going — in order to remain solvent. The position works as long as credit markets are accommodating, which they typically are during booms, because the boom generates the confidence that makes lenders willing to extend credit. The speculative position is rational during the boom. It becomes fragile the moment the boom's conditions change.
In the AI economy, a speculative position is an organization that has restructured around AI-augmented productivity and cannot sustain its commitments at pre-AI levels of output. The company that has reduced its engineering team from fifty to five, betting that five people with Claude Code can maintain and extend what fifty built, occupies a speculative position. The position works — spectacularly well — as long as the AI tools continue to function at their current level. If the tools degrade, if pricing changes dramatically, if a regulatory intervention restricts usage, the company cannot reconstitute its engineering capability at the speed the business requires. The specialist knowledge that was eliminated does not exist in latent form, waiting to be reactivated. It walked out the door with the forty-five engineers who were let go.
The speculative organization's "refinancing" is continuous AI improvement. Each quarter, the tools improve, and the improvement validates the restructuring. The validation encourages further restructuring. The restructuring generates further dependency. The dependency is invisible because it is never tested — the tools keep improving, the market keeps rewarding the efficiency, the quarterly numbers keep climbing. The speculative organization looks like the most sophisticated actor in the market. It is also the most exposed.
The Certuity research group, in an October 2025 analysis titled "Are We in an AI Bubble?," identified the same dynamic at the financial level: wildly successful mega-cap technology companies pledging trillions of dollars of AI infrastructure investment, with the observation that they were largely investing in each other. The revenue streams that justified the investment were, to a significant degree, circular — generated by the same ecosystem of companies doing the investing. The income covered the current obligations, but the principal — the trillions committed to future infrastructure — depended on continued market expansion that had not yet been independently validated. This is speculative finance applied to the technology sector's balance sheet: the income covers the interest, but the principal requires refinancing, and the refinancing requires the boom to continue.
A Ponzi position, in Minsky's taxonomy, is one in which the expected income stream covers neither interest nor principal. The entire position depends on continued asset appreciation — rising prices, expanding markets, accelerating growth. The Ponzi actor is not necessarily a con artist. More often, the Ponzi actor is someone who built a reasonable position during the hedge phase, stretched it during the speculative phase, and now finds herself in a situation where only continued acceleration can prevent collapse. The position was arrived at incrementally, through a series of individually rational decisions, each of which looked defensible in the environment that prevailed when it was made. The aggregate is insolvency disguised by momentum.
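Stripped of narrative, the taxonomy reduces to two cash-flow comparisons per period. Here is a minimal sketch in Python, with hypothetical figures chosen purely for illustration — they correspond to no company discussed in this book:

```python
from dataclasses import dataclass

@dataclass
class Position:
    """One actor's financing position, in Minsky's cash-flow terms."""
    name: str
    income: float     # expected income per period
    interest: float   # interest due per period
    principal: float  # principal repayment due per period

    def classify(self) -> str:
        # Hedge: income covers interest and principal in every period.
        if self.income >= self.interest + self.principal:
            return "hedge"
        # Speculative: income covers interest but not principal,
        # so the principal must be continually refinanced.
        if self.income >= self.interest:
            return "speculative"
        # Ponzi: income covers neither; the position survives only
        # while the underlying asset keeps appreciating.
        return "ponzi"

# Hypothetical positions (units arbitrary):
for p in [
    Position("ecosystem SaaS vendor", income=100, interest=30, principal=40),
    Position("restructured five-person firm", income=100, interest=80, principal=60),
    Position("pre-revenue infrastructure bet", income=10, interest=40, principal=120),
]:
    print(f"{p.name}: {p.classify()}")
```

The point of the sketch is how little machinery the classification requires. The fragility Minsky described lives not in the thresholds but in the distribution — in how many positions sit on each side of them, and how that distribution drifts during a boom.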
In the AI economy, Ponzi positions are emerging in several domains. The career built entirely on prompt engineering — a skill set that exists only in the context of current-generation AI tools and would be worthless if those tools were superseded by architectures that do not require human prompting — is a Ponzi position. The startup whose entire business model depends on continued exponential improvement in AI capability, with no viable path to profitability if improvement merely continues at a linear rate, is a Ponzi position. The educational program that has abandoned traditional skill-building in favor of exclusive AI-augmented workflows, producing graduates who can operate the tools but cannot function without them, is a Ponzi position.
The most revealing Ponzi dynamic in the current AI economy is in infrastructure investment. OpenAI's commitment to spending $1.4 trillion over eight years on data centers, against $13 billion in annual revenue, is a position that can only be sustained through continued asset appreciation — in this case, the appreciation of AI's perceived value to the market, which justifies continued investment from external sources. OpenAI projected annual losses through 2028, including $74 billion in operating losses in 2028 alone. The cash flows cover neither interest nor principal. The position is sustained entirely by the market's belief that AI's value will continue to appreciate at a rate that eventually justifies the investment. This is, in the precise technical sense that Minsky defined, a Ponzi financial position.
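The scale of that mismatch is visible in back-of-envelope arithmetic using only the figures quoted above — a sketch that assumes, simplistically, that the eight-year commitment is spread evenly (the actual payment schedule is not public):

```python
# Figures as quoted above; the even annual spread is an assumption.
commitment_total = 1.4e12  # $1.4 trillion committed over eight years
years = 8
annual_revenue = 13e9      # $13 billion in annual revenue

annual_obligation = commitment_total / years   # $175 billion per year
coverage = annual_revenue / annual_obligation  # roughly 0.07

print(f"annualized obligation: ${annual_obligation / 1e9:.0f}B")
print(f"revenue covers {coverage:.0%} of it")
```

On these assumptions, current revenue covers roughly seven percent of the annualized obligation. The other ninety-three percent must come from somewhere other than income — which is what it means, in Minsky's vocabulary, for a position to depend on continued appreciation.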
The National Bureau of Economic Research published a study in February 2026 that added empirical weight to the concern. Despite ninety percent of firms reporting no measurable impact of AI on workplace productivity, executives at those same firms projected AI to increase productivity by 1.4 percent. The gap between reported experience and projected expectation is itself a Minskyan signal: the projection is sustained not by evidence but by the environment's collective confidence that the evidence will eventually materialize. The expectation is doing the work that evidence should be doing. This is how speculative positions are maintained during booms — through expectations that the boom itself generates.
Minsky's taxonomy is not a moral classification. The hedge actor is not virtuous and the Ponzi actor is not wicked. They are positions on a spectrum of vulnerability, and the distribution across the spectrum determines the system's aggregate fragility. A system dominated by hedge positions is resilient: it can absorb shocks, weather corrections, and reorganize without systemic collapse. A system in which speculative and Ponzi positions have proliferated is fragile: a disturbance that would be absorbed by a hedge-dominated system triggers cascading failures in a system where too many actors depend on conditions that the disturbance has disrupted.
The distribution in the AI economy is shifting. The early adopters, the ones who integrated AI tools into existing workflows without dismantling their prior capabilities, were predominantly in hedge positions. As the boom progresses, as the market rewards acceleration and punishes caution, as the quarterly arithmetic of headcount reduction becomes increasingly difficult to resist, the distribution is moving toward speculative. The Ponzi positions at the infrastructure and startup level are growing. The system is generating the fragility that Minsky's framework predicts — not through malice or stupidity, but through the rational response of individual actors to an environment that rewards risk-taking and punishes prudence.
The question the taxonomy poses is not whether any individual position will fail. Some will; some will not. The question is whether the aggregate distribution of positions across the system has shifted far enough toward speculative and Ponzi that a correction — any correction, from any source — would trigger cascading failures rather than isolated adjustments. And the honest answer, in mid-2026, is that no one knows. But Minsky's framework suggests that the longer the question goes unasked, the more likely the answer becomes unfavorable — because the boom itself is the mechanism that shifts the distribution, and the boom is still accelerating.
---
The most counterintuitive element of Hyman Minsky's financial instability hypothesis — the element that made his work so easy to ignore during calm periods and so difficult to dismiss during crises — is the claim that financial fragility is endogenous. It is not imported from outside the system by wars, policy blunders, or natural disasters. It is generated inside the system, by the system's own success, through the normal operation of profit-seeking in an environment that rewards risk-taking when risk appears to have diminished.
The external shock, when it arrives, is the trigger. But the fragility was already there, deposited layer by layer during the boom, invisible because the boom provided no occasion to test it. The 2008 financial crisis was triggered by the collapse of the subprime mortgage market, but the fragility that made subprime's collapse systemic rather than contained had been building for a decade — in the proliferation of mortgage-backed securities whose risks were poorly understood, in the leverage ratios of investment banks that had climbed to levels sustainable only in calm markets, in the credit default swaps that were supposed to distribute risk but instead concentrated it in institutions whose interconnection ensured that one failure would cascade through the entire system. None of these fragilities were imposed from outside. They were generated internally, by actors responding rationally to an environment in which housing prices had been rising for years and the models said they would continue to rise.
The AI economy is generating fragility through mechanisms that parallel the financial case with uncomfortable precision. The mechanisms are not financial in the narrow sense — they operate at the level of human capital, organizational structure, knowledge systems, and institutional capacity rather than balance sheets and cash flows. But the dynamic is identical: success breeds confidence, confidence breeds optimization, optimization eliminates the margins of safety that the system needs during stress but that appear unnecessary during calm. The fragility accumulates silently, validated at every step by the continuing success that is simultaneously producing it.
The first mechanism is the erosion of deep expertise. Segal documents this with a specificity that Minsky's framework illuminates. The senior engineer in Trivandrum whose architectural intuition was built through thousands of hours of debugging — that intuition is not being deposited in the next generation. The backend engineer who used to spend four hours a day on "plumbing" — dependency management, configuration files, the connective tissue between system components — lost both the tedium and the ten minutes of genuinely formative struggle mixed into the tedium when Claude Code took over. The tedium she was glad to lose. The ten minutes she did not know she had lost until months later, when she found herself making architectural decisions with less confidence than before and could not explain why.
Minsky would recognize this instantly as the erosion of margins of safety. The deep expertise — the embodied understanding of how systems behave under stress, the intuition about what will break and where, the knowledge that lives in the hands and the nervous system rather than in documentation — is the system's margin of safety against technical disruption. It exists to absorb shocks. It is the thing that allows a senior engineer to diagnose a production failure at 3 a.m. when the monitoring dashboards show nothing useful, because she has felt the system's failure modes in her body over the course of a decade of patient, friction-rich interaction.
When AI eliminates the friction that builds this expertise, the margin of safety erodes. Not immediately. Not visibly. The systems continue to function. The dashboards remain green. The quarterly metrics improve. But the reserve of human understanding that would be mobilized during a crisis — an AI tool outage, a subtle model degradation, a security vulnerability in AI-generated code that no human reviewed carefully enough to detect — is shrinking. The erosion is invisible during normal operations because normal operations do not require the reserve. It becomes visible only during the crisis, at which point it is too late to rebuild.
A February 2026 study from the National Bureau of Economic Research found that ninety percent of firms reported no measurable impact of AI on workplace productivity despite significant investment. This finding, which might appear to contradict the productivity thesis, actually supports the fragility thesis from a different angle: the investment is being made, the restructuring is underway, the margins of safety are being eliminated — and the returns have not yet materialized for the vast majority of adopters. The positions are speculative: the obligations have been incurred, but the income depends on future improvements that have not yet arrived. The gap between commitment and return is itself a form of fragility. If the returns arrive, the positions are vindicated. If they do not, the commitments remain.
The second mechanism is the dissolution of specialist knowledge. Segal describes this as one of the three shifts reshaping professional life: the specialist silo is dissolving, replaced by integrators who operate across domains with AI assistance. The integrator is more productive than the specialist — in the AI-augmented environment. She can build frontend interfaces and backend systems and design user experiences and analyze business metrics, all in the same afternoon, because the tool handles the domain-specific implementation and she provides the cross-domain judgment.
But the integrator's capability is coupled to the tool's availability. Remove the tool, and the integrator is not a specialist in any domain. She is a generalist without the tool that made generalism productive. The specialist who was replaced, meanwhile, is gone — not on standby, not in reserve, but dispersed into other roles or other industries, her specialized knowledge degrading through disuse. The organizational capacity for deep, tool-independent work in any single domain has diminished, and the diminishment is the direct consequence of the productivity gain that the integration represents.
This is endogenous fragility in its purest form. The productivity gain and the fragility are not two separate phenomena occurring simultaneously. They are the same phenomenon viewed from two temporal perspectives. In the present, the integration is a gain. In the future correction, the integration is a vulnerability. The system cannot have the gain without the vulnerability, because the gain is achieved by eliminating the redundancy that the vulnerability measures.
The third mechanism is the speed of untested organizational restructuring. Segal describes companies reorganizing around "vector pods" — small groups whose function is not to build but to decide what should be built, with AI tools handling the execution. Five years ago, this structure would have been incoherent. Today it is being adopted at the speed of competitive necessity. But the structure has not been tested by adversity. It has existed only during the boom — the period when AI tools function reliably, when capability improves quarterly, when the environment validates every decision that assumes continued improvement.
Minsky observed the same dynamic in financial institutions before 2008. New organizational structures — the trading desk that operated with unprecedented autonomy, the risk management function that relied entirely on quantitative models, the originate-to-distribute lending model that separated the creation of mortgages from the bearing of their risk — were adopted rapidly because they produced superior returns during the boom. They had not been tested by a correction because there had not been a correction. When the correction arrived, the structures failed in ways that their designers had not anticipated, because the designers had optimized for the boom's conditions and had no data about how their structures would perform under stress.
The vector pod that decides what to build, relying on AI to handle execution, is an untested organizational structure. It functions brilliantly when the AI executes reliably. Its performance under conditions of AI degradation — slower response times, reduced capability, increased error rates, model regression — is unknown. The unknown is itself the fragility. A structure that has been tested by adversity and survived has demonstrated resilience. A structure that has only existed during favorable conditions is, in Minsky's framework, a speculative position: it works as long as conditions hold.
The fourth mechanism is the most insidious, and it operates at the level of the system as a whole rather than at the level of individual organizations. It is the progressive elimination of institutional diversity. As the AI boom progresses, organizations converge on similar structures, similar tools, similar workflows, similar dependencies. The convergence is rational: the tools that produce the best results are adopted widely, and the organizational structures that best exploit those tools are imitated across the industry. The result is a system in which many organizations have similar vulnerability profiles — similar dependencies on the same AI providers, similar reductions in deep expertise, similar dissolution of specialist knowledge, similar untested organizational structures.
In ecology, this is called a monoculture, and it is the most fragile configuration a system can assume. A diverse ecosystem can absorb the loss of any single species because other species fill the niche. A monoculture collapses entirely when the single species fails, because there is nothing to substitute. In finance, the convergence of risk models before 2008 — the fact that most major financial institutions were using similar models to assess similar risks, producing similar portfolio structures — meant that when the models failed, they failed everywhere simultaneously, and the system had no internal diversity to absorb the shock.
The AI economy is converging. Organizations across industries are adopting the same tools, restructuring around the same productivity assumptions, and eliminating the same categories of deep expertise. The convergence produces efficiency during the boom and fragility during the correction. It is the endogenous generation of systemic risk through the rational pursuit of individual optimization — which is the financial instability hypothesis translated from balance sheets to organizational design.
The compounding of these four mechanisms — the erosion of deep expertise, the dissolution of specialist knowledge, the proliferation of untested organizational structures, and the reduction of institutional diversity — produces a system that is increasingly optimized for the current conditions and increasingly vulnerable to any change in those conditions. The optimization is the fragility. The success is the risk. The boom is building, brick by rational brick, the structure that the correction will test — and the structure has been built without margins of safety, because the boom made margins appear unnecessary.
Minsky argued that this dynamic could be moderated but not eliminated. The tendency of success to breed fragility is structural, a feature of complex adaptive systems rather than a failure of individual judgment. What can be changed is the institutional response — the dams, in Segal's language, that moderate the boom and maintain reserves for the correction. But the dams must be built during the boom, when the political and organizational will to build them is lowest, because the boom itself generates the confidence that makes dam-building seem unnecessary. The time to reinforce the levee is when the river is calm. The calm is precisely when no one wants to pay for levees.
The AI economy's levees are the deep expertise that is being eroded, the specialist knowledge that is being dissolved, the organizational redundancies that are being eliminated, and the institutional diversity that is being reduced. Each of these is a margin of safety. Each is being consumed by the boom. And the consumption is endogenous — generated not by any external threat but by the system's own spectacular success.
---
On February 23, 2026, Anthropic published a blog post about Claude's ability to modernize COBOL — the decades-old programming language that still runs a significant fraction of the world's banking, insurance, and government systems. IBM suffered its largest single-day stock decline in more than a quarter century. Not because IBM's systems had stopped working. Not because a competitor had launched a superior product. Because the market, in a single trading session, repriced the value of an entire category of institutional expertise.
The blog post was the trigger. The fragility had been building for years.
Segal calls it the Software Death Cross: the moment, projected around 2027, when the aggregate market value of artificial intelligence overtakes the aggregate market value of software-as-a-service. By February 2026, the leading indicators were already acute. Workday had fallen thirty-five percent. Adobe had lost a quarter of its value. Salesforce had dropped twenty-five percent. Autodesk twenty-one percent. The market was engaged in a repricing so rapid and so comprehensive that the financial press reached for its ugliest terminology: the SaaSpocalypse.
Minsky's framework provides the precise analytical instrument for understanding what happened — not merely as a market correction, but as the structural revelation of fragility that had been accumulating, invisibly, during a two-decade boom in software-as-a-service.
The SaaS model was, for most of its history, a hedge position of remarkable elegance. A software company built a product. Customers paid a recurring subscription. The subscription revenue was predictable, diversified, and self-sustaining. The income covered all obligations — development costs, infrastructure, sales, support — with comfortable margins. The model did not depend on asset appreciation or market expansion. It depended on the continued utility of the software to the customer, which was reinforced by switching costs, data lock-in, and the organizational inertia that makes replacing enterprise software roughly as enjoyable as a root canal.
The hedge position was sustained by a single underlying assumption: code is hard to write. The difficulty of producing software was the moat that protected the subscription. A customer might prefer a different CRM system, but building one from scratch required years of engineering effort, millions of dollars in investment, and the organizational risk of deploying untested infrastructure in a production environment. The subscription was cheaper than the alternative. The moat held.
During the SaaS boom of 2015 through 2021, the positions shifted. Valuations climbed from reasonable multiples of revenue to extraordinary ones — reaching 18.5 times revenue during the COVID-era peak, when lockdowns turned every enterprise into a software buyer overnight. At 18.5 times revenue, a SaaS company's valuation was no longer a bet on current cash flows. It was a bet on growth — continued expansion of the customer base, continued willingness of customers to pay subscription premiums, continued absence of competitive alternatives that could undercut the pricing. The positions had moved from hedge to speculative. The income covered current obligations, but the valuation — the principal, in financial terms — required refinancing through continued growth. The companies were solvent but stretched. The stretch was invisible because the growth continued.
Then AI arrived and invalidated the underlying assumption.
When a competent developer can describe a desired software application in natural language and receive a working implementation in hours, the difficulty of writing code — the moat that protected every SaaS subscription — does not merely narrow. It drains. The customer who was paying Salesforce two hundred dollars per seat per month because building an alternative CRM was prohibitively expensive now has access to tools that make building an alternative CRM a weekend project. The moat has not been breached by a competitor. It has evaporated because the ground the moat was dug into turned out to be sand.
Howard Marks of Oaktree Capital, in an investment memo titled "Is It a Bubble?," invoked Minsky's framework explicitly. Marks pointed to what economist Paul Kedrosky called the Minsky moment in AI infrastructure: the inflection point where credit expansion exhausts its good projects and begins chasing bad ones, funding marginal deals with vendor financing and questionable coverage ratios. For AI infrastructure, Marks observed, that shift might already be underway — the telltale signs including hyperscalers' capital expenditure outpacing revenue momentum and lenders sweetening terms to sustain the expansion.
But the Software Death Cross is a Minsky moment of a different kind — not in the AI infrastructure market, but in the SaaS market that AI infrastructure is displacing. The trillion dollars of market value that vanished from software companies did not vanish because the companies had suddenly become less capable. The systems still functioned. The customers still used them. The revenue, for the moment, still flowed. The value vanished because the market recognized, with the brutal clarity that markets occasionally achieve, that the positions had been speculative all along — sustained not by the inherent difficulty of the product but by the historical difficulty of producing alternatives. The moment the historical difficulty was removed, the speculative nature of the positions was exposed, and the repricing was immediate.
Segal draws a distinction that Minsky's framework sharpens. The code is the thing AI can reproduce in an afternoon. The ecosystem is the thing that matters. The companies that survive the Death Cross are those whose value was never primarily in the code — whose real moat was in the data layer accumulated through two decades of enterprise deployment, in the integrations connecting sales pipelines to marketing automation to customer service to financial reporting, in the workflow assumptions embedded in the organizational muscle memory of every enterprise that trained its people on the platform, in the compliance certifications and audit trails and security guarantees that required years of institutional effort to obtain.
In Minsky's terms, these are the hedge positions within the SaaS sector: companies whose value does not depend on the continued difficulty of code production. Their income stream — the subscription revenue from customers who depend on the ecosystem, not merely the code — covers all obligations regardless of what happens to the cost of writing software. The ecosystem is the margin of safety. It is the reserve that absorbs the shock of the transition.
The companies that do not survive are those whose value was always in the code — thin applications that solved singular problems without building ecosystems around them. A project management tool that provides a Kanban board and a task list. A scheduling application that connects calendars. A form builder that collects responses. Each of these is reproducible by a developer with Claude Code in an afternoon. Their subscriptions were sustained by the difficulty of the alternative, and the difficulty has evaporated. Their positions were speculative: the income covered current operations, but the principal — the valuation, the investment thesis, the career trajectories of their employees — depended on a barrier that was always going to fall. The Death Cross revealed what was already true.
Minsky observed that the distribution of losses in a financial crisis is never random. It follows the topology of the positions. Hedge positions survive. Speculative positions suffer but can reorganize if they have access to liquidity — to a lender of last resort, to reserves, to a market willing to refinance. Ponzi positions collapse entirely, because they have no income stream to fall back on and no asset that retains value when appreciation reverses.
The same topology applies to the Software Death Cross. The SaaS companies in hedge positions — those whose ecosystems are genuine, whose data layers are deep, whose institutional relationships are irreplaceable — will survive the repricing and may emerge stronger, because the elimination of code-layer competitors reduces noise in the market and increases the premium on the ecosystem layer. Salesforce, despite its twenty-five percent decline, retains a customer base, a data layer, and an integration ecosystem that no AI tool can reproduce in an afternoon. The repricing is painful but not terminal. The position was hedge, even if the valuation had drifted into speculative territory during the bubble.
The companies in speculative positions — those with real but vulnerable products, meaningful but not irreplaceable customer relationships, useful but reproducible functionality — will face the restructuring that Minsky's framework predicts for speculative actors during a correction. They will need to refinance: find new sources of value, rebuild their moats at a higher layer, convince the market that their post-code value proposition justifies continued investment. Some will succeed. Others will discover that their speculative positions were closer to Ponzi than they realized — that the income stream, once the code-layer subscription erosion begins, does not cover even the current obligations.
The companies in Ponzi positions — those whose entire value was the code, whose subscriptions were sustained by nothing more than the historical difficulty of building an alternative — will not survive the correction. The income covers nothing. The asset does not appreciate. The position collapses because there is nothing beneath it. These are the companies that will appear in the business press's post-mortems, cited as evidence that the Death Cross was a catastrophe. They were always going to fail. The Death Cross merely determined the timing.
The broader lesson of the Software Death Cross, viewed through Minsky's framework, is that the fragility was present long before the trigger. The SaaS industry's valuations had been speculative since at least 2019. The moat of code difficulty was always historically contingent — a barrier that existed because of the current state of technology, not because of any permanent feature of the market. The arrival of AI tools that could write code through natural-language conversation was not an unforeseeable shock. It was the logical conclusion of a trend that had been underway for decades: each generation of programming tools reduced the difficulty of producing software, and each reduction narrowed the moat. AI did not breach the moat. It completed a process of erosion that had been in progress since the first compiler abstracted away assembly language.
The companies that understood this — that recognized their moat was eroding and invested in ecosystem, data, institutional relationships, and integration depth rather than relying on the continued difficulty of code — were building margins of safety. They were, in Minsky's terms, maintaining hedge positions even as the market around them drifted speculative. The companies that did not understand it — that continued to operate as though the difficulty of code were a permanent moat — were accumulating fragility with every quarter of unchanged strategy.
The Death Cross is a local Minsky moment — a sector-level repricing that reveals the speculative nature of positions built during a two-decade boom. Whether it cascades into a broader correction depends on the interconnections between the SaaS sector and the wider economy. The SaaS industry employed hundreds of thousands of highly compensated knowledge workers whose consumption supported local economies, whose stock compensation supported housing markets, whose career expectations shaped educational investments. A significant and sustained repricing of the sector will produce effects that extend far beyond software company balance sheets.
Whether those effects are absorbed or amplified depends, as Minsky always argued, on the institutional structures in place to moderate them. The dams, in Segal's language. The stabilizers, in Minsky's. The question is not whether the Death Cross happened — it did. The question is what happens next, and the answer depends not on the technology but on the institutions.
Hyman Minsky built his framework around institutions — banks, firms, governments, the interconnected balance sheets of a capitalist economy. He did not write about individuals in the psychological sense. He did not concern himself with what happened inside the mind of the trader who doubled down, the executive who approved the leverage, the homeowner who signed the adjustable-rate mortgage. His unit of analysis was the position, not the person. The cash flow, not the consciousness.
But the logic of the financial instability hypothesis does not require institutional scale to operate. The progression from hedge to speculative to Ponzi is a dynamic of any system in which success generates confidence, confidence generates increased commitment, and increased commitment generates dependency on conditions that the actor cannot control. The system can be a banking sector. It can be a national economy. It can be a single human being, sitting at a desk at three in the morning, building something with an AI tool, unable to stop.
Segal's confession in *The Orange Pill* — the transatlantic flight on which he wrote a hundred-and-eighty-seven-page draft, the nights when the exhilaration drained away and what remained was grinding compulsion, the recognition that the whip and the hand that held it belonged to the same person — describes a trajectory that Minsky's taxonomy classifies with uncomfortable precision. The builder's relationship with the tool progressed through the same three stages that Minsky identified in the relationship between a borrower and the financial system. And the progression was driven by the same mechanism: success breeding confidence, confidence breeding escalation, escalation breeding a dependency that could not be unwound without confronting a deficit the dependency was designed to conceal.
The hedge phase of the builder's engagement is flow. Mihaly Csikszentmihalyi's conditions are met: clear goals, immediate feedback, challenge matched to skill, a sense of control over the process. The work is voluntary. The engagement is genuine. The builder could stop — and chooses not to, because the work is producing something real, something that matters, something that connects intention to artifact with a directness that no previous tool provided. The income, in psychological terms, covers all obligations. The satisfaction of the work sustains the effort. There is margin — time for relationships, for rest, for the parts of life that are not production. The builder is in surplus. The position is self-sustaining.
This phase is real. It is the phase that Segal describes when he writes about the days when the questions are generative, when the work expands outward, when each connection opens a more interesting line of inquiry than the last. It is the phase that Csikszentmihalyi documented across decades of research — the state in which human beings report the highest levels of satisfaction, purpose, and vitality. The hedge phase is not pathological. It is the opposite of pathological. It is the human being operating at her best, directed by will rather than compulsion, producing work that justifies the effort required to produce it.
The speculative turn is subtle. It occurs when the satisfaction begins to depend not on the quality of the current session but on the continuation of sessions. The builder no longer asks, "Is this session worthwhile?" She asks, implicitly and without articulating the question, "What would happen if I stopped?" The question marks the transition. In the hedge phase, stopping is a choice with no penalty — the work was satisfying, and the builder returns to the rest of her life replenished. In the speculative phase, stopping carries a cost: the anxiety of falling behind, the fear that the momentum will be lost, the suspicion that the self revealed by the absence of production is less impressive than the self sustained by its presence.
The Berkeley researchers documented the behavioral signatures of this transition without framing it in financial terms. Task seepage — the colonization of lunch breaks, elevator rides, waiting rooms by AI-assisted work — is the speculative borrower's refinancing. Each micro-session is a roll-over, a renewal of the commitment, a reassurance that the position remains solvent. The builder is not choosing to work during lunch. She is refinancing — maintaining the production stream that has become necessary to sustain a psychological position that is no longer self-supporting from the current session's satisfaction alone.
The speculative phase is where the external behavior becomes indistinguishable from flow. A camera pointed at a builder in speculative engagement and a camera pointed at a builder in genuine flow would record identical images: intense focus, loss of time awareness, rapid iteration, visible output. The distinction is internal. In flow, the engagement produces energy. In speculative engagement, the engagement consumes reserves that were accumulated during the hedge phase — the stored capacity for rest, for relationships, for the cognitive recovery that sustains long-term creative capability. The position looks solvent. The reserves are depleting.
The Ponzi turn arrives when the satisfaction has fully evaporated and what remains is the compulsion to continue because the alternative — stopping, confronting the silence, facing whatever the production was designed to obscure — is more threatening than the exhaustion of continued production. The Ponzi builder is not working because the work is satisfying. She is working because the work is the only thing preventing the recognition that her identity has been entirely subsumed by her output, and that without the output, she does not know who she is.
This is the phase that Segal describes when he writes about the Atlantic crossing — the moment when he caught himself writing not because the book demanded it but because he could not stop, when the exhilaration had drained away hours ago and what remained was the grinding compulsion of a person who had confused productivity with aliveness. The income covers nothing. The satisfaction is gone. The relationships are strained. The cognitive reserves are depleted. The position is sustained entirely by continued production — by the appreciation of the asset, which in this case is not a financial instrument but the builder's self-image as someone who produces.
Like all Ponzi positions, this is unstable. The builder will crash. The crash takes predictable forms: burnout that manifests as physical illness, relationship dissolution, the grey fatigue that the Berkeley researchers documented — a flattening of affect, an erosion of empathy, a diminishment of the very capacities that made the builder's work valuable in the first place. The crash is not a failure of willpower. It is the inevitable consequence of a position that was sustained by nothing real — a position in which the appearance of solvency depended on continued acceleration, and the acceleration consumed the reserves that would have been needed to survive deceleration.
Minsky's financial framework adds a dimension to this psychological trajectory that purely psychological accounts miss: the role of the environment in sustaining and encouraging the progression. The builder does not progress from hedge to speculative to Ponzi in isolation. She does so in an environment that rewards the progression at every stage. The market — the professional market, the social media market, the organizational market for recognition and advancement — rewards visible productivity. It rewards the builder who ships at three in the morning. It rewards the year-end summary that catalogs 2,639 hours of work with zero days off. It celebrates intensity without distinguishing between the intensity of flow and the intensity of desperation.
The environment is the credit market. Just as accommodating credit conditions enable the speculative borrower to refinance and the Ponzi borrower to persist, the professional environment's celebration of relentless productivity enables the speculative builder to continue and the Ponzi builder to avoid the reckoning. The social media post about working through the weekend is the refinancing event — the signal to the market that the position is still solvent, the performance of productivity that sustains the conditions under which productivity is rewarded. The market does not distinguish between solvency and insolvency. It sees only the output. The output continues. The position is validated. The progression deepens.
Byung-Chul Han diagnosed this dynamic in philosophical terms — the achievement subject who exploits herself and calls it freedom. Minsky's framework adds the mechanism. The achievement subject does not exploit herself in a vacuum. She does so because the environment rewards the exploitation, because every signal in the environment confirms that more is better and faster is safer, because the boom validates the intensity, and because the intensity validates the boom. The mechanism is endogenous: the environment that produces the compulsion is itself produced by the aggregate behavior of all the compulsive actors operating within it. The builder's Ponzi relationship with her own productivity is sustained by a market that is itself in a speculative or Ponzi relationship with AI-augmented output. The individual pathology and the systemic fragility are the same phenomenon at different scales.
The question Minsky's framework poses to the productive addict is not "Are you working too hard?" — a question that can always be deflected by pointing to the output. The question is: "What is the margin of safety?" What reserves exist between the current level of engagement and the level at which the position collapses? Can the builder sustain her current commitments if the tools degrade? If the creative momentum stalls? If the market stops rewarding the output? If the body signals that the pace cannot continue? If the answer is that the position depends on continued acceleration — that any deceleration would reveal a deficit that the acceleration was designed to conceal — then the position is Ponzi, regardless of how much output it produces.
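The financial version of the test can be stated mechanically. What follows is a minimal sketch in code, not Minsky's own formalism, of the conventional reading of his taxonomy: a position is classified by whether current income covers debt service. All figures in the example are invented for illustration.

```python
# A minimal sketch of the conventional reading of Minsky's taxonomy.
# Field names and figures are illustrative, not drawn from Minsky's texts.

def classify_position(income: float, interest_due: float,
                      principal_due: float) -> str:
    """Classify a financing position by cash flow against obligations.

    hedge:       income covers interest and principal; self-sustaining.
    speculative: income covers interest but not principal; must refinance.
    ponzi:       income covers neither; survives only on asset appreciation
                 or new borrowing.
    """
    if income >= interest_due + principal_due:
        return "hedge"
    if income >= interest_due:
        return "speculative"
    return "ponzi"

# Three invented positions against the same obligations:
print(classify_position(120, 40, 60))  # hedge
print(classify_position(50, 40, 60))   # speculative
print(classify_position(30, 40, 60))   # ponzi
```

The psychological analogue substitutes satisfaction for income and commitments for debt service; the classification logic, and the instability of the third state, carry over unchanged.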
The crash, when it comes, produces a binary outcome. In Minsky's financial framework, a Ponzi borrower who is caught by a lender of last resort — a central bank, a government program, an institutional backstop — can be restructured. The position is unwound in an orderly fashion. The losses are absorbed. The borrower emerges with a hedge position, reduced in scale but sustainable. Without a lender of last resort, the Ponzi borrower collapses entirely, and the collapse cascades to the creditors, the counterparties, the employees, the communities that depended on the borrower's continued operation.
The productive addict's lender of last resort is the network of relationships, institutions, and practices that can absorb the crash and facilitate reorganization. A partner who intervenes. A colleague who says, "You need to stop." An organizational culture that protects rest. A therapist who helps the builder disentangle identity from output. A personal practice of reflection — what the Berkeley researchers called "AI Practice" — that maintains the distinction between flow and compulsion. These are the stabilizers. They are the dams that prevent the individual crash from becoming a personal catastrophe.
And they are, as Segal notes, woefully underdeveloped. The professional culture that celebrates the three-in-the-morning session does not build stabilizers. It builds enablers. The organizational culture that measures productivity by visible output does not protect rest. It penalizes it. The market that rewards the year-end summary of relentless production does not distinguish between the builder who worked those hours in flow and the builder who worked them in desperation. It sees the output. The output is impressive. The position is validated. The progression continues.
Until it does not. The Minsky moment of the self is the morning the builder wakes up and cannot get out of bed. Not because the body is tired — the body has been tired for months. Because the psychological position has finally revealed itself as unsustainable. The income covers nothing. The asset has stopped appreciating. The self that was supposed to be waiting behind the production is not there, because the production consumed it.
The question is whether the crash produces reorganization — a return to hedge engagement, where the work is satisfying and the life is whole — or collapse. The answer depends on the stabilizers. It depends on the dams. And the dams are not being built at the speed the crisis requires, because the crisis is invisible from the outside. The output continues. The metrics climb. The builder appears to be thriving.
Minsky understood that this is how systems fail: from the inside, invisibly, while every external indicator confirms that things have never been better.
---
In the architecture of financial stability as Hyman Minsky conceived it, one institution stands between a localized failure and a systemic catastrophe: the lender of last resort. The concept is old — Walter Bagehot articulated its principles in *Lombard Street* in 1873 — but Minsky gave it a specific and indispensable role within his theory. The lender of last resort does not prevent crises. Crises are endogenous; they will come regardless. The lender of last resort prevents crises from cascading. When a bank fails — when its speculative or Ponzi positions are revealed as unsustainable and it cannot meet its obligations — the lender of last resort provides liquidity. Not to save the bank, necessarily, but to prevent the bank's failure from triggering failures at its counterparties, which trigger failures at their counterparties, until the entire system seizes and the real economy — employment, production, consumption — bears the cost of a financial architecture's collapse.
The lender of last resort is the dam between a local flood and a systemic deluge. Its existence does not eliminate the flood. It contains it.
The AI economy has no lender of last resort.
This is not a rhetorical observation. It is a structural diagnosis with specific, traceable consequences. When the correction comes — when AI capability plateaus, or a major model provider fails, or regulatory intervention restricts deployment, or the market simply recalibrates its expectations downward — the failures that follow will cascade in a system that has no institutional mechanism for containment. The displaced worker has no institution with the mandate and the capacity to retrain her at the speed the transition requires. The restructured organization that discovers its five-person team cannot maintain what fifty people built has no reserve of institutional knowledge to draw upon. The student who graduated into a landscape that shifted beneath her feet during the four years of her education has no program that bridges the gap between the skills she was taught and the skills the post-correction market demands.
Each of these failures is local. Each, uncontained, becomes a vector for broader disruption. The displaced worker reduces her consumption. The reduced consumption affects local businesses. The local businesses reduce their workforce. The secondary displacements generate further consumption reduction. The cascade is the same mechanism that operates in financial crises — the difference is that in financial crises, the central bank can intervene with liquidity to break the chain, and in the AI economy, no analogous institution exists.
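The arithmetic of the cascade is the textbook Keynesian expenditure multiplier. A minimal sketch, with an assumed marginal propensity to consume of 0.8 (an illustrative figure, not an empirical estimate), shows how an initial income loss compounds through successive spending rounds:

```python
# A minimal sketch of the consumption cascade: an initial income loss
# propagates through successive spending rounds. The marginal propensity
# to consume (mpc) of 0.8 is an illustrative assumption.

def cascade(initial_loss: float, mpc: float = 0.8, rounds: int = 50) -> float:
    """Sum income losses across spending rounds.

    Round 0 is the displaced worker's lost income; each later round is
    revenue lost by the businesses that would have received that spending.
    The sum converges to initial_loss / (1 - mpc), the textbook multiplier.
    """
    total, loss = 0.0, initial_loss
    for _ in range(rounds):
        total += loss
        loss *= mpc  # only the fraction that would have been spent propagates
    return total

print(cascade(1_000.0))  # ~4999.93: a 1,000 loss becomes a ~5,000 contraction
```

The central bank's intervention works by interrupting this chain early, before the rounds accumulate. Absent an interrupting institution, the rounds simply run.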
Minsky spent his career arguing that the absence of adequate institutional stabilizers is itself the primary risk to economic stability. The technology does not determine the outcome. The institutions determine the outcome. A powerful technology deployed in a system with robust stabilizers produces disruption that is absorbed and channeled toward broadly shared improvement. The same technology deployed in a system without stabilizers produces disruption that cascades, concentrates losses among those least able to absorb them, and generates social instability disproportionate to the technical correction that triggered it.
The historical evidence is unambiguous. The industrial revolution produced genuine, transformative productivity gains — and it also produced the Luddite period, the displacement of skilled craftsmen, the child labor that mechanized factories enabled, the sixteen-hour workdays that electricity made possible. The gains were real. The question was never whether the gains would materialize. The question was who would capture them and who would bear the cost of the transition. The answer, for the first several decades, was that the gains were captured by factory owners and the costs were borne by workers. The institutional stabilizers — the eight-hour day, the weekend, child labor laws, collective bargaining rights — were built later, after the damage had been done, after a generation had borne the full cost of a transition that institutional structures could have moderated.
Minsky would observe that the AI economy is repeating this pattern with the precision of a system that has learned nothing from its history. The gains are being captured. The stabilizers are not being built. The gap between the speed of capability and the speed of institutional response is, as Segal notes, widening rather than narrowing.
Consider the specific case of workforce displacement. The twenty-fold productivity multiplier means, at its most direct, that one person with AI tools can produce what twenty produced before. The arithmetic does not require that twenty people lose their jobs — it requires that nineteen of those twenty redirect their effort toward work that justifies their continued employment. Some organizations, like Segal's, choose to redirect. They keep the team, expand the scope, invest in capability development. This is the hedge response. But the market rewards the alternative: reduce the headcount, capture the margin, report the efficiency to the board. The speculative response.
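The two responses reduce to a line of arithmetic each. The sketch below assumes the twenty-fold multiplier cited above and a hypothetical twenty-person team producing one unit of output per head; the figures are illustrative, not drawn from any firm Segal describes.

```python
# A minimal arithmetic sketch of the two responses to a productivity
# multiplier. The 20x figure is the one cited in the text; the team size
# and output units are illustrative assumptions.

MULTIPLIER = 20

def hedge_response(team: int) -> tuple[int, int]:
    """Keep the team and expand the scope: same headcount, 20x the output."""
    return team, team * MULTIPLIER

def speculative_response(team: int) -> tuple[int, int]:
    """Capture the margin: yesterday's output, one-twentieth the headcount."""
    return max(1, team // MULTIPLIER), team  # old output was one unit per head

print(hedge_response(20))        # (20, 400): twenty people, twenty-fold output
print(speculative_response(20))  # (1, 20): one person, yesterday's output
```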
When the speculative response is adopted at scale — across an industry, across an economy — the displaced workers accumulate. Each individual displacement is manageable. The worker finds another job, retrains, adapts. But the aggregate displacement, occurring simultaneously across multiple sectors, in a labor market that is itself being restructured by the same technology, produces a volume of transition that no existing institution is equipped to process. The retraining programs are too slow. The educational institutions are teaching the wrong skills. The safety nets are designed for temporary unemployment, not structural displacement. The social services are calibrated to a labor market that no longer exists.
This is the absence of the lender of last resort made concrete. There is no institution with the mandate, the capacity, and the speed to intervene between the localized displacement and the systemic cascade. The Federal Reserve can inject liquidity into the banking system in hours. No institution can inject employability into the labor market at any comparable speed.
The EU AI Act and the American executive orders on artificial intelligence that Segal references are real regulatory structures, and they address real concerns. But they are supply-side regulations — constraints on what AI companies may build, what disclosures they must make, what risks they must assess before deployment. They regulate the production of the technology. They do not stabilize the consumption of its effects. The demand side — the workers, the students, the communities, the organizational cultures that are absorbing the impact of the technology — remains essentially unprotected by institutional structure.
Minsky argued throughout his career for what he called "Big Government" — not government that is intrusive or controlling, but government with the fiscal capacity to maintain aggregate demand during downturns and the institutional capacity to moderate the excesses of booms. Big Government, in Minsky's framework, is the ultimate lender of last resort — the institution whose spending sustains the economy when private spending collapses, whose programs retrain the displaced, whose investments maintain the infrastructure of social stability during periods of structural transition.
The AI economy needs Big Government in precisely this sense. Not to control the technology. Not to slow innovation. Not to protect incumbents against disruption. But to build the demand-side stabilizers that prevent localized AI-driven disruptions from cascading into systemic instability. Retraining programs that operate at the speed of the transition. Educational reforms that produce graduates capable of the judgment work that AI cannot perform rather than the execution work it already can. Labor protections that prevent the complete conversion of productivity gains into headcount reduction — not by prohibiting the conversion, but by incentivizing the alternative. Fiscal policies that distribute the gains broadly enough to maintain the aggregate demand that sustains the economy through the transition.
These are not anti-technology measures. They are pro-stability measures. Minsky understood — and his understanding was vindicated by every financial crisis of the late twentieth and early twenty-first centuries — that the free market, left to its own dynamics, generates the crises it claims to prevent. The market optimizes locally, at the level of the individual firm, for the current quarter's conditions. It does not optimize globally, at the level of the system, for the conditions that the current quarter's behavior is producing for the next year. The gap between local optimization and systemic stability is where the lender of last resort operates — not to override the market, but to absorb the shocks that the market's local optimization generates.
The AI economy's local optimization — each firm restructuring around the productivity multiplier, each firm reducing headcount, each firm accelerating deployment — is producing systemic effects that no individual firm is responsible for or capable of addressing. The individual firm's restructuring is rational. The aggregate restructuring is generating fragility. The gap between the individual rationality and the systemic risk is where the institutional stabilizers must operate.
Who catches the falling builder? In 2026, the honest answer is: almost no one. The professional culture celebrates the fall as a personal failure. The organizational culture replaces the fallen with someone who has not yet fallen. The safety net catches the unemployed but does not address the structurally displaced. The educational system trains for a world that existed four years ago when the curriculum was designed. The regulatory system constrains the supply of AI but does not stabilize the demand for the human capabilities that AI is reshaping.
Minsky would recognize this institutional vacuum. He documented its equivalent in the financial system repeatedly — the periods when deregulation dismantled the stabilizers that had been built after the previous crisis, creating the conditions for the next crisis, which would be more severe because the stabilizers were no longer in place. The dismantling was not malicious. It was driven by the observation that the stabilizers appeared unnecessary — because the system was stable. The stability was produced by the stabilizers. Remove the stabilizers, and the stability disappears. The observation that the stabilizers are unnecessary is generated by the very stability that the stabilizers create.
The AI economy has not dismantled its stabilizers. It has simply never built them. The technology arrived faster than the institutions could respond. The institutional gap is not a policy failure in the conventional sense — no regulator chose to leave workers unprotected. It is a speed mismatch between a technology that improves on quarterly cycles and institutions that adapt on decadal ones. But the effect is the same as deliberate dismantling: the stabilizers are absent, the system is exposed, and the correction, when it arrives, will find no institutional buffer between the trigger and the cascade.
The dam must be built. It must be built now, during the boom, when the political will to build it is lowest and the need appears least urgent. This is the Minskyan imperative: build the stabilizer when the system insists it does not need one, because the system's insistence is the surest sign that it does.
---
Hyman Minsky argued for Big Government. The phrase, in contemporary political discourse, triggers reflexes faster than it permits thought. Big Government means bloat. It means inefficiency. It means the dead hand of bureaucracy stifling the animal spirits of the market. Minsky meant none of these things. He meant something precise, technical, and derived directly from his analysis of how capitalist economies actually function rather than how textbook models say they should.
Big Government, in Minsky's framework, refers to a government with the fiscal capacity to act as an economic stabilizer — to maintain aggregate demand when private spending collapses, to employ workers when private employers shed them, to invest in infrastructure when private investment retreats. Big Government does not replace the market. It backstops it. It provides the floor beneath which the economy cannot fall, not through control of productive activity, but through the willingness to spend when no one else will. The spending is not charity. It is stabilization — the fiscal equivalent of the central bank's role as lender of last resort, operating at the level of the real economy rather than the financial system.
Minsky's argument rested on an empirical observation that the postwar American economy confirmed repeatedly: the business cycles of the postwar period were less severe than those of the pre-New Deal era precisely because the institutional structures built during and after the Great Depression — Social Security, unemployment insurance, deposit insurance, progressive taxation, countercyclical fiscal policy — provided automatic stabilizers that moderated the amplitude of each cycle. When a recession began, government spending automatically increased (through unemployment benefits, social transfers, and other programs) while government revenue automatically decreased (through reduced tax collections on reduced income), producing a countercyclical fiscal impulse that partially offset the contraction in private demand. The stabilizers did not prevent recessions. They prevented recessions from becoming depressions.
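The mechanism can be sketched in a few lines. Both rates below are illustrative assumptions, not estimates of any actual fiscal system; the point is only the sign of the impulse, which runs against the contraction automatically, with no legislative action required.

```python
# A minimal sketch of an automatic stabilizer. Both rates are illustrative
# assumptions, not estimates of any actual fiscal system.

def automatic_impulse(income_drop: float,
                      tax_rate: float = 0.25,
                      benefit_rate: float = 0.5) -> float:
    """Countercyclical impulse triggered by a fall in private income.

    Revenue falls with income (taxes are no longer collected on income that
    no longer exists) while transfers rise (benefits replace a fraction of
    what was lost). Both effects push against the contraction, and neither
    requires anyone to pass a new law.
    """
    revenue_decline = tax_rate * income_drop        # government collects less
    transfer_increase = benefit_rate * income_drop  # government pays out more
    return revenue_decline + transfer_increase

print(automatic_impulse(100.0))  # 75.0: a 100 income drop automatically
                                 # generates a 75 countercyclical impulse
```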
The AI economy needs analogous stabilizers, and the need is becoming more urgent with each quarter of the boom. Not because AI is a recession — it is, by most measures, producing genuine economic expansion — but because the expansion is accompanied by structural displacement that existing stabilizers were not designed to address, and the pace of that displacement is accelerating faster than the institutional response.
The distinction between cyclical unemployment and structural displacement is critical here, and Minsky's framework illuminates it with particular clarity. Cyclical unemployment occurs when aggregate demand falls and firms shed workers who will be rehired when demand recovers. The existing stabilizers — unemployment insurance, automatic fiscal stabilizers — are designed for this pattern. They bridge the gap between the downturn and the recovery. They assume the jobs will return.
Structural displacement occurs when the nature of work changes permanently and the displaced workers' skills are no longer marketable in the new configuration. The AI transition is producing structural displacement at a pace and scale for which no existing stabilizer is adequate. The twenty-fold productivity multiplier does not create cyclical unemployment. It restructures what work means. The skills that were valuable yesterday are not the skills that will be valuable tomorrow, and "tomorrow" is arriving at the speed of a quarterly model improvement rather than the speed of a business cycle.
Unemployment insurance replaces a fraction of lost income for a limited period. It does not retrain. It does not restructure. It does not bridge the gap between a skill set calibrated to the old paradigm and a skill set adequate to the new one. The displaced software engineer who spent fifteen years developing expertise in a framework that AI has rendered reproducible in hours does not need temporary income replacement. She needs a path from the old expertise to the new value — from execution to judgment, from implementation to direction, from the work the machine does to the work only humans can do. No existing institution provides this path at the speed and scale required.
Minsky would frame this as a failure of institutional design rather than a failure of the market. The market is doing what markets do: reallocating resources toward their most productive use, as determined by the current price signals. The price signals say that AI-augmented workers produce more per dollar than unaugmented workers. The market responds by shifting investment toward augmentation and away from the human capital that augmentation renders less scarce. This is not a market failure. It is the market functioning exactly as designed. The failure is in the institutional structures that are supposed to moderate the market's effects on the people it displaces — structures that were designed for a different pace and a different kind of disruption.
The policy responses that have emerged thus far are, in Minsky's taxonomy, supply-side interventions. The EU AI Act regulates what AI companies may build and how they must assess risk. The American executive orders establish frameworks for responsible development and deployment. These are necessary measures. They address real risks — bias, safety, transparency, accountability. But they operate on the supply of AI capability, not on the demand side — the workers, students, communities, and organizations that absorb the technology's effects.
A Minskyan approach to AI stabilization would operate on both sides. Supply-side regulation addresses the risks of the technology itself. Demand-side stabilization addresses the risks of the technology's impact on the human systems that adopt it. The two are complementary, and neither is sufficient without the other.
Demand-side stabilization for the AI economy would include, at minimum, several categories of institutional investment that are not currently being made at adequate scale.
First: educational institutions that maintain the supply of deep expertise. The pressure on educational institutions is to adapt to AI — to teach prompt engineering, to integrate AI tools into curricula, to prepare students for the AI-augmented workplace. This pressure is not wrong. But it is incomplete, and if followed without counterbalance, it produces graduates whose capabilities are coupled to the tool's availability. The educational stabilizer is not just teaching students to use AI. It is teaching the cognitive capacities that remain valuable regardless of the tool's state: judgment, questioning, cross-domain integration, the capacity to sit with uncertainty, the ability to evaluate rather than merely generate. These capacities require pedagogical investment that is not currently being made, because the market signals point toward AI integration rather than human-capacity development, and educational institutions respond to market signals.
Minsky understood that the market's signals during a boom are the worst possible guide to long-term institutional investment, because the signals are generated by the boom's own dynamics and reflect the boom's assumptions rather than the reality that will prevail after the boom. The educational institution that restructures entirely around AI-augmented workflows is making a speculative bet on the boom's continuation. The educational institution that maintains investment in the capacities that survive the correction is building a hedge position — and is punished by the market for doing so, because the market rewards the speculative position during the boom.
Second: labor protections that prevent the complete conversion of productivity gains into headcount reduction. Minsky did not oppose efficiency gains. He opposed the concentration of those gains in forms that generate systemic fragility. When every firm in an industry converts its productivity gains into headcount reduction, the aggregate effect is a contraction in employment income that reduces aggregate demand, which reduces the revenue of the firms that reduced headcount, which triggers further reductions. The individual firm's decision is rational. The aggregate effect is deflationary. This is the paradox of thrift applied to labor: each firm's savings are every other firm's lost revenue.
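A toy model makes the self-defeating arithmetic concrete. Every parameter here is an invented illustration; the instructive feature is that the number of firms cancels out of the per-firm loss, which is exactly why no individual firm can see the aggregate effect in its own books.

```python
# A toy model of the paradox of thrift applied to labor. Every parameter
# is an invented illustration.

def per_firm_effect(n_firms: int, payroll: float, cut: float,
                    wage_share_of_demand: float) -> tuple[float, float]:
    """Return (payroll saved, revenue lost) per firm when all firms cut at once.

    Each firm saves cut * payroll on its own books. But the cut wages were
    someone else's revenue: aggregate demand falls by the total payroll
    reduction times the share of demand that wages fund, and that loss is
    spread back across the same firms.
    """
    saved = cut * payroll
    demand_lost = n_firms * saved * wage_share_of_demand
    lost_per_firm = demand_lost / n_firms  # n_firms cancels: the loss never
    return saved, lost_per_firm            # appears in any one firm's books

saved, lost = per_firm_effect(n_firms=100, payroll=10.0, cut=0.5,
                              wage_share_of_demand=0.75)
print(saved, lost)  # 5.0 3.75: most of each firm's saving is clawed back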
The stabilizer is not prohibition of headcount reduction. It is incentive alignment — fiscal policies that make it more attractive for firms to redeploy productivity gains into expanded output, new products, higher wages, and workforce development than to convert them directly into margin. Tax incentives for firms that maintain or expand employment while adopting AI. Public investment in infrastructure projects that absorb displaced workers. Wage subsidies that bridge the gap between the old compensation and the new, during the period of transition. These are not radical proposals. They are the demand-side equivalents of the supply-side regulations already in development — and they are conspicuously absent from the current policy conversation.
Third: fiscal policies that distribute gains broadly enough to maintain aggregate demand and social stability. The productivity gains of AI are genuine, but their distribution is not automatic. Historical technological transitions demonstrate consistently that without institutional intervention, the gains concentrate among the owners of capital and the most-skilled workers, while the costs are borne by the displaced. The concentration is not a market failure in the narrow sense — it is the market functioning according to its price signals. But the concentration generates fragility: a consumer economy that depends on broad-based consumption cannot sustain itself when the income that drives consumption is increasingly concentrated.
Minsky's investment theory of employment — the argument that employment is determined by the investment decisions of firms and governments rather than by the supply and demand for labor in a competitive market — is directly applicable here. If firms invest in AI and disinvest in human capital, employment falls regardless of the supply of willing workers. If governments invest in the institutional infrastructure that sustains employment through the transition — retraining, public works, educational reform, social insurance — the employment effects of AI are moderated. The choice between these paths is a policy choice, not a technological inevitability. The technology does not determine the distribution of its effects. The institutions do.
What Minsky understood — and what the AI economy's policymakers have not yet absorbed — is that the stabilizers must be built during the boom, when the need appears lowest and the political will to act is weakest. Building stabilizers during a crisis is crisis management, not stabilization. The point of the stabilizer is to moderate the crisis before it arrives, to maintain the margins of safety that the boom erodes, to ensure that the correction, when it comes, encounters institutional structures designed to absorb it rather than amplify it.
The AI boom is the moment to build. Not because the boom is fragile — it may continue for years, and the gains may prove durable and broadly shared. But because the dynamics that Minsky identified — the endogenous generation of fragility through success, the progressive shift from hedge to speculative positions, the erosion of margins of safety during periods of apparent stability — are visible now, in real organizations and real markets, and the institutional structures that would moderate a correction are not in place.
The political economy of stabilizer construction is unfavorable. The beneficiaries of the boom — the AI companies, the augmented workers, the investors capturing the returns — do not perceive a need for stabilizers, because the boom validates their positions. The losers of the transition — the displaced, the restructured, the students graduating into uncertainty — lack the political power to demand stabilizers, because displacement diminishes political voice. The gap between the need for stabilizers and the political will to build them is itself a form of institutional fragility — a meta-fragility, in which the system's inability to build the stabilizers it needs is generated by the same dynamics that create the need.
Minsky spent his career pointing at this gap. The profession largely ignored him until 2008, when the absence of adequate stabilizers turned a housing correction into a global financial crisis. The AI economy has the advantage of hindsight. The question is whether it will use it.
---
Employment, in the textbook model that Hyman Minsky spent his career dismantling, is determined by the intersection of labor supply and labor demand in a competitive market. Workers offer their labor at various wage levels. Firms demand labor at various wage levels. The market clears at the equilibrium wage, and everyone who wants to work at that wage finds employment. Unemployment, in this model, is voluntary — the result of workers choosing not to accept the prevailing wage — or frictional, a temporary condition of transition between jobs. The model is elegant. It is tractable. And it describes a world that bears almost no relationship to the one in which actual human beings earn actual livings.
Minsky's investment theory of employment starts from a different premise. Employment is not determined by the supply and demand for labor. Employment is determined by the investment decisions of firms and governments. Firms hire workers when they invest — when they expand capacity, develop new products, enter new markets, build new infrastructure. Firms shed workers when they disinvest — when they contract capacity, abandon product lines, exit markets, defer maintenance. The investment decisions are driven by expectations about future profitability, which are themselves shaped by the current state of the economy, the availability of financing, the confidence of decision-makers, and the institutional environment in which the decisions are made.
This is not a semantic distinction. It is a fundamental reorientation of how one understands what determines whether people have jobs. In the textbook model, the solution to unemployment is always on the supply side: make workers more flexible, more skilled, more willing to accept lower wages. In Minsky's model, the solution is on the demand side: create the conditions under which firms and governments invest in ways that generate employment. The supply-side approach assumes that if workers are sufficiently accommodating, the market will employ them. Minsky's approach recognizes that in a capitalist economy with sophisticated financial institutions, investment decisions are driven by financial considerations that may or may not coincide with full employment — and that when they do not coincide, institutional intervention is required.
Applied to the AI economy, Minsky's investment theory produces a prediction that diverges sharply from both the optimistic and pessimistic narratives that dominate the public conversation.
The optimistic narrative says AI will create more jobs than it destroys, because every previous technological transition eventually expanded total employment. The printing press eliminated the scribe and created the publisher, the journalist, the librarian. The automobile eliminated the stable hand and created the mechanic, the truck driver, the highway engineer. AI will eliminate some categories of work and create others that do not yet exist. The transition may be uncomfortable, but the long-run trajectory is toward more employment at higher wages.
The pessimistic narrative says AI is different — that the scope and speed of displacement exceed any previous technology, that the categories of work AI can perform are broader and less predictable, that the new jobs will require capabilities that the displaced workers do not possess and cannot acquire at the speed the transition demands.
Minsky's framework suggests that both narratives miss the point, because neither addresses the mechanism through which employment is actually determined. The question is not whether AI can create new categories of work. It almost certainly can; the imagination-to-artifact ratio approaching zero implies an expansion of what can be built, which implies an expansion of what can be conceived, marketed, supported, distributed, and maintained. The question is whether the investment decisions of the firms and governments deploying AI will, in aggregate, generate the employment that absorbs the displacement.
The answer depends on what the productivity gains are used for. This is where Minsky's framework intersects most directly with the choice that Segal describes — the choice to keep the team or reduce the headcount.
If the productivity gains are converted into expanded output — more products, more ambitious projects, new markets, deeper capabilities — then the same number of workers, or more, are employed at higher levels of productivity. This is the historical pattern that the optimists cite: productivity gains channeled into expansion generate employment growth. The automobile did not reduce manufacturing employment. It increased it, because the productivity of the assembly line was channeled into producing more cars at lower prices for a broader market, and the broader market generated demand for workers in sales, service, infrastructure, and the thousand ancillary industries that the automobile ecosystem created.
If the productivity gains are converted into margin — fewer workers producing the same output, with the savings flowing to capital owners and shareholders — then employment contracts, income concentrates, aggregate demand weakens, and the conditions for further investment deteriorate. This is the pattern that the pessimists fear and that Minsky's framework identifies as the likely outcome in the absence of institutional intervention, because the market's price signals during a boom reward margin capture over output expansion.
The board conversation that Segal describes — the arithmetic that is always on the table, the quarterly pressure to convert the twenty-fold multiplier into headcount reduction — is the investment decision in microcosm. Segal chose expansion. But Segal is explicit that this choice fights against the incentive structure of the market. The market rewards the margin. The investor wants to see the efficiency. The board wants to see the ratio of revenue to headcount improve. The signal is unambiguous: employ fewer people, produce more with less, capture the gain as profit.
Minsky observed the same dynamic in every previous period of technological adoption. The individual firm's incentive is always to capture the productivity gain as margin rather than to reinvest it in expanded employment. The aggregate effect of many firms simultaneously capturing margin is a contraction in labor income that reduces the aggregate demand on which all firms depend. The individual decision is rational. The aggregate is deflationary. This is the paradox at the heart of Minsky's investment theory: the investment decisions that are rational for the individual firm are collectively destructive when made simultaneously by many firms without institutional counterbalance.
The counterbalance, in Minsky's framework, is public investment. When private firms disinvest in employment, the government invests — in infrastructure, in education, in research, in public services — to maintain the aggregate demand that private disinvestment would otherwise destroy. The public investment is not a replacement for private employment. It is a bridge — sustaining demand during the period when private firms are restructuring, maintaining the labor market's capacity to absorb workers during the period when private firms are shedding them, and creating the conditions under which private investment eventually resumes at levels that generate employment growth.
The AI economy needs this bridge. The displacement is structural, not cyclical. The workers being shed are not being temporarily laid off during a downturn with the expectation of recall during recovery. They are being permanently replaced by a technology that renders their specific skills less scarce. The bridge must carry them not back to their old positions but forward to new ones — positions that leverage the uniquely human capabilities that AI does not replicate. The bridge is retraining, but not the perfunctory retraining that existing programs provide. It is comprehensive, sustained investment in the development of judgment, integration, questioning, and the other capacities that the AI economy rewards.
The scale of the required investment is substantial, and the current trajectory is deeply inadequate. As of mid-2026, public investment in AI workforce transition programs in the United States amounts to a fraction of what the private sector spends on AI infrastructure in a single quarter. The disproportion is not a budgetary accident. It reflects the political economy that Minsky identified throughout his career: during booms, the political constituency for stabilization is weak, because the boom's beneficiaries see no need for stabilizers and the boom's victims lack the political power to demand them.
Minsky's investment theory also illuminates a subtler dynamic at work in the AI economy: the way AI investment itself shapes the labor market it is disrupting. When Amazon, Alphabet, Meta, and Microsoft spend $300 billion collectively on AI infrastructure in a single year, that spending generates employment — in construction, in hardware manufacturing, in data center operations, in the support industries that serve the infrastructure buildout. This employment partially offsets the displacement that AI generates in other sectors. The infrastructure boom creates jobs while the productivity boom destroys them.
But the infrastructure employment is itself speculative. It depends on continued investment at the current rate. If AI investment plateaus — if the market recalibrates expectations, if the returns disappoint, if the circular investing dynamic that analysts have identified breaks down — the infrastructure employment evaporates alongside the displacement it was partially offsetting. The system is doubly exposed: the displacement continues (because the AI capability remains even if investment slows), while the offsetting employment disappears (because the employment depended on the investment, not the capability).
This is the fragility that Minsky's investment theory makes visible. Employment in the AI economy is increasingly a function of AI investment, and AI investment is increasingly a function of expectations about AI's future value, and those expectations are increasingly a function of the boom itself. The loop is self-reinforcing during the expansion and self-destroying during the contraction. The employment that the boom generates is the same employment that the correction will destroy, and the displacement that the boom produces will persist through the correction because the technology that causes it does not disappear when the investment slows.
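The loop's symmetry can be illustrated with a deliberately crude simulation, offered as a sketch of the shape of the dynamic rather than a forecast. Expectations here simply extrapolate the previous period's trend, amplified by an assumed feedback coefficient; a single negative shock runs the same amplifier in reverse.

```python
# A deliberately crude simulation of the investment-expectations loop.
# The seed values and feedback coefficient are arbitrary assumptions.

def simulate(shock_at: int, steps: int = 12,
             feedback: float = 0.5) -> list[float]:
    """Each period's investment extrapolates the previous trend, amplified
    by the feedback coefficient; a one-time negative shock at `shock_at`
    flips expectations, and the same amplifier runs in reverse."""
    investment = [100.0, 105.0]  # two seed periods of modest growth
    for t in range(2, steps):
        trend = investment[-1] - investment[-2]
        if t == shock_at:
            trend = -abs(trend)  # expectations break once
        investment.append(max(0.0, investment[-1] + (1 + feedback) * trend))
    return investment

print([round(x) for x in simulate(shock_at=7)])
# [100, 105, 112, 124, 141, 166, 204, 147, 62, 0, 0, 0]
# The ascent and the collapse are produced by the same mechanism.
```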
The investment theory of employment in the AI age reduces to a single proposition: the technology does not determine employment outcomes. Investment decisions determine employment outcomes. And investment decisions are shaped by the institutional environment — the incentives, the regulations, the fiscal policies, the cultural norms — in which they are made. An institutional environment that rewards margin capture over output expansion will produce a contraction in employment. An institutional environment that rewards expansion, redeployment, and human-capital development will produce an expansion. The technology is the same in both scenarios. The outcomes are determined by the dams.
Minsky never used the word "dam." But he built his entire career around the argument that the structures a society places around its economic forces determine whether those forces generate broadly shared prosperity or concentrated wealth and periodic crisis. The AI economy's forces are immense. The structures around them are nascent. The investment decisions that will determine whether AI generates employment growth or employment contraction are being made now — in boardrooms, in government offices, in educational institutions — and they are being made, overwhelmingly, without the Minskyan insight that the boom's signals are the worst possible guide to long-term institutional investment.
The boom says invest in AI. Minsky says invest in people.
The boom says capture the margin. Minsky says expand the output.
The boom says the stabilizers are unnecessary. Minsky says that is precisely when they are most needed.
Every speculative bubble in recorded history has been sustained by a single belief that the participants could not examine because they were inside it: the belief that the current trajectory is permanent.
The Dutch merchants who bid tulip bulb futures to ten times the annual income of a skilled craftsman in 1637 were not stupid. They were operating inside a market that had risen continuously for years, in which every participant's experience confirmed that tulip prices went up. The experience was real. The prices had gone up. The merchants were extrapolating from the data available to them, and the data said: up. The fact that "up" was being produced by the same collective bidding behavior that the data was being used to justify — that the trajectory was endogenous to the belief in the trajectory — was invisible from inside the market, because from inside the market, the prices were simply a fact about the world rather than a product of the collective behavior of the people observing them.
Hyman Minsky formalized this observation across his entire body of work, but the specific mechanism he identified is worth restating with precision, because it applies to the AI economy with a directness that makes the application almost uncomfortable.
During a boom, every signal in the environment confirms the boom's continuation. Asset prices rise, which validates the investment decisions that produced the price appreciation, which encourages further investment, which produces further appreciation. Revenue grows, which validates the business models built on the assumption of growth, which encourages further expansion of those models, which produces further growth. Productivity improves, which validates the organizational restructuring built on the assumption of improvement, which encourages further restructuring, which produces further productivity gains. The loop is closed. The evidence that the boom will continue is generated by the boom itself. It is not independent. It is not exogenous. It is the system testifying to its own health.
Minsky's insight is that this self-referential evidence is the most dangerous kind. Not because it is false — it is, during the boom, perfectly accurate. Prices are rising. Revenue is growing. Productivity is improving. The data is correct. The danger is that the data describes the present while being used to predict the future, and the present is the product of conditions that the data cannot evaluate — conditions like the distribution of hedge, speculative, and Ponzi positions across the system, the erosion of margins of safety, the elimination of institutional diversity, the accumulation of untested dependencies. These conditions are invisible to the data because the data is generated by normal operations, and normal operations do not test for fragility. Fragility is revealed only by disruption, and during the boom, disruption has not occurred.
The AI economy's fishbowl — to use the metaphor from *The Orange Pill* — is the assumption that AI capability will continue to improve at its current rate, indefinitely, without the plateau that has characterized every previous technology in human history.
The assumption is not unreasonable. The rate of improvement has been extraordinary. Each quarter has brought measurable advances in capability, reliability, and scope. The models that seemed remarkable in 2023 are merely adequate in 2026. The cost of inference has fallen precipitously. The range of tasks that AI performs competently has expanded with each model generation. The trajectory, plotted on a chart, looks exponential.
But every technology that eventually followed an S-curve also looked exponential during its exponential phase. That is what "exponential phase" means. The railroad expansion of the 1840s looked exponential — until the panic of 1847. The electricity buildout of the 1920s looked exponential — until the crash of 1929. The dot-com revenue growth of the late 1990s looked exponential — until it was revealed that the revenue was, in significant part, other dot-com companies buying services from each other. The exponential phase is the phase during which the assumption of permanence appears most justified, because the data is the most compelling, because the data is generated by the phase itself.
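The indistinguishability is mathematical, not rhetorical. Early in a logistic (S-shaped) trajectory, the curve is numerically almost identical to a pure exponential; the divergence appears only near the inflection point, which is visible only in hindsight. A short sketch, with arbitrary parameter values:

```python
# A minimal sketch of why an S-curve's early phase is indistinguishable
# from an exponential. Parameter values are arbitrary illustrations.
import math

K, r, t0 = 1000.0, 0.5, 20.0  # assumed ceiling, growth rate, inflection time

def logistic(t: float) -> float:
    return K / (1.0 + math.exp(-r * (t - t0)))

def exponential(t: float) -> float:
    # The logistic's own early-time limit: far below t0, the curves match.
    return K * math.exp(r * (t - t0))

for t in (0, 5, 10, 15, 20, 25):
    print(f"t={t:>2}  logistic={logistic(t):9.2f}  "
          f"exponential={exponential(t):9.2f}")
# Up to t=10 the two agree to within about one percent. At the inflection
# point (t=20) the exponential reads exactly double the logistic, and it
# diverges without bound thereafter; the inflection is visible only after
# it has happened.
```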
The specific danger for the AI economy is not that AI will fail. The technology is genuine, the productivity gains are measurable, and the long-term impact is likely to be as transformative as its most enthusiastic advocates claim. The danger is that the organizational, financial, and human positions being built on the assumption of continued exponential improvement are calibrated to a rate of progress that may not be sustained.
An MIT Media Lab report published in August 2025 found that despite thirty to forty billion dollars in enterprise AI investment, ninety-five percent of organizations were reporting zero return. Zero. Not negative returns. Not modest returns. Zero measurable impact on workplace productivity. Yet executive projections at those same organizations anticipated significant productivity improvements in coming years. The gap between current experience and future expectation is sustained entirely by the assumption of continued improvement — by the belief that the investment will eventually pay off because the technology will eventually mature to the point where the returns materialize. This is, in Minsky's precise terminology, a speculative position: the current cash flows do not cover the obligations, but the actor expects future conditions to improve sufficiently to justify the commitment.
If the improvement materializes, the position is vindicated. If it does not — if the returns plateau at a level below what the investment requires — the positions are revealed as speculative or Ponzi, depending on how far the actual returns fall below the committed expenditure.
The fishbowl is the inability to examine this assumption from outside it. Inside the AI economy, the assumption of continued improvement is not treated as an assumption. It is treated as a fact — a feature of the landscape as reliable as gravity. Business plans are built on it. Career decisions are calibrated to it. Educational programs are restructured around it. Organizational designs are optimized for it. The assumption has become the water the fish breathe, and any challenge to it is treated not as a reasonable question but as a failure of imagination — a Luddite refusal to see what is obvious.
Minsky would recognize this dynamic. He saw it in every boom he studied. The assumption of permanent improvement is not a lie. It is something more dangerous: a truth that is true now and that is being projected forward as though the conditions that make it true now will persist indefinitely. The conditions are endogenous to the boom. The boom generates the investment that generates the improvement that generates the confidence that generates the investment. When any element of the loop weakens — when investment slows, when improvement decelerates, when confidence falters — the entire loop can reverse with the same self-reinforcing speed, but in the opposite direction. Reduced investment produces reduced improvement produces reduced confidence produces reduced investment.
Carlota Perez, the Venezuelan-British economist whose work on technological revolutions and financial capital extends Minsky's framework directly into the technology sector, has documented this pattern across every major technological transition since the industrial revolution. The installation period — the phase of rapid adoption, speculative investment, and rising asset prices — is always followed by a correction that Perez calls the "turning point." The turning point is not the end of the technology. The technology survives and eventually delivers on most of its promise during the subsequent "deployment period." But the turning point destroys the financial positions built on installation-period assumptions. The investors who assumed the exponential phase was permanent lose their capital. The organizations that restructured around installation-period conditions must restructure again for deployment-period realities.
The deployment period is often more productive than the installation period, because the speculative froth has been cleared and the technology is applied to genuine problems rather than to the self-referential expansion that characterizes installation-period investment. But the transition between the two — the turning point — is violent, and its violence is proportional to the distance between the positions built during the installation period and the reality of the deployment period.
The AI economy is in its installation period. The investment is flowing. The capability is improving. The organizational restructuring is accelerating. The positions are stretching from hedge toward speculative. The assumption of permanent exponential improvement is embedded in every plan, every projection, every career decision. And the data confirms the assumption — because the data is generated by the installation period, and the installation period is the phase during which the assumption is true.
The fishbowl contains one more hazard that Minsky's framework identifies with particular clarity: the difficulty of maintaining hedge positions when the environment rewards speculative ones. The organization that chooses caution during a boom — that retains more staff than the productivity multiplier requires, that maintains deep expertise alongside AI augmentation, that builds margins of safety into its planning — is penalized by the market. Its competitors, who have converted the multiplier into margin, report better quarterly numbers. Its stock underperforms. Its investors demand to know why the efficiency gains are not flowing to the bottom line. The market does not reward resilience during a boom, because resilience is only valuable during the correction, and the boom is the period when no one believes the correction will come.
Minsky documented this dynamic in the financial system repeatedly. Banks that maintained conservative lending standards during the housing boom of the 2000s lost market share to banks that lowered their standards. The conservative banks were rational, in the sense that they were protecting against a risk that eventually materialized. But they were punished for their rationality during the years when the risk had not yet materialized — and the punishment was real: lost revenue, lost market share, executive turnover as boards replaced cautious leadership with leadership willing to participate in the boom.
The AI economy's hedge organizations face the same pressure. The choice to maintain deep expertise, to retain the full team, to invest in human capability alongside AI capability — this choice is, in Minsky's terms, the right position. But the market rewards the wrong one. And the longer the boom continues, the harder the right position becomes to maintain, because the evidence that the wrong position is working mounts with each quarter of speculative success.
This is the cruelest feature of the fishbowl. It is not that the inhabitants cannot see outside it. Some can. Some do. Some, like the organization that chooses the hedge position despite the market's pressure, act on what they see. But the fishbowl punishes the seeing. It rewards the blindness. And the longer the blindness is rewarded, the fewer organizations maintain the sight — until, when the correction arrives, almost no one is positioned to survive it, because almost everyone optimized for conditions that the correction has just destroyed.
The question, always, is the same: are the dams built? Has the system maintained enough margin, enough diversity, enough institutional capacity to absorb the correction when the S-curve inflects? The answer, in mid-2026, is that the dams are being eroded by the very success they are meant to protect against. The boom is consuming the margins. The fishbowl is rewarding the consumption. And the correction, whenever it arrives, will find a system calibrated to a trajectory that has changed.
---
The Minsky moment cannot be prevented.
This is the starting point, and it is nonnegotiable within the framework that Hyman Minsky spent his career constructing. The endogenous generation of fragility is a structural feature of capitalist economies, not a correctable deficiency. Complex adaptive systems in which rational agents pursue profit under conditions of fundamental uncertainty will periodically generate the conditions for their own disruption. The dynamics that produce the boom — innovation, optimism, investment expansion, the progressive reduction of margins of safety — are not aberrations. They are capitalism functioning as designed. The bust is not a system failure. It is a system feature. Minsky's life's work was not a program for eliminating cycles. It was a program for surviving them.
The distinction matters. Eliminating cycles is utopian. Surviving cycles is engineering. And engineering, in Minsky's framework, means building institutional structures — before the crisis, during the boom, when the need appears lowest and the political will to build is weakest — that moderate the amplitude of the cycle and ensure that the inevitable correction produces reorganization rather than collapse.
Minsky identified two institutional actors whose combined capacity determines whether a correction is absorbed or amplified: the Big Bank (the central bank acting as lender of last resort) and Big Government (the fiscal authority acting as employer of last resort and sustainer of aggregate demand). The Big Bank provides liquidity when the financial system seizes. Big Government provides spending when the private economy contracts. Together, they form a floor beneath which the economy cannot fall. The floor does not prevent the correction. It prevents the correction from becoming a depression.
The AI economy needs both. And it has, at present, neither in adequate form.
The financial dimension of an AI correction — the repricing of AI-related equities, the writedown of infrastructure investments that fail to generate returns, the cascade through the venture capital ecosystem that has concentrated extraordinary capital in AI-related bets — is within the existing competence of financial authorities. The Federal Reserve and its global counterparts have the tools to provide liquidity during a financial correction. They demonstrated those tools in 2008, in 2020, and in every intervening disruption. The financial Minsky moment of AI, should it arrive, would be managed — imperfectly, painfully, but managed — by institutions that have practice with the mechanism.
But the structural dimension — the displacement of workers, the obsolescence of skills, the organizational fragility produced by two years of untested restructuring, the human cost of a system that optimized for a trajectory that has changed — falls outside the competence of financial authorities. The Federal Reserve cannot retrain a displaced software engineer. The European Central Bank cannot rebuild the deep expertise that was delegated to machines during the boom and is no longer available when the machines disappoint. The Bank of Japan cannot moderate the pace of organizational restructuring at firms that discovered, during the correction, that their "vector pods" cannot function without the AI tools that the correction has made less reliable.
These structural stabilizers require institutional investment of a different kind, and the investment must be made during the boom — when the budget is available, when the economy is strong enough to absorb the cost, and when the correction has not yet created the desperate conditions that make thoughtful institutional design impossible.
What would these stabilizers look like? Minsky's framework, applied to the AI economy, suggests five categories of institutional investment, each addressing a specific dimension of the fragility that the boom is generating.
The first category is expertise reserves. The erosion of deep expertise — the margin of safety that the boom is consuming most rapidly — can be moderated by institutional investment in maintaining that expertise even as AI makes it less immediately necessary. This means educational programs that teach deep technical skills alongside AI augmentation, not instead of it. It means organizational practices that require periodic manual work — not as Luddite resistance, but as the cognitive equivalent of a fire drill, maintaining the skills that are needed only during emergencies precisely because emergencies are unpredictable. It means professional certification standards that require demonstrated competence without AI assistance alongside demonstrated competence with it. These are not popular measures. They slow the boom. They impose costs that the boom's logic says are unnecessary. They are, in Minsky's terms, the margins of safety that the boom wants to eliminate and that the correction will desperately need.
The second category is employment bridges. The structural displacement that AI generates — not cyclical unemployment but permanent transformation of what work means — requires institutional mechanisms that bridge workers from the old paradigm to the new. Minsky's own proposal, developed across several decades, was an employer-of-last-resort program in which the government provides employment to anyone willing to work at a base wage, performing socially useful tasks that the private market does not adequately supply. Applied to the AI transition, this mechanism would absorb displaced workers into public infrastructure, education, caregiving, and environmental projects while providing the time and resources for retraining. The program would not compete with private employment — it would provide a floor below which displacement cannot push. It would maintain aggregate demand during the transition. And it would ensure that the human cost of the correction is borne by the institution with the greatest capacity to absorb it — the government — rather than by the individuals with the least.
The third category is institutional diversity preservation. The convergence of organizations onto similar AI tools, similar restructuring patterns, and similar dependency profiles generates the monoculture fragility that makes systemic failure possible. Regulatory and incentive structures that encourage organizational diversity — different approaches to AI integration, different retention of deep expertise, different structures for human-AI collaboration — would function as the ecological equivalent of biodiversity: ensuring that when one approach fails, alternatives exist. This is the hardest category to implement, because the market's natural tendency is toward convergence on the most efficient approach, and diversity looks like inefficiency during the boom. But diversity is the system's insurance policy against the failure of the dominant approach, and insurance is always an expense that looks unnecessary until the event it insures against occurs.
The fourth category is educational adaptation at a pace that matches the technology. The mismatch between the speed of AI capability improvement and the speed of educational institutional response is, as previously established, itself a form of structural fragility. The stabilizer is not faster education — compressing four-year degrees into six-month certificates trades depth for speed and generates its own fragilities. The stabilizer is modular education: institutional structures that allow workers to acquire specific new capabilities incrementally, without abandoning the deep expertise they already possess, at a pace that tracks the technology's evolution rather than lagging it by years. This requires fundamentally different institutional design from the current model, and the design must be developed during the boom, when educational institutions have the resources and the enrollment to experiment, rather than during the correction, when resources are constrained and enrollment is driven by desperation rather than development.
The fifth category, and the most difficult to implement, is cultural stabilization — the development of norms, practices, and shared understandings that moderate the individual progression from hedge to speculative to Ponzi engagement. The productive addiction that operates as a Ponzi scheme of the self is sustained by a professional culture that celebrates intensity without distinguishing between flow and compulsion. The stabilizer is a cultural shift — not toward less work, but toward more deliberate work. The "AI Practice" frameworks that the Berkeley researchers proposed, the structured pauses, the protected reflection time, the organizational norms that distinguish between productive intensity and self-destructive compulsion — these are cultural stabilizers that moderate the individual cycle in the same way that institutional stabilizers moderate the economic one.
The political economy of building these stabilizers is, as Minsky documented throughout his career, profoundly unfavorable during the boom. The beneficiaries of the boom see no need for stabilizers. The institutional resources that would fund stabilizers are being channeled toward the boom. The political constituency for stabilization is weak, because the people who would most benefit — the displaced, the exhausted, the structurally vulnerable — are either not yet displaced (and therefore not yet aware of their vulnerability) or already displaced (and therefore lacking the political voice to demand protection).
Minsky faced this dynamic throughout his career. He argued for stabilizers during the 1960s and 1970s, when the postwar boom made them seem unnecessary. He argued for them during the 1980s and 1990s, when deregulation was dismantling the stabilizers that had been built after the Great Depression. He was largely ignored. The 2008 financial crisis vindicated his framework, but the vindication came at a cost of trillions of dollars and millions of jobs — a cost that the stabilizers he had advocated, if built, would have moderated.
The AI economy has the advantage of this history. It has the framework. It has the diagnostic tools. It has the specific, documented precedents of what happens when stabilizers are not built and what happens when they are. The question is whether the advantage will be used.
The economist Bill Janeway, extending Minsky's framework to technology cycles, has argued for the concept of the "productive bubble" — the observation that speculative investment in new technologies, even when it produces financial losses, can leave behind infrastructure of lasting value. The railroad bubble of the 1840s destroyed investors but laid tracks. The dot-com bubble destroyed portfolios but built the fiber-optic networks that eventually powered the mobile internet. The AI bubble, if it is a bubble, may destroy financial positions while building the computational infrastructure and the institutional knowledge that powers the next generation of productive AI deployment.
This is a real and important counterargument to pure Minskyan pessimism. But Janeway's productive bubble thesis does not negate the need for stabilizers — it reinforces it. The tracks laid during the railroad bubble were valuable. But the workers displaced during the railroad bust suffered regardless of the tracks' value. The fiber-optic cables built during the dot-com bubble were valuable. But the engineers laid off during the dot-com bust were not compensated by the cables' eventual utility. The infrastructure survives. The people bear the cost. The question is whether institutional structures exist to moderate the human cost during the period between the bubble's burst and the infrastructure's productive deployment.
Minsky's answer, consistent across four decades of writing, is that the stabilizers must be built before they are needed, that they must be maintained during the periods when they appear unnecessary, and that their absence is the single most reliable predictor of whether a correction produces reorganization or catastrophe. The AI economy's stabilizers are nascent at best and absent at worst. The boom continues. The positions are stretching. The margins are eroding. The fishbowl rewards the erosion.
The moment approaches. Whether it arrives in six months or six years, the dynamics that generate it are already at work. The question has never been whether the moment will come. The question is whether, when it comes, the dams are in place.
Build them now. The calm river is the liar that Minsky spent his life trying to expose.
---
The balance sheet that haunted me was not a company's. It was my own.
Somewhere in the middle of writing *The Orange Pill*, during a week when I had not slept properly and had not exercised and had barely spoken to my family except to report, with manic enthusiasm, what Claude and I had built that day, I had a realization that I could not shake: I was leveraged. Not financially. Cognitively. Emotionally. I had taken on obligations — to the book, to the product, to the team, to the vision — that my current reserves could not cover. And I was sustaining the position the only way a leveraged person can: by producing more. More chapters. More features. More emails at two in the morning. More output to justify the commitment that the output was supposed to be paying off.
Minsky's taxonomy, when I finally understood it, gave me a name for what I was doing. I was speculative. My satisfaction covered the interest — the daily engagement was still generative, still exciting, still worth doing. But the principal — the accumulated debt of sleep, of presence with my family, of the cognitive rest that sustains long-term creative capacity — was not being repaid. I was rolling it over. Every day, I was refinancing, telling myself that the next draft, the next feature, the next breakthrough would pay down the principal. It never did. Because the next draft generated the next commitment, and the commitment generated the next draft, and the loop sustained itself the way all speculative loops sustain themselves: by producing the evidence of its own viability.
I did not crash. I caught myself. But the margin between catching myself and not catching myself was thinner than I like to admit.
What Minsky's framework gave me was not pessimism. It was the opposite. It gave me the ability to see the structure of my own behavior clearly enough to change it. The three-position taxonomy — hedge, speculative, Ponzi — is not a prediction of doom. It is a diagnostic tool. It lets you ask, of any commitment: What sustains this position? What would happen if conditions changed? Where is the margin of safety? And if the answer is that the position depends on conditions continuing exactly as they are, indefinitely, without interruption — then you know the position is fragile, and you can act before the fragility is revealed by a correction you did not choose.
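Minsky's test is mechanical enough to write down. What follows is a minimal sketch in Python, purely illustrative (the names and numbers are mine, not the book's), of the three-position taxonomy as a cash-flow diagnostic:

```python
# A minimal, illustrative sketch of Minsky's three-position taxonomy
# as a cash-flow diagnostic. Names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Position:
    income: float     # cash flow the position generates per period
    interest: float   # cost of carrying the commitment per period
    principal: float  # scheduled repayment of the underlying debt per period

def classify(p: Position) -> str:
    """Label a leveraged position using Minsky's definitions.

    hedge:       income covers interest and principal; the position
                 survives on its own cash flows.
    speculative: income covers interest only; the principal must be
                 rolled over, so survival depends on refinancing.
    ponzi:       income does not cover even the interest; the debt
                 grows, and survival depends on conditions improving
                 forever.
    """
    if p.income >= p.interest + p.principal:
        return "hedge"
    if p.income >= p.interest:
        return "speculative"
    return "ponzi"

# The diagnostic questions from the text, in cash-flow form. A career
# whose daily satisfaction covers the interest but whose accumulated
# debts are rolled over indefinitely classifies the same way a
# leveraged balance sheet does.
career = Position(income=1.0, interest=0.8, principal=0.5)
print(classify(career))  # "speculative": the principal is being rolled over
```

The order of the tests is the point: a position earns the hedge label only if it survives with no refinancing at all, and everything below that line depends on conditions continuing exactly as they are.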
The AI boom is real. I have staked my career on it. The productivity gains I described in *The Orange Pill* are genuine. The expansion of who gets to build is morally significant. The collapse of the imagination-to-artifact ratio is the most important shift in the relationship between human beings and their tools since the invention of writing. I believe all of this. I believe it more after reading Minsky, not less.
But I also believe, now, that the success itself is generating risks that the success makes almost impossible to see. That the organizations restructuring around the multiplier are building positions they have not stress-tested. That the careers being constructed on the assumption of permanent exponential improvement are more exposed than their builders realize. That the professional culture celebrating relentless intensity is sustaining patterns that will produce wreckage. That the institutions that should be moderating the boom's excesses are not being built at the speed the moment demands.
These are uncomfortable beliefs to hold alongside the exhilaration. But Minsky's lesson is that the exhilaration and the risk are not separate phenomena. They are the same phenomenon, experienced from two temporal perspectives. The exhilaration is the boom. The risk is what the boom is building. The task is not to choose between them but to hold both — to build during the boom while building the structures that will catch you when the boom's internal dynamics produce their inevitable correction.
When my son asked me at dinner whether AI was going to take everyone's jobs, I told him the technology does not determine the outcome. The institutions determine the outcome. I believe that more now than when I said it.
Build the dams. Not someday. Now. During the calm. While the river is generous and the sun is warm and every signal says the water will never rise.
That is when the dams matter most.
---
The AI productivity revolution is real. The gains are measurable. The organizations restructuring around twenty-fold multipliers are acting rationally. And that rationality is precisely the problem. Hyman Minsky spent four decades proving that financial crises are not caused by external shocks; they are generated internally, by success itself, as rational actors eliminate the margins of safety that calm conditions make appear unnecessary. This book applies Minsky's Financial Instability Hypothesis to the AI economy and finds the pattern repeating with uncomfortable precision: careers leveraged against permanent exponential improvement, organizations stripped of the deep expertise they'll need when the tools disappoint, a trillion-dollar infrastructure buildout sustained by circular investment. The displacement is real. The dams are not built. The question is not whether the correction comes; it is whether the institutions are in place to catch what falls.

A reading-companion catalog of the 17 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Hyman Minsky — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →