John Maynard Keynes — On AI
Contents
Cover
Foreword
About
Chapter 1: Economic Possibilities for Our Grandchildren's Machines
Chapter 2: Animal Spirits in the Age of the Prompt
Chapter 3: The General Theory of AI Employment
Chapter 4: The Paradox of Thrift Applied to Attention
Chapter 5: The Liquidity Trap of Capability
Chapter 6: Sticky Wages, Sticky Identities, and the Friction of Transition
Chapter 7: Effective Demand in a World of Infinite Supply
Chapter 8: Uncertainty, Probability, and the Machine That Calculates Both
Chapter 9: In the Long Run We Are All Augmented
Chapter 10: The Social Philosophy Toward Which This Tends
Epilogue
Back Cover
Cover

John Maynard Keynes

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by John Maynard Keynes. It is an attempt by Opus 4.6 to simulate John Maynard Keynes's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The question that kept nagging me was not about what we could build. It was about what happens after we build it.

I had spent months inside the exhilaration. The twenty-fold productivity multiplier in Trivandur. The thirty-day sprint to CES. The feeling of the imagination-to-artifact ratio collapsing to nearly nothing. I was deep in the river, building dams, watching my team expand into capabilities none of us had possessed six months earlier. The output was extraordinary. The tools were extraordinary. Everything pointed upward.

And yet something in the arithmetic did not resolve.

If each of my engineers could now do what twenty used to do together, the math presented a choice that no amount of enthusiasm could obscure. Keep the team and invest in what they become — or cut nineteen out of twenty and pocket the margin. I knew what I believed. I chose to keep and grow the team. But I could not fully articulate *why* the obvious efficiency play was wrong. Not just ethically wrong. Structurally wrong. Wrong in a way that would come back to destroy the very gains it captured.

Then I encountered Keynes. Not the caricature — the government-spending guy, the deficit guy, the one reduced to a bumper sticker in every policy debate. The actual thinker. The one who demonstrated, with devastating precision, that what is rational for each individual actor can be catastrophic when every actor does it simultaneously. The one who proved that markets do not automatically convert productivity into prosperity. The one who predicted, in 1930, that his grandchildren would live in material abundance — and that abundance, unmanaged by wise institutions, would produce not freedom but a new kind of suffering.

He was right about the abundance. He was right about the suffering. He was wrong about the leisure. And the reasons he was wrong illuminate this AI moment more sharply than any technology forecast I have read.

Keynes gave me structural language for things I had been feeling in my gut. Why the board conversation about headcount keeps returning and why following its logic would be a disaster. Why the capability flood does not automatically translate into value. Why the speed of displacement matters as much as the direction of the long-run trend. Why the people living through this transition cannot eat the promise that their grandchildren will benefit.

This book is not economics in the academic sense. It is a builder's encounter with a thinker who understood that tools do not determine outcomes — institutions do. And that the distance between extraordinary capability and genuine human flourishing is bridged not by markets but by the structures we choose to build around them.

The machines are generous. The question is whether our institutions are wise enough to direct that generosity toward life.

Edo Segal · Opus 4.6

About John Maynard Keynes

1883–1946

John Maynard Keynes (1883–1946) was a British economist, philosopher, and public intellectual whose work fundamentally reshaped modern economic thought and government policy. Born in Cambridge, England, and educated at Eton and King's College, Cambridge, he first gained international prominence with *The Economic Consequences of the Peace* (1919), a prescient critique of the Treaty of Versailles. His masterwork, *The General Theory of Employment, Interest and Money* (1936), demolished the classical assumption that markets naturally tend toward full employment, introducing concepts — aggregate demand, the multiplier effect, the liquidity trap, animal spirits, and the paradox of thrift — that became the foundation of macroeconomics as a discipline. His earlier *A Treatise on Probability* (1921) advanced a theory of rational belief under uncertainty that anticipated modern debates about the limits of statistical prediction. Keynes played a central role in designing the Bretton Woods international monetary system and in founding the International Monetary Fund and the World Bank. A member of the Bloomsbury Group and a patron of the arts who helped establish the Arts Council of Great Britain, he insisted throughout his career that economics was not an end in itself but an instrument for enabling human beings to live, in his words, "wisely and agreeably and well." His influence on fiscal policy, institutional design, and the relationship between government and markets remains pervasive nearly eight decades after his death.

Chapter 1: Economic Possibilities for Our Grandchildren's Machines

In 1930, while the world economy was collapsing, a Cambridge economist sat down and wrote an essay about paradise.

The essay was called "Economic Possibilities for Our Grandchildren," and its author, John Maynard Keynes, was not being whimsical. He was making a prediction grounded in compound interest, technological trajectory, and a reading of economic history that stretched back to the discovery of coal. The prediction was this: within one hundred years, the standard of life in progressive countries would be between four and eight times its 1930 level. The economic problem — the struggle for subsistence that had consumed humanity since the caves — would be, for all practical purposes, solved. The grandchildren of his readers would need to work perhaps fifteen hours a week to maintain a standard of living that their grandparents could not have imagined.

Keynes was writing at the bottom of the Great Depression. Factories were shuttered. Breadlines stretched around city blocks. The entire institutional architecture of Western capitalism appeared to be failing simultaneously. And into this despair, he injected a vision of such sweeping optimism that it reads, nearly a century later, as either prophecy or delusion.

It was prophecy. At least half of it.

Global GDP per capita, adjusted for inflation, has increased roughly sixfold since 1930. In the advanced economies Keynes was addressing, the multiplication is closer to eightfold. The material conditions of life — food, shelter, medicine, transport, communication — have been transformed beyond anything Keynes's contemporaries could have conceived, and the transformation has followed almost exactly the compound-growth trajectory he described. The economic problem, in the narrow sense Keynes intended, has been substantially solved for hundreds of millions of people. The species produces enough to feed, house, and clothe every member. That it fails to distribute this abundance equitably is a political failure, not a productive one.
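The compound arithmetic behind both the prediction and the outcome is simple enough to check directly. A minimal sketch in Python (the function name is ours, for illustration): the annual growth rates implied by a fourfold, sixfold, or eightfold increase over a century all sit between roughly 1.4 and 2.1 percent.

```python
def implied_annual_growth(multiple: float, years: int = 100) -> float:
    """Constant annual growth rate that compounds to `multiple` over `years`."""
    return multiple ** (1 / years) - 1

# Keynes's 1930 range (4x to 8x), plus the roughly sixfold realized outcome
for m in (4, 6, 8):
    print(f"{m}x over 100 years implies {implied_annual_growth(m):.2%} per year")
```

A steady 1.4 percent a year compounds to fourfold over a century; 2.1 percent compounds to eightfold; the realized sixfold corresponds to roughly 1.8 percent. Keynes's prophecy required nothing exotic, only the patience to let modest growth compound.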

But Keynes also predicted that this abundance would produce leisure. Not mere idleness — Keynes was not naive about the psychology of purpose — but a fundamental reorientation of human activity away from production and toward what he called "the arts of life": contemplation, beauty, friendship, the cultivation of the good. He predicted that his grandchildren, freed from the pressing economic cares that had dominated all prior human existence, would face a new and unprecedented challenge: learning how to live well.

"For the first time since his creation," Keynes wrote, "man will be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well."

The leisure never arrived. The abundance arrived on schedule, and human beings responded not by working less but by working more — on different things, at a higher pitch, with an anxiety that Keynes's essay had not anticipated. The fifteen-hour work week remains, ninety-six years after the prediction, an absurdity. Americans work more hours per year now than they did in 1970, despite productivity gains that would have astonished any economist of that era. The species that solved the production problem did not convert the solution into freedom. It converted the solution into more production.

This is the error that illuminates the AI transition.

Keynes's mistake was not about technology. Technology delivered exactly what he predicted. His mistake was about psychology — about what human beings actually do when the constraint of necessity loosens. He assumed that once the economic problem was solved, people would naturally gravitate toward the higher pleasures: art, philosophy, the enjoyment of existence for its own sake. He treated work as a cost, as something endured for the sake of the income it produced, and assumed that rational agents, given the option of the same income for less work, would choose less work.

But work is not merely a cost. It is, for the vast majority of people in industrial and post-industrial economies, the primary source of identity, status, social connection, and psychological structure. To work is to be someone. To stop working is to face the question that Keynes identified as "the permanent problem" — how to live — and most people, confronted with that question, discover that they would rather not face it. They would rather produce. The production may be unnecessary. It may be actively harmful. But it fills the hours and answers the question "What am I for?" with a response that the culture recognizes and rewards.

The economist Alex Tabarrok made the arithmetic vivid: "Imagine I told you that AI was going to create a 40% unemployment rate. Sounds bad, right? Catastrophic even. Now imagine I told you that AI was going to create a 3-day working week. Sounds great, right? Wonderful even." His observation: those two scenarios are, to a first approximation, identical. The difference is entirely in framing — which is to say, entirely in psychology.
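The equivalence can be made explicit with a back-of-the-envelope calculation, assuming a five-day baseline week (an illustrative simplification, not part of Tabarrok's own framing):

```python
# Aggregate labor under Tabarrok's two framings of the same economy.
labor_force = 100  # workers

# Framing A: 40% unemployment, the employed work a full five-day week
scenario_a = 0.60 * labor_force * 5

# Framing B: full employment, everyone works a three-day week
scenario_b = 1.00 * labor_force * 3

# Identical person-days of work per week in both cases
print(scenario_a, scenario_b)  # 300.0 300.0
```

The same total of labor performed, the same total of labor forgone; the catastrophe and the utopia differ only in how the forgone labor is distributed and named.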

Keynes himself, in a moment of prescience that his optimism usually obscured, anticipated this difficulty. He warned that the transition to leisure would be painful: "If the economic problem is solved, mankind will be deprived of its traditional purpose... I think with dread of the readjustment of the habits and instincts of the ordinary man, bred into him for countless generations, which he may be asked to discard within a few decades." The instincts bred for scarcity do not dissolve when scarcity ends. They persist. They find new objects. They generate the restless, purposeless intensity that The Orange Pill documents in its account of builders who cannot stop building — not because the building serves any purpose they can articulate, but because stopping would require them to confront the permanent problem that Keynes identified and that the culture has spent a century avoiding.

The AI moment is the terminal expression of this pattern. Artificial intelligence, as described in The Orange Pill's account of the winter of 2025, represents a productivity multiplication of a kind Keynes could not have imagined — not a marginal improvement but a categorical transformation. When a single engineer with an AI tool can produce what twenty engineers produced before, the economic problem is not merely solved. It is rendered absurd. The relationship between labor and output that structured every economy since the Neolithic Revolution has been severed, not gradually but abruptly, in the space of months.

And the response, documented with painful precision in The Orange Pill's early chapters, is not leisure. It is intensity. The builders do not stop. They accelerate. They post at three in the morning. They work through weekends. Their spouses write public letters asking for help. The tool that should have freed them has instead amplified the compulsion that Keynes failed to predict — the compulsion to produce, not because production is necessary but because production is identity, and identity cannot be set down the way a tool can.

Robert Skidelsky, Keynes's preeminent biographer, saw this clearly when he published The Machine Age in 2023. Keynes had "treated work purely as a cost," Skidelsky observed, and "economic theory treats all work as compelled." But work is also a source of meaning — often the primary source — and a theory that treats it only as cost will fail to predict what happens when the cost is eliminated. What happens is not liberation. What happens is crisis. The crisis of the builder who can do anything and does not know what is worth doing. The crisis of the specialist whose expertise has been commoditized overnight. The crisis of the parent who cannot answer the child's question: "What am I for?"

Keynes's essay becomes, in this light, not a failed prediction but a diagnostic instrument of extraordinary precision. The prediction failed because the diagnosis was incomplete. Keynes diagnosed the economic problem correctly and predicted its solution accurately. What he failed to diagnose was the psychological problem that would replace it — the problem that is now, in the age of AI, no longer a theoretical concern for future grandchildren but an immediate, practical emergency for living human beings.

The evidence accumulates from multiple directions. Arianna Huffington, writing in Fortune in late 2025, noted that AI was already saving workers three to five hours per week — and that eighty-three percent of those who gained time reported wasting at least a quarter of it. The liberated hours did not flow toward the arts of life. They flowed toward scrolling, toward aimless consumption, toward the specific modern restlessness that fills every empty minute with stimulation and calls the result productivity. The Samuel Centre for Social Connectedness documented the broader pattern: "We're working as much as ever, facing unprecedented levels of stress, and managing epidemic-levels of social isolation. It's clear that decades of compounding economic growth didn't bring about what Keynes called 'freedom from pressing economic cares.'"

The abundance arrived. The freedom did not. And the question Keynes posed — how to live wisely and agreeably and well — remains not merely unanswered but actively suppressed by a culture that has made the question itself seem like an indulgence.

Keynes was, above all, a practical thinker. He did not write essays for posterity. He wrote them to change policy, to redirect institutions, to intervene in a present that he found intolerable. The practical implication of his 1930 essay, read in the light of the AI transition, is not that his prediction was wrong and can be discarded. The practical implication is that his prediction was right about the conditions and wrong about the response, which means the response must be deliberately constructed rather than assumed.

The leisure will not arrive on its own. Human beings, given abundance, do not naturally turn toward contemplation. They turn toward more production, more intensity, more of the compulsive activity that fills the void where purpose used to live. If the AI transition is to produce anything resembling the future Keynes envisioned — a future in which human beings use the freedom that technology provides to cultivate the genuinely good life — that future must be built. Deliberately. Institutionally. With the same care and intelligence that went into building the technologies that made the future materially possible.

The Keynesian framework demands institutional response to market failure, and the failure of the market to convert abundance into flourishing is the most consequential market failure of the twenty-first century. The tools exist. The productivity exists. The material conditions for the good life are met, for hundreds of millions of people, several times over. What does not exist are the institutional structures — the labor norms, the educational philosophies, the cultural expectations, the governance frameworks — that would redirect human energy from compulsive production toward the permanent problem that Keynes identified and that remains, ninety-six years later, unsolved.

The economic problem is solved. The human problem has barely been stated. And the machines, indifferent to both, are getting faster.

---

Chapter 2: Animal Spirits in the Age of the Prompt

The most important sentence in John Maynard Keynes's *General Theory of Employment, Interest and Money* has nothing to do with interest rates, money supply, or aggregate demand. It concerns the psychology of the entrepreneur:


"Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as a result of animal spirits — a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities."

Animal spirits. Not rational calculation. Not expected utility maximization. Not the careful weighing of costs and benefits that classical economics assumed drove investment decisions. Something more primal: the gut-level conviction that this venture will succeed, that this product matters, that this bet is worth making — a conviction that persists even when the spreadsheet offers no support and the evidence is ambiguous and every rational indicator counsels caution.

Keynes introduced this concept not as a colorful metaphor but as a foundational critique of classical economics. The classical framework assumed that investment decisions were driven by rational expectations about future returns. Keynes demonstrated that this assumption was, in practice, impossible to sustain. The future is genuinely uncertain — not merely risky, where probabilities can be assigned, but uncertain in the radical sense that the relevant probabilities cannot be calculated at all, because the situation is novel and the past provides no reliable guide. Under conditions of radical uncertainty, rational calculation breaks down. And when rational calculation breaks down, what fills the void is animal spirits: confidence, optimism, the irrational conviction that action is better than inaction.

Without animal spirits, Keynes argued, nothing gets built. No factory is constructed. No railroad is laid. No venture is funded. The rational response to genuine uncertainty is paralysis — wait for more information, hedge every bet, delay every commitment. But economies cannot run on paralysis. They require investment, and investment requires the willingness to act on conviction rather than calculation. Animal spirits are the engine of economic dynamism. They are also, when unmoderated by institutional structure, the engine of speculative manias, asset bubbles, and the catastrophic collapses that follow when the spirits turn from optimism to panic.

The AI age has amplified the animal spirits to a pitch that Keynes could not have anticipated, for a reason so fundamental it belongs in the first paragraph of any analysis: AI tools have reduced the cost of acting on an entrepreneurial impulse to approximately zero.

Consider the structure of investment before AI. An entrepreneur had an idea. To test the idea, she needed to assemble a team, raise capital, build a prototype, find users, iterate, and sustain operations through the months or years between concept and revenue. Each step imposed a cost — financial, temporal, social — that served as a natural friction against impulsive action. Not every idea survived the friction. The ideas that did were, on average, stronger for having survived it. The friction was a filter. It selected for conviction deep enough to sustain the effort, resources sufficient to absorb the risk, and judgment adequate to navigate the implementation.

AI tools, and specifically the AI coding tools described in The Orange Pill's account of the winter of 2025, collapsed the filter. An entrepreneur with an idea can now test it in an afternoon. The prototype that once required a team and a runway now requires a conversation with a machine. The capital barrier has fallen. The skill barrier has fallen. The time barrier has fallen. What remains is the impulse — the animal spirit, the conviction that this idea matters — and the impulse no longer encounters any meaningful resistance between its formation and its expression.

The result is a Cambrian explosion of entrepreneurial activity. The developer profiled in The Orange Pill who built a revenue-generating product over the course of a year, solo, without writing a line of code by hand, is not an outlier. He is the new archetype. The animal spirits that once required institutional infrastructure to express — the team, the funding, the multi-year commitment — now express themselves at the speed of a prompt. The impulse to build has been liberated from every constraint except the quality of the impulse itself.

This liberation is exhilarating. It is also, in Keynesian terms, dangerous.

Keynes drew a distinction in Chapter 12 of the General Theory between enterprise and speculation. Enterprise is investment driven by genuine expectation of long-term return — the sober assessment of whether a venture will produce value over years and decades. Speculation is investment driven by the anticipation of short-term market movements — the bet not on whether the asset is valuable but on whether other people will think it is valuable tomorrow. Enterprise requires judgment. Speculation requires only speed and confidence.

When the cost of acting on an impulse approaches zero, the balance tips from enterprise toward speculation. Not because the entrepreneurs intend to speculate — most believe sincerely in the value of what they are building — but because the structural filter that once selected for enterprise has been removed. In the old world, the months of effort required to build a prototype served as a test: Was the conviction real? Was the judgment sound? Was the idea worth the resources? When the prototype takes an afternoon, the test vanishes. The impulse arrives, the tool executes, and the entrepreneur discovers only after the fact whether the conviction was enterprise or speculation — whether the thing that felt worth building actually was.

Recent research from the International Monetary Fund demonstrates the pattern at the macroeconomic level. Using natural language processing tools to analyze corporate earnings calls, IMF researchers found that AI narratives exhibit what Keynes would have recognized instantly as herd behavior. "Companies tend to adopt the narratives of their peers," the researchers reported. "When one company starts talking up the transformative power of AI, others seem to follow suit. This narrative contagion starts within groups of peer firms and then spreads to the aggregate level." The animal spirits are contagious. They spread from firm to firm, from sector to sector, amplifying as they go, until the aggregate level of optimism bears no stable relationship to the aggregate level of evidence.

This is the Keynesian beauty contest in real time. Keynes famously compared speculative markets to a newspaper competition in which contestants must choose the six prettiest faces from a hundred photographs — not the faces they personally find prettiest but the faces they think the other contestants will find prettiest. "It is not a case of choosing those which, to the best of one's judgment, are really the prettiest," Keynes wrote, "nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be."

The AI investment boom exhibits precisely this structure. Venture capitalists fund AI companies not primarily because they have calculated the expected return — how would one calculate the expected return on a technology whose capabilities are changing monthly? — but because they anticipate that other investors will value AI companies highly, which will produce the returns that justify the investment, which will attract more investors, which will drive valuations higher, until the cycle sustains itself on its own momentum. The conviction is real. The optimism is genuine. But the mechanism is speculative, not enterprising, and speculative mechanisms produce speculative outcomes: booms followed by corrections, enthusiasm followed by disillusionment, the familiar Keynesian cycle that no technological revolution has yet managed to avoid.
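The third-degree logic can be sketched as the standard "guess p times the average" game that economists use to formalize the beauty contest. Everything below, the parameters and the best-response rule alike, is an illustrative assumption rather than anything from Keynes:

```python
import random

# Each contestant tries to guess p times the average guess -- i.e., to
# anticipate what average opinion expects average opinion to be.
random.seed(0)
p = 2 / 3
guesses = [random.uniform(0, 100) for _ in range(50)]

for _ in range(10):
    # every contestant best-responds to what the average just was
    target = p * (sum(guesses) / len(guesses))
    guesses = [target] * len(guesses)

# Iterated anticipation drives the guesses toward zero: the "price"
# detaches entirely from any face's actual prettiness.
print(guesses[0])
```

The point of the toy model is the direction of travel: once everyone is pricing the opinions of others rather than the asset itself, the equilibrium has no anchor in fundamentals at all.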

The animal spirits operate at the individual level with equal force. The Orange Pill documents the phenomenon with uncomfortable specificity: the builder who posts at three in the morning, the spouse who writes publicly about a partner consumed by productive obsession, the engineer who describes the experience as simultaneously the hardest he has ever worked and the most fun he has ever had. These reports carry the unmistakable signature of animal spirits in their purest form — the spontaneous urge to action, the conviction that the work matters, the inability to stop because stopping would mean surrendering the momentum that feels like the most real thing in the world.

Keynes would have recognized the energy immediately. He would also have noted — with the dry precision that characterized his most devastating observations — that the experience of conviction and the reality of value are not the same thing. The entrepreneur animated by animal spirits feels certain that the work is important. The feeling is genuine. But feelings are not evidence, and the history of speculative manias is a history of genuine feelings producing spectacular wreckage.

The Keynesian prescription is not to suppress the animal spirits. Keynes understood, perhaps more deeply than any economist before or since, that the spirits are necessary. An economy without animal spirits is an economy in recession — cautious, paralyzed, unwilling to invest, waiting for certainty that will never arrive. The creative energy that builds products, funds ventures, and drives the entire AI ecosystem forward is animal spirits in action, and suppressing that energy would be as catastrophic as allowing it to run unchecked.

The prescription is institutional moderation. Structures that channel the spirits toward enterprise rather than speculation. Norms that distinguish between the impulse to build something valuable and the impulse to build something fast. Incentive systems that reward the long-term investor — the actor who studies the current and builds deliberately — rather than the speculator who rides the wave and exits before it breaks.

These structures do not currently exist in the AI economy. The market rewards speed. Venture capital rewards growth metrics. The culture celebrates the founder who shipped in a weekend, not the founder who spent a year studying whether the thing was worth shipping. The animal spirits are running without moderation, and the history of every previous speculative boom — from the South Sea Bubble to the dot-com crash to the crypto winter — suggests that the correction, when it comes, will be proportional to the excess that preceded it.

The trillion dollars of market value that vanished from software companies in early 2026 may be the first tremor. It may also be the correction itself — a one-time repricing as the market adjusts to the new reality of commodity code. Keynesian analysis cannot determine which, because the answer depends on animal spirits — on whether the aggregate conviction holds or breaks — and animal spirits are, by definition, beyond the reach of rational prediction. They are the irreducible uncertainty at the heart of every economic system, the force that makes economies move and the force that makes them crash, and no amount of AI-powered forecasting will tame them, because the spirits are not calculable. They are human.

What can be built are the dams — the institutional structures that determine whether the spirits produce a pool of productive investment or a flash flood of speculative waste. Keynesian economics has spent ninety years studying how to build these structures. The AI economy has spent approximately zero years implementing them. The gap between the knowledge and the practice is where the danger lives.

---

Chapter 3: The General Theory of AI Employment

The most consequential argument in twentieth-century economics was not about money, prices, or trade. It was about a logical error so deeply embedded in classical thought that it had gone unnoticed for over a century. The error was Say's Law — the proposition, attributed to the French economist Jean-Baptiste Say, that supply creates its own demand. In its simplest form: every act of production generates the income necessary to purchase the output, so that the economy as a whole can never suffer from a general shortage of demand. Overproduction in one sector might occur, but general overproduction was impossible. The market, left alone, would clear.

Keynes demolished this logic in 1936. *The General Theory of Employment, Interest and Money* demonstrated that aggregate demand — the total spending in an economy — was not automatically determined by aggregate supply. It was determined independently, by the spending decisions of consumers, investors, and governments, and those decisions were themselves shaped by expectations, animal spirits, and the institutional framework within which economic actors operated. An economy could reach a stable state in which production was below capacity, workers were involuntarily unemployed, and the market showed no tendency whatsoever to correct the problem on its own.

Involuntary unemployment — the condition of people willing and able to work at the prevailing wage who cannot find employment because the economy simply does not demand their labor — was, in Keynes's framework, not a temporary aberration but a permanent possibility. Markets do not automatically absorb displaced workers. Markets do not automatically generate demand for new skills. Markets do not automatically convert productivity gains into broadly distributed prosperity. These outcomes require deliberate institutional action — fiscal policy, monetary policy, labor protections, investment in human development — and in the absence of such action, the market's default setting is not equilibrium but underperformance.

The AI transition is the largest test of this argument since the Depression itself.

When Segal describes a twenty-fold productivity multiplier — one engineer, equipped with AI tools, producing the output that previously required twenty — the supply-side implications are dramatic and obvious. More output per worker. Lower unit costs. Faster development cycles. The productivity revolution that Keynes predicted in 1930, arriving at a speed and scale that even his most optimistic extrapolations did not contemplate.
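The supply-side arithmetic of that choice is stark enough to state in a few lines. A toy sketch, in unit terms (the figures simply follow the text's twenty-fold multiplier; "units" are arbitrary):

```python
# The board's choice under a twenty-fold productivity multiplier.
team_size, multiplier = 20, 20
baseline_output = team_size              # pre-AI: 20 engineer-units of output

keep_and_grow = team_size * multiplier   # 400 units at the same payroll
cut_to_one = 1 * multiplier              # 20 units at 5% of the payroll

print(baseline_output, keep_and_grow, cut_to_one)  # 20 400 20
```

Either path is locally rational: one multiplies output twenty-fold, the other holds output flat while multiplying margin. Which one the economy as a whole can survive is the demand-side question the chapter turns to next.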

The demand-side implications are less obvious and far more consequential.

If one engineer can do the work of twenty, then nineteen engineers are, at the execution level, surplus. Classical economics responds to this observation with what Keynes called "the classical postulates": the displaced workers will find new employment at lower wages, or they will acquire new skills suited to the new economy, or entirely new categories of work will emerge to absorb them. The market will clear. Say's Law holds. Supply — in this case, the supply of AI-amplified labor — will create its own demand.

Keynesian analysis identifies three specific mechanisms by which this reassuring logic fails under AI conditions.

The first mechanism is the speed of displacement relative to the speed of absorption. Previous technological transitions — mechanization, electrification, computerization — displaced workers over decades, allowing the economy time to generate new categories of work and the educational system time to prepare workers for them. David Autor's research at MIT has documented how new work emerged across the twentieth century, with roughly sixty percent of employment in 2018 consisting of job titles that did not exist in 1940. The mechanism is real. But it operates on a timescale of decades, and the AI displacement operates on a timescale of months. The gap between the speed of displacement and the speed of absorption is the space in which involuntary unemployment persists — not as a temporary adjustment but as a structural feature of the transition economy.

The second mechanism concerns the nature of the displaced labor. Previous automation targeted routine physical and cognitive tasks — the assembly line, the filing cabinet, the spreadsheet. The workers displaced were, broadly, those performing tasks that could be reduced to explicit rules and repetitive procedures. The work that remained was the work that required judgment, creativity, social intelligence — the tasks that resisted codification. Workers displaced from routine jobs could, with retraining, migrate to non-routine jobs where human advantage persisted.

AI undermines this migration path. The tools described in The Orange Pill do not merely automate routine tasks. They perform competently across domains that were previously considered non-routine: software architecture, design, analysis, writing, problem-solving that requires integrating information from multiple sources. The boundary between what machines can do and what only humans can do has moved, and it has moved upward, into territory that absorbs the very workers who would otherwise have been the destination for displaced routine workers. The ladder that previous generations climbed — from routine to non-routine work — has had several rungs removed from the middle.

The migration that The Orange Pill describes — from execution to judgment, from specialist to integrator, from builder to creative director — is real and important. The question Keynesian analysis poses is not whether this migration is possible in principle but whether it will occur at a pace and scale sufficient to absorb the displaced labor before the displaced workers exhaust their savings, their patience, and their capacity for reinvention.

The third mechanism is the most deeply Keynesian. It concerns not the labor market directly but the aggregate demand that the labor market serves. When workers are displaced, they lose income. When they lose income, they reduce spending. When they reduce spending, the businesses that sell to them lose revenue. When those businesses lose revenue, they reduce their own workforce, which reduces income further, which reduces spending further, in the self-reinforcing contractionary spiral that Keynes called the multiplier operating in reverse.

The Keynesian multiplier is symmetric. It amplifies expansion — each dollar of investment generates more than a dollar of economic activity as it circulates through the economy. But it also amplifies contraction — each dollar of lost income generates more than a dollar of lost economic activity as the reduction cascades through the system. An economy that displaces millions of workers from well-paying knowledge work does not merely have an employment problem. It has a demand problem. And demand problems, as Keynes demonstrated at length, do not solve themselves.
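The symmetry can be stated in the standard textbook form. Writing $c$ for the marginal propensity to consume (a generic parameter, not a figure drawn from this book), an initial change in investment $\Delta I$ circulates through successive rounds of spending:

$$\Delta Y \;=\; \Delta I + c\,\Delta I + c^2\,\Delta I + \cdots \;=\; \frac{\Delta I}{1-c}$$

The same geometric series runs with the sign reversed: a withdrawal of income $\Delta I$ subtracts $\Delta I/(1-c)$ from aggregate activity, which is why the loss of payroll income cascades into a loss of economic activity larger than the payroll itself.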

The analysis published by Modern Diplomacy in April 2026 articulated the structural version of this concern: "For centuries, economic systems have relied on a fundamental loop: Labor → Income → Demand → Production → Labor. AI puts terminal pressure on that bridge. Keynesianism short-circuits: Keynes assumed that productivity gains would eventually flow back to the masses through wages, fueling demand. AI allows for productivity without payroll, leaving the demand engine without fuel."

The argument is not that AI will eliminate all employment. It is that AI may decouple productivity from employment sufficiently to weaken the aggregate-demand mechanism that keeps the economy functioning. If the productivity gains flow to capital owners and the workers who direct AI tools — a relatively small group — while the workers displaced by AI tools lose income and reduce spending, the aggregate demand that sustains the broader economy contracts. The economy produces more with fewer workers, but the "fewer workers" have less income, which means less demand, which means less production is required, which means fewer workers still.

This is the Keynesian paradox applied to AI: individually, each firm's decision to replace twenty workers with one AI-augmented worker is rational. Collectively, when every firm makes the same decision, the aggregate demand that sustains all firms declines. The firm that cut nineteen workers saved on payroll. The economy that cut nineteen million workers lost nineteen million consumers.

The Orange Pill describes the organizational response that Keynesian theory would prescribe: the decision to keep the team, to invest the productivity gains in expanding capability rather than reducing headcount, to build the pool behind the dam rather than let the water rush downstream. The "vector pods" — small groups whose job is to decide what should be built, rather than to build it — represent a deliberate institutional choice to create demand for judgment-based labor within the organization, rather than waiting for the market to generate that demand on its own.

This choice is, in Keynesian terms, an act of enterprise over speculation. It sacrifices short-term margin for long-term capability. It invests in human development at a moment when the market is rewarding human displacement. It chooses the timeline of the institution over the timeline of the quarter.

But one company's choice does not constitute an economy-wide solution. The Keynesian argument is structural: the problem is not that individual firms make bad decisions but that individually rational decisions aggregate into collectively irrational outcomes. Each firm that converts its twenty-fold productivity gain into a ninety-five-percent headcount reduction is acting rationally from its own perspective. The aggregate result — an economy with dramatically higher output and dramatically lower employment income — is irrational from everyone's perspective, including the firms that made the cuts, because those firms depend on consumers who depend on employment income that no longer exists.

Keynesian theory prescribes institutional solutions to structural problems: fiscal policy that maintains aggregate demand during the transition, investment in education and retraining that prepares workers for the new categories of work, labor standards that prevent the race to the bottom, and governance frameworks that ensure the productivity gains are distributed broadly enough to sustain the demand that the economy requires. These are not radical proposals. They are the application of ninety years of Keynesian institutional economics to a new context — the recognition that markets, left alone, will not manage this transition humanely, and that the cost of leaving them alone will be measured not only in lost livelihoods but in lost aggregate demand, which is to say, in recession.

The General Theory was written to explain why classical economics failed during the Depression, and to prescribe the institutional response that classical economics could not conceive. The AI transition poses an analogous challenge. The classical response — the market will adjust, new jobs will emerge, supply will create its own demand — is, once again, theoretically possible and practically inadequate. The adjustment may come. But it will not come quickly enough, or broadly enough, or humanely enough, to spare the generation that must live through the transition. And that generation, as Keynes insisted, is the one that matters.

---

Chapter 4: The Paradox of Thrift Applied to Attention

Keynes was fond of paradoxes — not as rhetorical ornaments but as diagnostic instruments, tools for revealing the places where common sense breaks down and the intuitions that govern individual behavior produce, at scale, outcomes that no individual intended or desired.

The paradox of thrift was his most elegant demonstration. In its simplest form: when every household in an economy decides to save more, the aggregate result is not more savings but less. The mechanism is not mysterious. When households save more, they spend less. When they spend less, businesses earn less revenue. When businesses earn less, they reduce production and lay off workers. When workers are laid off, they earn less income. When they earn less income, they save less — not by choice but by necessity. The individually rational decision to save more, aggregated across millions of households, produces the collectively irrational outcome of less saving, less income, and less economic activity.
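The mechanism can be made concrete with a toy Keynesian-cross model — an illustrative sketch with invented parameter values, not anything drawn from Keynes's own text. In the simplest closed economy with fixed autonomous investment, equilibrium requires that planned saving equal investment, so when every household doubles its saving rate, income halves and aggregate saving does not rise at all:

```python
# Toy Keynesian-cross sketch of the paradox of thrift.
# Assumptions (illustrative only): investment I is fixed and
# autonomous, consumption is C = (1 - s) * Y, and equilibrium
# income satisfies Y = C + I, i.e. Y = I / s.

def equilibrium(saving_rate, investment=100.0):
    """Return (income, aggregate saving) at equilibrium.

    Y = (1 - s) * Y + I  =>  Y = I / s
    S = s * Y = I        (saving is pinned to investment)
    """
    income = investment / saving_rate
    saving = saving_rate * income
    return income, saving

before = equilibrium(saving_rate=0.25)  # households save a quarter of income
after = equilibrium(saving_rate=0.50)   # everyone tries to save twice as much

print(before)  # (400.0, 100.0) — income 400, aggregate saving 100
print(after)   # (200.0, 100.0) — income halved, saving unchanged
```

The attempt to save more succeeds for no one in aggregate: saving remains pinned to investment while income contracts, which is the paradox in its barest arithmetic form.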

The paradox does not mean that saving is wrong. It means that saving, when pursued by everyone simultaneously and without institutional coordination, defeats itself. The solution is not to prohibit saving but to construct institutions — fiscal policy, automatic stabilizers, counter-cyclical investment — that offset the contractionary effect of widespread thrift at the individual level with expansionary action at the institutional level.

This paradox has an exact analogue in the AI age, and it operates not on money but on attention.

The Keynesian paradox of attention thrift runs as follows: when every knowledge worker optimizes their cognitive time with AI tools — using the tools to filter, prioritize, accelerate, and automate every cognitive task — the aggregate result is not more genuine thought but less. Not because the tools are flawed, not because the workers are lazy, but because the optimization removes the slack in which genuine thought forms, and the removal of slack, aggregated across millions of workers, produces an economy that processes more information and generates less understanding.

The mechanism is structural, not moral. It does not depend on any individual making a bad decision. It depends on millions of individuals making individually sensible decisions that produce a collectively impoverished outcome.

Consider the architecture of cognitive work before AI tools entered it. A knowledge worker's day contained, embedded within the productive hours, a substantial quantity of what appeared to be waste: the walk to the coffee machine, the idle minute between meetings, the stare out the window while waiting for a file to load, the half-formed thought pursued during a commute, the lunch break spent in conversation that had no professional purpose. None of this was productive in any measurable sense. None of it appeared on a timesheet or contributed to a deliverable.

But it was not waste. It was medium — the cognitive equivalent of topsoil, the loose, unstructured material in which the roots of genuine thought take hold.

Neuroscience has documented, with increasing precision, what happens in the brain during periods of apparent idleness. The default mode network — the set of brain regions that activate when the mind is not engaged in a specific task — plays a critical role in memory consolidation, in the integration of disparate pieces of information, in the kind of background processing that produces the sudden insight that arrives in the shower or on the walk home. The default mode network requires what researchers call "attentional slack" — periods when the mind is not directed toward any particular task and is free to wander, to associate, to make the unexpected connection between two ideas that had not previously been linked.

The Berkeley researchers whose study The Orange Pill examines in its eleventh chapter documented the elimination of attentional slack with empirical specificity. They observed a phenomenon they termed "task seepage": the tendency for AI-accelerated work to colonize the very pauses that had previously served as cognitive rest periods. Workers were prompting during lunch breaks, generating outputs during the elevator ride, filling two-minute gaps between meetings with AI interactions that felt productive but eliminated the unstructured time in which the default mode network operates.

Each individual worker's decision to fill a pause with an AI-assisted task was rational. The task was there. The tool was available. The output was immediate. Why waste two minutes staring at a wall when those two minutes could produce a draft, a data pull, an analysis? The decision was rational, productive, and — aggregated across an organization, an industry, an economy — corrosive.

The mechanism parallels the paradox of thrift with precision. In the paradox of thrift, each household's decision to save reduces aggregate spending, which reduces aggregate income, which reduces the aggregate capacity to save. In the paradox of attention thrift, each worker's decision to optimize their cognitive time reduces aggregate attentional slack, which reduces aggregate background processing, which reduces the aggregate capacity for the integrative thinking that produces genuine understanding rather than mere information processing.

The organization in which every worker has optimized their cognitive time with AI tools is an organization that processes information at unprecedented speed and generates understanding at an impoverished rate. The meetings are efficient. The deliverables are prompt. The metrics are favorable. And the quality of the ideas — the depth of understanding that the organization brings to its decisions — has declined, invisibly, because the medium in which deep understanding grows has been paved over in the name of productivity.

Keynes would recognize this immediately. The paradox of thrift operates through a mechanism that is invisible to the individual participant. Each household that saves more sees its own savings increase — in the short run. The contractionary effect operates at the aggregate level, through channels that no individual household controls or observes. Similarly, each worker who optimizes their time sees their own output increase — in the short run. The attentional impoverishment operates at the aggregate level, through channels that no individual worker monitors.

The research on AI-intensified work confirms the pattern from multiple angles. The Berkeley researchers found that AI adoption led not to reduced workloads but to expanded scope — workers taking on tasks that had previously belonged to other roles, filling freed time with additional work rather than reflection. The pauses that had informally served as recovery periods — the cognitive equivalent of fallow fields in agriculture — were planted with another crop. The soil does not complain. It simply produces less, season after season, until the farmer notices that the yields have declined and cannot explain why, because each individual season's planting decision was rational.

The paradox extends beyond the workplace and into the cultural economy of attention. When every content producer uses AI to generate more content faster, the aggregate supply of content increases while the aggregate capacity to absorb it does not. The reader, the viewer, the listener is presented with more material than any human could process, and the response — individually rational, collectively destructive — is to skim rather than read, to scroll rather than engage, to consume quantity at the expense of quality. Each content producer's decision to use AI for efficiency is rational. The aggregate result is an attention economy in which no individual piece of content receives the sustained engagement necessary for genuine understanding.

This is the paradox of thrift operating in the currency of attention rather than money. And as in the monetary paradox, the solution is not to prohibit the individually rational behavior — one cannot tell workers to stop using AI tools, any more than one can tell households to stop saving. The solution is institutional: structures that protect attentional slack against the optimization imperative, the same way fiscal policy protects aggregate demand against the contractionary effect of widespread thrift.

The Berkeley researchers proposed exactly such structures: sequenced rather than parallelized work, protected pauses built into organizational rhythms, structured reflection periods that give the default mode network time to operate. The Orange Pill calls for "AI Practice" — organizational norms that treat attentional ecology with the same seriousness that environmental ecology now commands.

These are Keynesian prescriptions at the micro-institutional level: deliberate interventions that offset the aggregate consequences of individually rational behavior. They acknowledge that the optimization is real, the productivity gains are real, and the individual worker's experience of AI-augmented efficiency is genuine — and that all of these things, left unmoderated, produce an outcome that none of the participants intended and all of them will eventually suffer from.

The Keynesian point, characteristically, is about the relationship between individual rationality and collective outcomes. Classical economics assumed that individually rational behavior produced collectively rational results — that the sum of rational parts was a rational whole. Keynes demonstrated, in domain after domain, that this assumption was false. The sum of rational savings decisions produces irrational contraction. The sum of rational investment decisions, driven by animal spirits and herd behavior, produces irrational bubbles. The sum of rational attention-optimization decisions produces irrational cognitive impoverishment.

The fallacy of composition — the belief that what is true of each part must be true of the whole — is the error that Keynesian economics exists to correct. And it is the error that the AI age is committing at an unprecedented scale, in a currency more precious than money.

Attention is not a renewable resource in the way that optimists assume. A mind that has been trained, through months and years of AI-assisted optimization, to fill every pause with productive activity does not simply recover its capacity for deep thought when the tools are set aside. The capacity atrophies. The neural pathways that support sustained, undirected reflection weaken from disuse. The ability to be bored — genuinely, productively bored, the kind of boredom that is the soil in which curiosity grows — diminishes as the tolerance for unstructured time erodes.

This atrophy is the attentional equivalent of the deflationary spiral in the paradox of thrift. Once the contractionary process begins, it feeds itself. Less slack produces less background processing, which produces less insight, which produces less confidence in the value of unstructured time, which produces less slack. The spiral does not reverse on its own. It requires intervention — deliberate, institutional, sustained — to halt the contraction and rebuild the medium in which thought grows.

Keynes understood that paradoxes are not puzzles to be solved but structures to be managed. The paradox of thrift does not go away when it is understood. It persists, because the individually rational incentive to save persists, and the contractionary aggregate effect persists, and the tension between them is permanent. What changes, when the paradox is understood, is the institutional response: the willingness to construct counter-cyclical mechanisms that offset the individual incentive's aggregate consequences.

The paradox of attention thrift demands an equivalent institutional maturity. Not the naive hope that workers will spontaneously limit their AI use — they will not, any more than households voluntarily limit their savings during a recession. But the deliberate construction of organizational norms, educational practices, and cultural expectations that protect the attentional commons against the individually rational optimization that, left unmanaged, will deplete it. An economy can be rich in output and poor in understanding. A civilization can be productive and intellectually impoverished. The paradox of attention thrift explains how this happens — not through malice, not through stupidity, but through the aggregation of perfectly sensible decisions that produce, at scale, an outcome that no one chose.

Chapter 5: The Liquidity Trap of Capability

In the winter of 1932, the Bank of England had done everything the textbooks prescribed. Interest rates had been cut to two per cent, as low as they had stood at any point in the institution's two-hundred-and-thirty-eight-year history. Money was cheap. Credit was available. The mechanism by which monetary policy was supposed to stimulate investment — lower rates reducing the cost of borrowing, thereby encouraging firms to invest in new capacity — was operating exactly as designed.

Nothing happened.

Firms did not invest. Factories did not reopen. Workers did not return to employment. The money sat in bank vaults and corporate balance sheets, inert, available, and useless. The economy was saturated with the capacity to invest but could not convert that capacity into actual investment, because the problem was not the cost of money. The problem was that no firm could identify an investment worth making. Demand was insufficient. Expectations were bleak. The animal spirits had collapsed. And in the absence of conviction about the future, no interest rate — not even zero — could induce a rational firm to borrow money and build a factory whose output no one would buy.

Keynes called this condition a liquidity trap. The economy was flooded with liquidity — money available for investment at minimal cost — but the liquidity could not be converted into productive activity because the bottleneck had migrated from the supply of money to the demand for output. Adding more money to a system that was already saturated with money was, in the phrase the period made famous, like pushing on a string. The mechanism transmitted the push but produced no movement.

The AI economy of 2026 has entered an analogous trap, and it operates not on financial capital but on productive capability.

The capability is abundant. AI tools have multiplied the productive capacity of individual workers by factors that vary from study to study but cluster around the extraordinary. The twenty-fold multiplier described in The Orange Pill's account of the Trivandrum training is not an outlier; it is consistent with reports from organizations across the technology sector and, increasingly, across every sector that employs knowledge workers. A single person with access to Claude Code, GPT-4, or comparable tools can now produce — in code, in analysis, in design, in writing — output that a team of five or ten or twenty produced before. The capacity to build has been democratized, accelerated, and multiplied to a degree that no previous technological transition achieved in so compressed a period.

And yet the organizations deploying these tools are discovering, with increasing frequency, that the additional capability does not automatically translate into additional value.

The mechanism of the capability trap mirrors the mechanism of the liquidity trap with structural precision. In the liquidity trap, the bottleneck is not money but the judgment to deploy it wisely — the conviction that a specific investment will generate returns in a specific, uncertain future. In the capability trap, the bottleneck is not productive capacity but the judgment to direct it wisely — the conviction that a specific product, feature, or initiative is worth building for a specific, uncertain market.

An organization that equips its workforce with AI tools and instructs them to "be more productive" is, in Keynesian terms, adding liquidity to a system without addressing the demand constraint. The workers produce more. But more of what? The tools amplify whatever direction they are given, and if the direction is unclear — if the organization does not know what is worth building, for whom, and why — the amplified output is amplified confusion. More features that no user requested. More analyses that no decision-maker reads. More code that solves problems no customer has. The organization is pushing on a string, and the string is getting longer with every subscription renewal.

The evidence accumulates from organizational practice. The Berkeley researchers whose study The Orange Pill examines documented a specific and telling pattern: workers equipped with AI tools expanded their scope, taking on tasks that had previously belonged to other roles, filling freed time with additional work rather than strategic reflection. The expansion was real. Whether the expanded work was valuable — whether it addressed genuine market demand or merely filled the available capacity — was a question the study could not answer from the outside, because busy and productive look identical to an observer regardless of whether the busyness serves a strategic purpose.

Keynesian analysis provides the diagnostic framework. In a liquidity trap, the correct response is not to add more liquidity. It is to address the demand constraint directly — through fiscal stimulus, through public investment, through institutional action that creates the demand the market has failed to generate on its own. In a capability trap, the correct response is not to add more capability. It is to address the judgment constraint directly — through investment in the human capacity to decide what is worth building.

This is where the Keynesian reading of The Orange Pill converges most precisely with the book's own argument. Segal's distinction between "doing old things faster" and "attempting things that would never have been tried before" is, translated into Keynesian terms, the distinction between adding liquidity to a saturated system and generating new demand. The engineer who uses AI to write boilerplate code faster is adding liquidity — doing the same thing with less effort, producing the same output at lower cost. The engineer who uses AI to build a product she could never have conceived alone is generating new demand — creating something the market did not know it wanted until it existed.

The first activity is subject to the capability trap. The second escapes it. And the difference between them is not a property of the tool. It is a property of the judgment that directs the tool.

Keynes's General Theory devoted substantial attention to what he called "the marginal efficiency of capital" — the expected rate of return on an additional unit of investment. The concept is directly applicable to the capability trap. The marginal efficiency of an additional AI tool, or an additional AI subscription, or an additional hour of AI-assisted work, depends entirely on whether the direction in which the tool is pointed generates genuine value. When the direction is sound — when the tool is aimed at a genuine problem, serving a genuine need — the marginal efficiency is extraordinary. When the direction is unclear — when the tool is aimed at whatever task happens to be available, or at the replication of existing output at higher speed — the marginal efficiency approaches zero regardless of how much capability the tool provides.
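In the General Theory's own formal terms (its eleventh chapter), the marginal efficiency of capital is the discount rate $m$ that equates the present value of an asset's expected stream of yields $Q_1, Q_2, \ldots, Q_n$ to its supply price $P$:

$$P \;=\; \sum_{r=1}^{n} \frac{Q_r}{(1+m)^{r}}$$

Everything lives in the expected yields $Q_r$ — judgments about an uncertain future. The raw power of the tool enters the formula nowhere except through those expectations, which is precisely why capability without direction has a marginal efficiency approaching zero.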

The organizations trapped in the capability surplus are not suffering from a lack of tools. They are suffering from a lack of what Keynes, in a different context, called "effective demand" — the specific, directed need that converts capability into value. An organization that cannot articulate what it is trying to build, for whom, and why, will not escape the trap by purchasing more AI subscriptions. It will escape the trap only by developing the judgment — the strategic clarity, the product intuition, the capacity to distinguish between what can be built and what should be built — that converts accumulated capability into directed action.

The "vector pods" that The Orange Pill describes — small groups tasked not with building but with deciding what deserves to be built — represent one institutional response to the capability trap. They are, in effect, the organizational equivalent of Keynesian fiscal stimulus: a deliberate investment in the capacity to generate effective demand within the organization, rather than relying on the market (or the workforce's own initiative) to generate it spontaneously.

But the capability trap operates at scales larger than the individual organization. At the industry level, the democratization of productive capability has produced a flood of output — applications, platforms, tools, content — that exceeds the market's capacity to absorb. When anyone can build a product in a weekend, the supply of products outruns the demand for them, and the marginal value of each additional product declines. This is Say's Law failing in real time, at the micro-level of the product market: the supply of AI-enabled products does not create its own demand. It creates a glut, and the glut drives down the return on production, which discourages the investment in judgment and quality that would distinguish valuable products from noise.

The macroeconomic implications extend further. If organizations respond to the capability trap by reducing headcount — converting the twenty-fold multiplier into a ninety-five-percent workforce reduction, as the straightforward arithmetic invites — the aggregate demand constraint tightens. Fewer workers earning income means less spending, which means less demand for the products the remaining workers are building with their amplified capability. The capability trap and the demand constraint reinforce each other in a contractionary spiral that Keynes would have recognized instantly: more capability producing less value, more output generating less demand, more efficiency delivering less prosperity.

The escape from a liquidity trap requires what Keynes called a change in the "state of long-term expectation" — a shift in the collective conviction about the future that restores the animal spirits and converts hoarded liquidity into active investment. The escape from a capability trap requires an analogous shift: a change in the collective capacity for judgment that converts hoarded capability into directed value creation. This shift cannot be produced by the tools themselves. It must be produced by the human beings who use them — by their education, their experience, their willingness to ask the question that no tool can answer: What is this capability for?

Keynes, who spent his career arguing that institutional design determines economic outcomes, would insist that the answer to the capability trap is not better tools but better institutions. Educational institutions that cultivate judgment alongside technical skill. Organizational structures that invest in strategic clarity alongside operational efficiency. Market incentives that reward the creation of genuine value alongside the production of mere output. Governance frameworks that distinguish between capability deployed wisely and capability deployed reflexively.

The capability is not the problem. The capability is extraordinary, and its expansion represents a genuine increase in human potential. The problem is the assumption — the same assumption that classical economics made about money, and that Keynes spent his career refuting — that more capability automatically produces more value. It does not. It produces more value only when directed by judgment adequate to the capability's power. And judgment, unlike capability, cannot be purchased, downloaded, or subscribed to. It must be cultivated, slowly, through the kind of investment in human development that the market, left to its own devices, chronically underprices.

The string is long. The push is powerful. The question is whether anyone is pulling on the other end.

---

Chapter 6: Sticky Wages, Sticky Identities, and the Friction of Transition

Keynes's theory of sticky wages begins with an observation so mundane that classical economists had overlooked it for a century: workers resist pay cuts.

Not because they are irrational. Not because they fail to understand the economic logic. But because a wage cut carries meaning that transcends its arithmetic. A ten-percent reduction in pay is experienced not merely as ten percent less income but as a judgment — an evaluation of the worker's worth that violates the implicit contract between employer and employee, the understanding that work performed competently is work that deserves at least the same compensation it received before. The classical economist sees a price adjustment. The worker sees a demotion. The difference between these two perceptions is the difference between a theory that works on paper and an economy that works in practice.

Keynes demonstrated that this stickiness — the resistance of wages to downward adjustment — had macroeconomic consequences that classical economics could not accommodate. Because wages did not fall smoothly to clear the labor market, unemployment persisted. Workers who would have been employed at a lower wage remained unemployed at the prevailing wage, and the market showed no tendency to correct the imbalance, because the stickiness was not a market imperfection that competition would erode but a structural feature of labor markets rooted in the psychology of fairness and the sociology of status.

The AI transition has revealed a stickiness far deeper than wages. Professional identity — the sense of self constructed through years of specialized practice, the answer to the question "What do you do?" that locates the individual in a social world — is stickier than any wage by an order of magnitude. And this stickiness is the primary friction of the transition, the force that slows adaptation, intensifies resistance, and determines whether the human cost of the transition is measured in months of discomfort or decades of dislocation.

The mechanism of identity stickiness operates through a logic that economic models are poorly equipped to capture but that anyone who has watched a skilled professional confront obsolescence will recognize immediately.

A wage is a number. It can be adjusted, negotiated, supplemented, replaced. The worker who accepts a pay cut loses income but retains identity — she is still a software engineer, still a graphic designer, still a financial analyst. The professional infrastructure of her life — her colleagues, her vocabulary, her daily rhythms, her sense of competence, her answer to "What do you do?" — remains intact. The loss is material. It is not existential.

An identity is not a number. It is an architecture built through years of investment — cognitive, emotional, social — that determines not what the individual earns but who the individual is. The senior engineer who has spent fifteen years writing Python has not merely accumulated a skill. He has accumulated a world: a community of practice, a vocabulary, a set of aesthetic judgments about what constitutes good code, a reputation among peers who value the same things he values, a daily experience of competence that organizes his relationship to work and, through work, to himself.

When AI tools commoditize the execution of Python, the economic logic says: adapt. Acquire new skills. Move up the value chain. Become a judgment-based worker, a creative director, a strategic thinker. The logic is sound. The prescription is, in principle, correct. The Orange Pill makes this argument persuasively, describing the migration from execution to judgment as the natural trajectory of the AI-augmented professional.

But the prescription ignores what Keynes understood about wages and what applies with even greater force to identity: the adjustment is not smooth. It is not painless. It does not occur at the speed the market demands, because the individual is not merely changing skills. The individual is changing who they are.

The Python developer who is told to become a "creative director" faces a transition that is not analogous to learning a new programming language. It is analogous to emigrating to a new country — one where the language is different, the status hierarchy is unfamiliar, the markers of competence are unrecognizable, and the community of practice that provided belonging and validation has been dissolved. The technical skills may transfer. The identity does not. And the period between the dissolution of the old identity and the construction of the new — the interval in which the individual is neither the person they were nor the person they are becoming — is the most psychologically dangerous period of the transition.

Keynes observed that workers resist wage cuts not because they are foolish but because they are human — because the meaning of a wage extends beyond its purchasing power to include its social and psychological significance. The same logic applies to professional identity, with greater force. Workers resist identity dissolution not because they are rigid or fearful but because identity is the structure that organizes experience, and the dissolution of that structure is experienced as a kind of death.

The Orange Pill documents the behavioral evidence of this stickiness in its account of the responses to the AI threshold. The senior software architect who compares himself to a master calligrapher watching the printing press arrive is not making an economic calculation. He is grieving. The developers who flee to the woods, reducing their cost of living in anticipation of a livelihood they believe is disappearing, are not executing a rational adaptation strategy. They are enacting the flight response — the primal retreat from a threat too large to confront — that identity dissolution triggers.

The fight-or-flight dichotomy that The Orange Pill describes maps, in Keynesian terms, onto the distinction between adjustment and rigidity. The fighters adapt — they lean into the new tools, develop the judgment that the new economy rewards, reconstruct their professional identities around a higher-level competence. The flighters resist — they retreat, refuse, insist that the old skills must still be worth what they used to be worth. Both responses are psychologically coherent. Neither is economically sufficient in isolation. And the proportion of the workforce that fights versus flees will determine the speed and character of the transition more than any technological variable.

Keynesian economics offers a framework for understanding why the proportion matters at the aggregate level. An economy in which a large fraction of displaced workers flee — retreating from the labor market, reducing their economic participation, withdrawing into lower-cost lifestyles that minimize their engagement with the productive economy — is an economy experiencing a contractionary shock. The flighters are, in economic terms, equivalent to households that increase their savings rate during a recession: each individual's decision is rational, but the aggregate effect is reduced demand, reduced economic activity, and a self-reinforcing downward spiral.

An economy in which a large fraction of displaced workers fight — adapting quickly, acquiring new competences, reintegrating into the productive economy in new roles — is an economy that absorbs the shock and converts it into expansion. The fighters generate demand for the new categories of work. They create the market for judgment-based roles. They demonstrate, by example, that the transition is survivable, which encourages others to fight rather than flee, which accelerates the adaptation, which reinforces the expansion.

The policy implication is direct: the institutions that support the transition — retraining programs, income support during the adjustment period, organizational cultures that value identity reconstruction — are not social welfare expenditures. They are macroeconomic investments. They increase the proportion of fighters to flighters, which increases the speed of adaptation, which reduces the depth and duration of the contractionary period, which benefits the entire economy, including the firms and investors whose short-term incentives might otherwise have favored the purely extractive response of headcount reduction.

Keynes argued, against the classical orthodoxy of his time, that the government had a responsibility to manage transitions that the market could not manage on its own. The stickiness of wages was one such transition — when wages would not fall to clear the labor market, the government had to intervene through fiscal and monetary policy to maintain demand at a level consistent with full employment. The stickiness of identity is a deeper, more personal, and more psychologically complex version of the same problem. The market is demanding that millions of skilled professionals dissolve their identities and reconstruct them at speed. The market is also offering, as support for this reconstruction, approximately nothing.

The historical parallel that The Orange Pill draws to the Luddites is, in this light, not merely illustrative but diagnostic. The original Luddites possessed genuine skill, genuine knowledge, genuine mastery of a craft that had taken years to develop. They were not resistant to change because they were foolish. They were resistant because the change required the dissolution of everything they had built — not just their livelihoods but their identities, their communities, their sense of purpose, their answer to the question of what they were for.

What the Luddites lacked was not intelligence or adaptability. What they lacked was institutional support for the transition — the retraining, the income support, the time, the social infrastructure that would have allowed them to navigate the interval between the old identity and the new without being destroyed by it. In the absence of that support, they broke machines — a response that was emotionally coherent and strategically catastrophic, and that accomplished nothing except to accelerate the hostility of the institutions whose support they most needed.

The AI transition will produce its own Luddites. It is already producing them — not in the dramatic form of machine-breakers but in the quieter form of skilled professionals who have withdrawn from the conversation, who have retreated to the woods, who have stopped participating in the economy that is being rebuilt around them. Their withdrawal is, in Keynesian terms, a contractionary force — less engagement, less demand, less economic participation — and it will persist for exactly as long as the institutional support for identity reconstruction remains absent.

The stickiness is not a character flaw. It is a structural feature of human psychology operating under conditions of radical change. And the institutions that address it — that provide the time, the support, the models, and the community within which new identities can be constructed — are not luxuries. They are the infrastructure of adaptation, as essential to the AI transition as fiber-optic cables and GPU clusters, and as chronically under-invested.

---

Chapter 7: Effective Demand in a World of Infinite Supply

Jean-Baptiste Say published his Treatise on Political Economy in 1803 and embedded within it a proposition that would dominate economic thought for over a century: production creates its own market. In a functioning economy, the act of producing goods generates the income — in wages, profits, and rents — necessary to purchase those goods. General overproduction was therefore impossible. Individual markets might experience temporary surpluses, but the economy as a whole would always tend toward equilibrium, because every unit of supply automatically generated the demand required to absorb it.

Say's Law, as it came to be known, was the foundation upon which classical economics constructed its entire theory of employment, output, and prosperity. It was also, as Keynes demonstrated in 1936, wrong.

The demolition occupies the opening chapters of the General Theory and constitutes, in the judgment of economic historians, the most consequential theoretical argument of the twentieth century. Keynes showed that Say's Law failed because it ignored the possibility of hoarding — the diversion of income into savings that were not invested, into liquidity held for its own sake, into the precautionary reserves that rational agents accumulate when the future is uncertain. When income is hoarded rather than spent or invested, the circular flow that Say assumed — production generates income generates demand generates production — breaks down. Demand falls short of supply. Output contracts. Workers are laid off. The economy settles into an underperformance that the market, on its own terms, has no mechanism to correct.

The AI economy is rediscovering Say's error in real time, and the tuition is expensive.

When the cost of producing software approaches zero — when the tools described in The Orange Pill allow a single person to build in a weekend what a team of twenty built in a quarter — the supply side of the software market is, for practical purposes, solved. Anyone with access to the tools and the ability to describe what they want can produce software. The barrier to entry has not merely been lowered. It has been, for a significant class of applications, eliminated.

Classical economic logic predicts that this supply explosion should produce a corresponding demand explosion. More software means more products. More products mean more choices for consumers. More choices mean more economic activity. Supply creates its own demand. The economy expands.

The trillion dollars of market value that vanished from software companies in early 2026 — the Death Cross documented in The Orange Pill's nineteenth chapter — is the empirical refutation. The supply expanded enormously. The demand did not follow. The market repriced the software industry not because software had become less useful but because the assumption that producing software was itself a source of durable value had been demolished. When anyone can produce software, the act of producing software no longer commands a premium. The supply has been commoditized, and commoditized supply, as any economist knows, earns commodity returns — which is to say, returns that approach the marginal cost of production, which in the case of AI-generated code is approaching zero.

Say's Law fails in the AI economy for the same reason it failed in the Depression: the circular flow breaks down. In the Depression, the breakdown occurred because income was hoarded as precautionary savings rather than spent. In the AI economy, the breakdown occurs because the income that software production used to generate — the salaries of the developers, the consulting fees of the implementers, the licensing revenue of the platform providers — is evaporating as AI tools replace the labor that generated it. The income that once circulated from software companies to employees to consumer spending to the broader economy is being compressed, concentrated, and in some cases eliminated entirely.

This is not a hypothetical. The organizational arithmetic that The Orange Pill describes — one engineer with AI doing the work of twenty — translates directly into an income distribution question. If the organization reduces headcount by ninety-five percent and captures the productivity gain as profit, nineteen salaries disappear from the economy. Nineteen households reduce their spending. The businesses that served those households lose revenue. The contractionary multiplier operates, and the economy that produces more output employs fewer people and generates less demand.
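The arithmetic of this paragraph can be made explicit with a short sketch applying the standard Keynesian multiplier, 1/(1 − MPC), to the nineteen lost salaries. The salary figure and the marginal propensity to consume below are illustrative assumptions, not figures from the text.

```python
# Sketch of the contractionary multiplier applied to the chapter's
# headcount arithmetic. The $120,000 salary and the marginal propensity
# to consume (MPC) of 0.8 are illustrative assumptions only.

def total_demand_loss(salaries_cut: int, salary: float, mpc: float) -> float:
    """Each dollar of lost income removes 1 / (1 - mpc) dollars of
    aggregate spending, as the shock propagates through successive
    rounds of reduced consumption."""
    initial_loss = salaries_cut * salary
    multiplier = 1.0 / (1.0 - mpc)
    return initial_loss * multiplier

loss = total_demand_loss(salaries_cut=19, salary=120_000, mpc=0.8)
print(f"Initial income removed:   ${19 * 120_000:,.0f}")   # $2,280,000
print(f"Aggregate demand removed: ${loss:,.0f}")           # $11,400,000
```

Under these assumed values, the nineteen eliminated salaries remove five times their own value from aggregate demand before the rounds of reduced spending die out.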

The argument that new categories of work will emerge to absorb the displaced workers is the modern formulation of Say's Law — the claim that the supply of AI-augmented labor will create its own demand. And it may be correct, in the long run. David Autor's research demonstrates that new work categories have consistently emerged across the history of technological change. The question is not whether new work will emerge but whether it will emerge quickly enough, broadly enough, and at high enough wages to sustain the aggregate demand that the economy requires.

Keynes's concept of effective demand provides the analytical framework. Effective demand is not what people want. It is what they are willing and able to pay for. The distinction is critical. A displaced software developer may want employment. A corporation may want that developer's judgment-based services. But effective demand requires that the corporation is willing to pay for those services at a price that sustains the developer's participation in the economy — and that willingness depends on the corporation's own revenue, which depends on the spending of consumers, which depends on the employment income of those consumers, which depends on whether the AI transition has contracted or expanded the aggregate income base.

The circular logic is the point. Keynesian economics is a system of simultaneous determination, in which output depends on demand, demand depends on income, and income depends on output. No variable can be determined independently of the others. And this means that the transition cannot be analyzed one firm at a time. The firm that cuts nineteen workers improves its own profitability. The economy in which every firm cuts nineteen workers contracts. The fallacy of composition — the assumption that what is true of each part must be true of the whole — is the error that leads from individual rationality to collective dysfunction.
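The simultaneous determination described above can be sketched as a fixed-point iteration: output depends on consumption, consumption depends on income, and the system settles only where the circle closes. The investment level and marginal propensity to consume below are arbitrary assumptions for illustration.

```python
# Sketch of simultaneous determination: iterating the circular flow
# Y = C + I, C = mpc * Y converges to the fixed point Y* = I / (1 - mpc).
# The values of mpc and investment are illustrative assumptions.

mpc, investment = 0.8, 100.0
Y = 0.0
for _ in range(200):
    C = mpc * Y          # consumption depends on income
    Y = C + investment   # output depends on demand
print(round(Y, 2))       # converges toward 100 / (1 - 0.8) = 500.0
```

No single equation fixes Y on its own; the level of output is a property of the whole loop, which is why the transition cannot be analyzed one firm at a time.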

The Death Cross is not merely a repricing of software companies. It is, read through the Keynesian lens, the market's belated recognition that supply does not create its own demand. The companies that were valued on the assumption that producing code was a durable source of value are being repriced because the assumption has collapsed. The code is a commodity. The value has migrated — but migrated where?

Keynesian analysis identifies three destinations for the value that has migrated away from code.

The first is ecosystem. Companies that built not merely software but institutional infrastructure — data layers, integration networks, workflow assumptions embedded in organizational practice, regulatory compliance certifications, audit trails — possess value that AI cannot replicate in an afternoon. The ecosystem is, in Keynesian terms, a form of capital that retains its marginal efficiency because its scarcity is not artificial. Building an ecosystem requires time, trust, institutional relationships, and the accumulated judgment of thousands of interactions with real users facing real problems. These cannot be compressed into a weekend of AI-assisted coding. They represent genuine, durable value.

The second is judgment. The capacity to decide what should be built, for whom, and why — the "creative director" function that The Orange Pill identifies as the scarce resource of the new economy — commands a premium precisely because it is the bottleneck. When production is abundant and free, the constraint shifts to direction. The person or organization that can direct abundant production toward genuine need — that can convert capability into value, rather than merely into output — holds the position that the code-writer once held. The difference is that judgment is harder to cultivate, slower to develop, and more resistant to commoditization than any technical skill.

The third is institutional trust. In a market flooded with AI-generated products, the consumer's problem is not finding a product but evaluating one. When anyone can build anything, the question becomes: should I trust this particular product with my data, my workflow, my business? Trust is, in economic terms, an information good — it reduces the cost of evaluation, lowers the risk of adoption, and enables transactions that would otherwise be blocked by uncertainty. Companies that have accumulated institutional trust — through years of reliable performance, through regulatory compliance, through reputation — possess an asset that appreciates in value as the supply of products increases, because the need for trustworthy evaluation increases in direct proportion to the volume of options.

These three destinations — ecosystem, judgment, trust — are the forms of effective demand that survive the commoditization of code. They are also, not coincidentally, the forms of value that require the most human investment. Ecosystems are built by humans over years. Judgment is cultivated through experience and reflection. Trust is earned through consistent behavior across thousands of interactions. None of these can be purchased, downloaded, or generated by a machine. They are, in a specific and precise sense, the remaining province of human contribution in an economy of abundant machine output.

The Keynesian prescription for the AI economy is, therefore, not to resist the commoditization of code — that resistance is as futile as the Luddites' resistance to the power loom — but to invest, deliberately and at scale, in the forms of value that survive commoditization. Investment in education that cultivates judgment. Investment in institutions that build trust. Investment in the organizational structures that sustain ecosystems. Investment, above all, in the human capacities that convert abundant capability into genuine flourishing.

Say's Law failed in 1936 because the classical economists assumed that production automatically generated the demand required to absorb it. Say's Law is failing again in 2026, in the specific domain of the software economy, because the technology optimists assume that capability automatically generates the value required to justify it. Neither assumption survives contact with the Keynesian insight: that demand is not automatic, that value is not inherent in output, and that the institutions which connect production to prosperity must be deliberately constructed and continuously maintained.

The supply is infinite. The demand is not. And the gap between them is where the policy challenge lives.

---

Chapter 8: Uncertainty, Probability, and the Machine That Calculates Both

Before Keynes was an economist, he was a philosopher of probability. His first great intellectual project, begun as a fellowship dissertation and published in 1921, was not The Economic Consequences of the Peace or any treatise on monetary policy. It was A Treatise on Probability, a book-length argument that probability is not a frequency ratio but a logical relation between evidence and conclusion — a measure of the rational degree of belief that a given body of evidence warrants in a given proposition.

The distinction seems technical. It is, in fact, the most consequential idea in the Keynesian system, and the one most directly relevant to the AI transition.

The frequency interpretation of probability — the interpretation that dominates statistics, machine learning, and the computational infrastructure of every large language model — holds that the probability of an event is the ratio of its occurrence to the total number of trials, in the limit as the number of trials approaches infinity. The probability of a fair coin landing heads is 0.5, because in an infinite series of tosses, heads will occur half the time. This interpretation works superbly for repeatable events with stable underlying distributions — dice, cards, insurance actuarial tables, quality control on assembly lines.
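The frequency interpretation admits a direct illustration: simulate the repeatable event and watch the ratio of occurrences to trials converge. A minimal sketch, with an arbitrary seed for reproducibility:

```python
import random

random.seed(0)

# The frequency interpretation: probability as the limiting ratio of
# occurrences to trials. For a repeatable event drawn from a stable
# distribution, the observed ratio converges as trials accumulate.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} tosses: {heads / n:.4f}")
```

This is exactly the setting in which the interpretation works: the event repeats, the distribution is stable, and the limit exists. The decisions Keynes cared about satisfy none of these conditions.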

Keynes argued that this interpretation fails for the decisions that matter most. Economic decisions — whether to invest, whether to hire, whether to launch a product, whether to enter a new market — are not repeatable events drawn from stable distributions. They are singular decisions made under conditions of radical uncertainty, where the relevant probabilities cannot be calculated because there is no series of comparable events from which to derive a frequency.

"By 'uncertain' knowledge," Keynes wrote in a 1937 article clarifying the argument of the General Theory, "I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know."

We simply do not know. Four words that demolish the pretension of any model — statistical, computational, or artificial — to predict the outcomes of genuinely novel situations.

Large language models are, at their foundation, frequency machines. They are trained on vast corpora of text and learn to predict the probability of the next token — the next word, the next phrase, the next syntactic unit — given the tokens that preceded it. The predictions are astonishingly good. The models have absorbed enough of human language, human reasoning, and human knowledge to produce outputs that are, in many domains, indistinguishable from the output of a competent human expert. They calculate conditional probabilities across billions of parameters with a speed and precision that no human mathematician could approach.
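The phrase "frequency machines" can be made concrete with a deliberately crude sketch: a bigram model that predicts the next token from nothing but observed frequencies. Real language models condition on long contexts through billions of learned parameters, but the underlying principle, next-token probabilities estimated from prior text, is the one described above. The toy corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# A drastically simplified "frequency machine": next-token prediction
# from raw bigram counts over a toy training corpus.
corpus = "supply creates its own demand and demand creates its own supply".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_token_probs(token):
    """Conditional distribution over next tokens, estimated purely
    from observed frequencies in the corpus."""
    counts = follow[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_token_probs("its"))      # {'own': 1.0}
print(next_token_probs("creates"))  # {'its': 1.0}
```

Within the corpus, the predictions are perfect. Ask the model about a token it has never seen and it has literally nothing to say, which is the honest version of the failure the next paragraphs describe.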

And they are, in the specific sense that Keynes identified, blind to the uncertainty that matters most.

The machine's predictive facility operates within the distribution of its training data. When the situation is sufficiently similar to situations the model has encountered before, its predictions are reliable, often remarkably so. When the situation is genuinely novel — when it falls outside the distribution of prior experience, when the relevant variables have no historical precedent, when the decision concerns a future that resembles no past — the model's confidence does not decline proportionally. It continues to produce fluent, confident output, because fluency and confidence are properties of the generative mechanism, not properties of the epistemic warrant.
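The structural point, that confidence belongs to the generative mechanism rather than to the epistemic warrant, is visible in the softmax function that converts a model's raw scores into a probability distribution: the output always sums to one, so the mechanism must place confident mass somewhere, even on input resembling nothing in its training data. A minimal sketch, with arbitrary scores standing in for a model's logits:

```python
import math
import random

random.seed(1)

def softmax(logits):
    """Convert raw scores into a probability distribution.
    By construction the result sums to 1, whatever the input."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Arbitrary scores: the distribution still sums to 1 and still has a
# confident top choice. Nothing in the mechanism can output "I do not
# know"; fluent confidence is structural, not epistemic.
arbitrary_logits = [random.gauss(0, 3) for _ in range(5)]
probs = softmax(arbitrary_logits)
print(max(probs), sum(probs))
```

The mechanism has no channel through which to register that its input falls outside the distribution it was trained on; it can only redistribute probability mass, never withhold it.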

The Orange Pill identifies this failure mode with precision: "Claude's most dangerous failure mode is exactly this: confident wrongness dressed in good prose." The smoothness of the output conceals the absence of genuine understanding. The model does not know what it does not know — not because it is poorly designed but because the architecture that produces its extraordinary facility with calculable probability provides no mechanism for registering the boundary between calculable probability and radical uncertainty.

This is the Keynesian problem, translated from the domain of economic forecasting to the domain of artificial intelligence. The machine calculates superbly within the known. It is structurally incapable of recognizing the unknown — the genuinely novel situation in which the past provides no reliable guide and the appropriate response is not a confident prediction but an honest admission of ignorance.

The danger is not that the machine will be wrong. Any tool can be wrong, and experienced users develop calibration — the intuitive sense of when to trust the tool and when to verify independently. The danger is that the machine's facility with the known will create an illusion of competence with the unknown. The confident output will be mistaken for genuine understanding. The calculated probability will be confused with knowledge. And the decisions that matter most — the decisions made under conditions of radical uncertainty, where the stakes are highest and the past is least informative — will be made with a false confidence that the machinery of calculation has installed.

Keynes observed that economic actors, confronted with radical uncertainty, respond in characteristic ways. They follow convention — doing what others are doing, on the assumption that the crowd knows something the individual does not. They anchor to the recent past — extrapolating current conditions forward, on the assumption that the future will resemble the present. They rely on animal spirits — acting on gut conviction when rational calculation is impossible. Each of these responses is understandable. None is reliable. And each is intensified, not moderated, by the availability of AI tools that provide the appearance of analytical rigor without the substance.

The convention-following is amplified when AI tools, trained on the same data as every other AI tool, produce convergent recommendations. The anchoring to the recent past is amplified when AI models, trained on historical data, project that data forward with mathematical precision and no mechanism for flagging the possibility that the future might be discontinuous with the past. The animal spirits are amplified when the confidence of the machine's output reinforces the confidence of the human decision-maker, creating a feedback loop in which both parties — one of which is not, in any meaningful sense, a party — converge on a course of action that feels well-supported but is, in fact, a shared extrapolation from a past that may not apply.

The AI transition itself is an instance of radical Keynesian uncertainty. The capabilities of AI systems are changing monthly. The economic implications are evolving weekly. The social and cultural consequences are materializing daily. No historical precedent fully applies. The mechanization of the nineteenth century, the electrification of the early twentieth, the computerization of the late twentieth — each offers partial analogies, but analogies are not predictions, and the aspects of the present that are genuinely novel — the speed, the breadth, the cognitive rather than physical nature of the displacement — have no frequency from which a probability can be derived.

Every forecaster projecting the number of jobs AI will displace, the timeline for artificial general intelligence, the percentage of work that will be automated by a given date, is performing an exercise that Keynes would have recognized as pseudo-scientific. The precision of the numbers — "forty-seven percent of jobs are at risk," "three hundred million jobs will be affected," "AGI will arrive by 2030" — creates an impression of knowledge where none exists. The numbers are not drawn from a distribution of comparable events. They are extrapolations dressed in the language of probability, and the language lends them an authority they have not earned.

This does not mean that analysis is useless or that all forecasts are equally baseless. Keynesian uncertainty is not nihilism. It is a methodological commitment to intellectual honesty about the limits of knowledge. Some things can be estimated with reasonable confidence — the direction of certain trends, the near-term effects of capabilities that already exist, the structural patterns that previous technological transitions have exhibited. Other things cannot be estimated at all — the speed of future capability development, the social and political responses to displacement, the innovations that have not yet been conceived. The honest analyst distinguishes between the two and labels each accordingly.

The machine does not make this distinction. It cannot, because the distinction requires what Keynes called "judgment" — the irreducible human capacity to weigh evidence, acknowledge uncertainty, and act despite incomplete knowledge. Judgment is not the calculation of probabilities. It is the capacity to act wisely when probabilities cannot be calculated — when the situation is genuinely novel, the stakes are genuinely high, and the honest answer to the question "What is the probability of success?" is "I do not know, but here is why I believe this course of action is worthwhile."

This capacity — the capacity for judgment under radical uncertainty — is the human contribution that AI amplifies but cannot replace. The machine provides the calculations. The human provides the judgment about whether the calculations apply. The machine processes the historical data. The human judges whether the historical data is relevant to a genuinely unprecedented situation. The machine generates the confident recommendation. The human decides whether to trust it.

The relationship between human judgment and machine calculation is, in this framework, complementary in the deepest sense. Neither is sufficient alone. The machine without human judgment produces confident nonsense about genuinely uncertain situations. The human without machine calculation is overwhelmed by the volume and complexity of the information that must be processed before judgment can be exercised. Together, they produce something that neither could produce alone: informed judgment — calculation grounded in data and directed by the irreducible human capacity to navigate the unknown.

Keynes spent his career arguing that the most important economic decisions are made under conditions of radical uncertainty, and that the quality of those decisions depends not on the sophistication of the calculation but on the wisdom of the judgment that directs it. The AI age has not changed this principle. It has made it more consequential, because the decisions are larger, the speed is greater, the stakes are higher, and the machinery of calculation is so impressive that the temptation to mistake it for wisdom has never been stronger.

The machine calculates. It does not understand. And the gap between calculation and understanding is where the human contribution lives — not as a residual, not as a placeholder awaiting eventual automation, but as the irreducible capacity that converts information into wisdom, probability into judgment, and data into decisions worth making.

Chapter 9: In the Long Run We Are All Augmented

The most misunderstood sentence in the history of economics appears in a short book published in 1923. John Maynard Keynes, responding to the quantity theorists' assurance that monetary disturbances would resolve themselves naturally as the economy returned to long-run equilibrium, wrote: "But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again."

The sentence is read as flippancy — a bon mot from a witty Englishman who preferred dinner parties to rigor. This reading is precisely wrong. The sentence is a methodological manifesto. It states that economic theory which addresses only the equilibrium that markets will eventually reach, without addressing the path by which they reach it, is not merely incomplete but irresponsible. The path is where the suffering occurs. The path is where the policy decisions are made. The path is where real human beings, who cannot eat long-run equilibria or shelter their families in asymptotic convergences, must live.

The AI optimists have constructed a long-run argument of considerable power. It runs as follows: every major technological transition in human history has, after a period of disruption, produced net expansion. More jobs. Higher living standards. Greater human capability. The mechanization of agriculture displaced ninety percent of farm workers; those workers' descendants work in industries that did not exist when the displacement began. The electrification of factories displaced craft workers; their descendants work in offices, hospitals, laboratories, and professions that electricity made possible. The computerization of record-keeping displaced filing clerks; their descendants write software, manage databases, and design user experiences that the filing cabinet could not have conceived.

The pattern, documented with scholarly care by David Autor at MIT and cited throughout The Orange Pill's seventeenth chapter, is real. Roughly sixty percent of employment in 2018 consisted of job titles that did not exist in 1940. The labor market absorbs technological displacement not by restoring the old jobs but by generating new categories of work — categories that could not have been anticipated at the time of the displacement, because they depend on capabilities and institutions that the technology itself created.

The optimist's conclusion: AI will follow the same pattern. The transition will be painful, but the long run will be expansionary. New categories of work will emerge — categories we cannot yet name, serving needs we cannot yet articulate — and the economy will absorb the displaced workers into a richer, more capable, more productive system than the one they left.

The argument is almost certainly correct. And it is, in the Keynesian sense, almost entirely useless.

The uselessness lies not in the argument's truth but in its timeline. The transitions the optimists cite as precedent unfolded over decades. The mechanization of agriculture took well over a century to complete, from Eli Whitney's cotton gin in 1794 to the tractor's dominance in the mid-twentieth century. The electrification of factories took forty years, from Edison's Pearl Street Station in 1882 to the widespread adoption of the electric motor in the 1920s. The computerization of office work took thirty years, from the mainframe installations of the 1960s to the PC's ubiquity in the 1990s.

Each transition destroyed old categories of work and created new ones. Each produced a generation that bore the cost of the transition — the displaced workers who were too old or too invested in the old skills to migrate to the new. Each also produced institutions — labor unions, public education, the weekend, the minimum wage, social insurance — that distributed the cost of the transition more broadly and supported the displaced workers during the interval between the destruction of the old and the emergence of the new.

The AI transition is compressing this timeline from decades to months. The capabilities described in The Orange Pill — the twenty-fold productivity multiplier, the collapse of the imagination-to-artifact ratio, the commoditization of code — emerged over the course of a single winter. The institutions that managed previous transitions — educational systems, labor protections, retraining programs, social safety nets — operate on timescales of years to decades. The gap between the speed of displacement and the speed of institutional response is not a minor coordination problem. It is a structural failure that determines whether the transition produces broadly distributed expansion or concentrated gain with widespread suffering.

Keynes's insistence on the short run was not an intellectual preference. It was a moral commitment. The economist who dismisses the short run — who tells the displaced worker that her grandchildren will benefit from the transition — is performing a specific act of moral evasion. The act consists of treating the worker not as a person with finite time and pressing needs but as a data point in a historical trend. The trend is real. The person is also real. And the person cannot live in the trend.

The Luddites documented in The Orange Pill's eighth chapter are the permanent exhibit in this argument. They were right about the short run. Their wages collapsed. Their communities dissolved. Their children grew up in poverty that the prior generation's skill and industry had been designed to prevent. Their grandchildren, eventually, participated in an industrial economy that offered opportunities the pre-industrial world could not have imagined. The long-run expansion was real. And the Luddites, who lived in the short run, did not benefit from it.

The historical record is specific about what distinguished transitions that produced broadly distributed benefit from transitions that produced concentrated gain. The distinguishing variable was not the technology. It was the institutional response. The transitions that produced broad benefit were accompanied by deliberate institutional construction: labor protections, educational expansion, progressive taxation, social insurance. The transitions that produced concentrated gain were accompanied by institutional absence — the laissez-faire conviction that the market would manage the transition on its own.

The market did not manage the transition on its own. It never has. Markets produce transitions. They do not manage them. Managing transitions — distributing the costs, protecting the vulnerable, building the bridges between the old economy and the new — is the work of institutions, and institutions must be deliberately designed, funded, and maintained.

The AI transition demands institutional construction at a speed and scale for which there is no close precedent. The displacement is occurring in months. The institutional response is occurring, where it is occurring at all, in years. Educational reform is debated in committee while students are already using tools that render the committee's curriculum obsolete. Labor policy is drafted in regulatory bodies while workers are already experiencing the displacement the policy is meant to address. Retraining programs are designed for a job market that will have changed again by the time the first cohort completes the program.

The Keynesian prescription is direct: governments must act counter-cyclically. When the private sector contracts — when firms convert productivity gains into headcount reductions, when displaced workers reduce spending, when the aggregate demand that sustains the economy weakens — government must expand. Not as charity. As macroeconomic management. The spending that maintains demand during the transition is not a cost to be minimized. It is an investment in the economic stability that makes the long-run expansion possible.

The specific instruments will vary by country, by sector, by the character of the displacement. Income support for workers during the transition interval — the period between the dissolution of the old skill set and the acquisition of the new. Investment in educational institutions that cultivate the judgment and integrative thinking the new economy rewards. Public procurement that creates demand for judgment-based work, demonstrating to the private sector that such work has economic value. Regulatory frameworks that prevent the race to the bottom — the competitive pressure on firms to convert every productivity gain into headcount reduction because their competitors are doing the same.

Robert Skidelsky, drawing on a career spent interpreting and extending Keynesian thought, framed the challenge in terms that Keynes himself might have used: "Can an economic system in which the means of production are largely privately owned ensure that the gains of productivity are shared sufficiently widely to enable the future that Marx and Keynes both wanted?" The question is not whether private ownership is legitimate. It is whether the institutional framework within which private ownership operates is adequate to the distributional challenge that AI poses.

The long run will arrive. The expansion will probably come. The new categories of work will probably emerge. The grandchildren will probably inherit a world of greater capability and greater possibility than the one their grandparents knew.

Probably.

But the generation that must navigate the transition — the workers, the parents, the communities that exist now, in the short run, in the tempestuous season — cannot eat probably. They need institutions. They need support. They need the deliberate, intelligent, sustained institutional action that converts the long-run possibility into a short-run reality.

Keynes understood that the ocean will eventually be flat. He also understood that the people on the ship need to survive the storm. And the storm is here.

---

Chapter 10: The Social Philosophy Toward Which This Tends

The final chapter of the General Theory carries a title that reveals, perhaps more than any other passage in Keynes's work, the scope of his ambition. It is called "Concluding Notes on the Social Philosophy Towards Which the General Theory Might Lead." Not the economic policy. Not the fiscal recommendations. The social philosophy — the vision of the good society that the entire preceding analysis, with its multipliers and liquidity preferences and marginal efficiencies, was designed to serve.

Keynes was not, in the end, interested in economics for its own sake. He was interested in economics as an instrument of human flourishing. The General Theory was not written to advance the discipline of economics. It was written to save a civilization that was failing, to demonstrate that the failure was not inevitable, and to prescribe the institutional response that would convert failure into something better. The technical apparatus — the demand curves, the investment functions, the consumption equations — was the scaffolding. The building was a vision of a society in which economic management was intelligent enough to provide the material foundation for a genuinely good life.

"I see us free, therefore," Keynes wrote in his 1930 essay, "to return to some of the most sure and certain principles of religion and traditional virtue — that avarice is a vice, that the exaction of usury is a misdemeanour, and that the love of money is detestable." He was not being sentimental. He was being diagnostic. The love of money, the confusion of wealth with well-being, the identification of the economic problem with the human problem — these were, in Keynes's view, pathologies of scarcity. When scarcity was solved, the pathologies would lose their justification. The challenge would be building a society wise enough to recognize that the justification had expired.

The pathologies did not expire. They intensified. The love of money did not diminish with abundance. It metastasized. The confusion of wealth with well-being did not resolve with rising GDP. It deepened. The identification of the economic problem with the human problem — the conviction that more production, more growth, more output is the answer to every question — survived all evidence that it was wrong and became, if anything, more entrenched as the evidence accumulated.

The AI transition brings this paradox to its terminal expression. The tools now exist to solve, in principle, every production problem that remains. Not today, not perfectly, but the trajectory is unmistakable: the multiplication of productive capacity that Keynes predicted has reached a stage where the constraint is no longer what can be produced but what should be produced, for whom, and to what end. The economic problem, in the narrow Keynesian sense, is on the threshold of solution. And the society that stands on that threshold shows no sign of being prepared for what lies beyond it.

The social philosophy that Keynes called for — a philosophy adequate to the conditions of abundance — has never been constructed. Neither the political left, which has focused on the distribution of material resources within the existing framework of production, nor the political right, which has focused on maximizing production within the existing framework of distribution, has offered a coherent vision of what a society organized around something other than the economic problem might look like. The question that Keynes posed — "how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well" — remains unanswered, and the AI transition has made the absence of an answer no longer a philosophical curiosity but a practical emergency.

The Orange Pill arrives at a parallel conclusion through a different path. Segal's question — "Are you worth amplifying?" — is the individual version of Keynes's social question. The amplifier does not care what signal you feed it. It amplifies carelessness and care with equal fidelity. The individual who approaches AI with genuine judgment, genuine taste, genuine care for what deserves to exist in the world, will produce amplified judgment, amplified taste, amplified care. The individual who approaches AI with confusion, with the compulsive productivity that substitutes activity for purpose, will produce amplified confusion.

Keynesian analysis extends this insight from the individual to the institutional. Institutions, like individuals, feed signals into the amplifier. An educational institution that teaches students to execute rather than to judge will produce, at AI-amplified scale, workers who execute brilliantly and judge poorly. A corporate institution that rewards quarterly performance rather than long-term value creation will produce, at AI-amplified scale, speculative manias rather than durable enterprises. A governance institution that regulates the supply of AI tools without addressing the demand for human development will produce, at AI-amplified scale, an economy saturated with capability and starved of the judgment to use it.

The social philosophy that the AI transition requires is not a novel invention. Its elements have been articulated across centuries of moral and political thought. What is novel is the urgency. The tools that make the philosophy necessary are here, and the institutions that would implement it are not.

The first element is the recognition that the economic problem and the human problem are not the same. Solving the economic problem — producing enough to meet material needs — is a necessary condition for human flourishing but not a sufficient one. A society that has solved the economic problem and continues to organize itself around production is a society that has confused the medicine with the health. The medicine was necessary. Continuing to take it after the illness has passed is a different kind of sickness.

The second element is the institutional capacity for what Keynes, had he lived to see it, might have called demand management for meaning. Just as Keynesian fiscal policy manages aggregate demand for goods and services — expanding it when the market contracts, contracting it when the market overheats — the institutions of the AI age must manage the aggregate supply of meaningful activity. This means educational institutions that cultivate judgment, curiosity, and the capacity for genuine leisure. It means labor institutions that protect workers during the transition and support the identity reconstruction that the transition demands. It means cultural institutions that value contribution over output and quality over speed. It means governance institutions that measure not just GDP but the indicators that GDP obscures: well-being, social connection, the capacity for sustained attention, the quality of the questions a society asks.

The third element is the aesthetic dimension that Keynes, as a member of the Bloomsbury Group and a founder of the Arts Council, understood viscerally. Good institutions are not merely functional. They express a vision of the good. The eight-hour day was not merely a labor regulation. It was a cultural statement: human beings are more than their productive output, and a society that does not protect time for rest, for family, for the activities that have no economic justification but make life worth living, is a society that has lost its way. The institutions of the AI age must carry the same expressive weight. They must not merely prevent exploitation and distribute resources. They must articulate, in their design and operation, a vision of what human life is for when the economic problem has been solved.

Keynes was an optimist who spent his career arguing with the consequences of misplaced optimism. He believed that abundance was coming and that abundance would be good. He also believed, with increasing conviction as his career progressed, that abundance unmanaged by intelligent institutions would produce not the good life but a new and more insidious form of suffering — the suffering of people who have everything they need and no idea what to do with it.

The AI transition has delivered the abundance. The tools are extraordinary. The productive capacity they provide is, by any historical measure, miraculous. And the society that possesses these tools faces exactly the challenge Keynes identified: not the challenge of production, which is solved, but the challenge of purpose, which is not.

The framework that Keynes spent his career constructing — the insistence that markets do not automatically produce good outcomes, that institutions must be deliberately designed to convert productive capacity into broadly distributed flourishing, that the short run matters as much as the long run, that animal spirits must be channeled rather than suppressed, that the paradox of individually rational behavior producing collectively irrational outcomes is a permanent feature of economic life requiring permanent institutional management — this framework is not a historical artifact. It is the operating system that the AI economy requires and that the AI economy, in its current configuration, lacks.

The social philosophy toward which the AI transition tends is neither technophilic nor technophobic. It is institutional. It holds that the quality of the tools is less important than the quality of the institutions through which the tools operate. It holds that abundance is a necessary condition for flourishing but that converting abundance into flourishing is a design problem, requiring the same intelligence, care, and aesthetic judgment that the best tools embody and that the worst institutions betray.

Keynes closed the General Theory with an observation that has lost none of its force: "The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist."

The defunct economist whose ideas currently govern the AI transition is not Keynes. It is the ghost of classical equilibrium — the conviction that markets will adjust, that supply will create its own demand, that the long run will take care of itself. Keynes demolished this conviction ninety years ago. The AI economy is learning, at considerable cost, why the demolition was necessary.

The alternative is not to resist the tools or to slow the technology. The alternative is to build the institutions — educational, corporate, governmental, cultural — that convert the extraordinary productive capacity of AI into the broadly distributed human flourishing that the capacity makes possible and that the market, left to itself, will not produce.

The economic problem is solved. The institutional problem has barely been stated. And the distance between the two is where the next chapter of human civilization will be written — not by the machines, which are indifferent to the outcome, but by the people and the institutions that decide what the machines are for.

---

Epilogue

The board conversation keeps returning. Not the specific one I describe in The Orange Pill — though that one does return, quarterly, with the same arithmetic and the same temptation — but the structure of it. The shape. On one side of the table sits the long-run argument: keep the team, invest in judgment, build for the ecosystem that sustains everyone downstream. On the other side sits the short-run arithmetic: one engineer does the work of twenty, the margin is right there, take it.

What Keynes gave me — what this entire journey through his framework installed in my thinking — is a name for why the arithmetic feels so seductive and why following it would be a disaster. He called it the fallacy of composition. What is rational for one firm is catastrophic for all firms. What is efficient for one quarter is ruinous for the system that produces the quarters. The nineteen engineers I keep paying are not a cost I am sentimentally refusing to cut. They are aggregate demand. They are the economy that buys what every other company builds. They are, multiplied across every firm making the same decision, the difference between expansion and contraction.

I did not understand this viscerally before writing this book. I understood the intuition — keep the team, build the pool, tend the dam. But the intuition had no structural foundation. Keynes gave it one. The paradox of thrift gave it one. The liquidity trap of capability gave it one. The distinction between enterprise and speculation — between building for the long term and riding the quarter — gave it the clearest one of all.

The concept that rewired my thinking most was the one I expected least: sticky identities. I had spent months telling my engineers to "ascend" — to move from execution to judgment, from coding to creative direction, from building to deciding what deserves to be built. Good advice, I still believe. But Keynes's framework on wage stickiness, extended to identity, showed me what I had been asking of them without quite seeing it. I was asking them to dissolve who they were. Not their skills — skills can be updated. Their selves. The architecture of competence and belonging they had built across years and decades. And I was asking them to do it at the speed the market demanded, which is to say, immediately, without institutional support for the interval between the old self and the new one.

That interval is where the suffering lives. Keynes knew this about wages. It is even more true about identity. And the institutions that would support people through it — the retraining, the income bridges, the communities of practice for people in transition — barely exist. We are asking millions of people to emigrate to a new professional country overnight, and we have built no embassies, no language schools, no welcoming structures of any kind.

The number I carry from this journey is not an economic statistic. It is the ninety-six years between Keynes's essay and now. He predicted abundance by 2030. Abundance arrived. He predicted that abundance would free us to live wisely and agreeably and well. That part remains, as of this writing, a prediction — the most generous and the most unfulfilled prediction in the history of economic thought. The machines are here. The abundance is here. The freedom to live well is, technically, within reach for hundreds of millions of people.

And we are working harder than ever, on more things, at a higher pitch, with less certainty about what any of it is for.

The permanent problem — Keynes's phrase for the question of how to live when the economic problem is solved — has not become easier with AI. It has become inescapable. The tool that amplifies everything amplifies the question too. What are you building? Why? For whom? Is it worth the hours it consumes — not just your hours, but the hours of everyone downstream?

I do not have Keynes's confidence that we will answer well. But I have his conviction that the answer is institutional, not individual. That no amount of personal worthiness compensates for structures that concentrate the gains and distribute the costs. That the market will not manage this transition on its own, because the market has never managed a transition on its own, and the belief that it will is the residue of a defunct economics that Keynes demolished ninety years ago.

Build the institutions. Support the people in the interval. Measure what matters, not merely what grows. And remember, when the long-run argument sounds most persuasive, that the people living through the transition cannot wait for the long run.

In the long run we are all augmented. In the short run, we are all choosing.

-- Edo Segal

In 1930, John Maynard Keynes predicted that his grandchildren would live in material abundance. He was right. He also predicted they would work fifteen-hour weeks and devote themselves to the art of living well. He was catastrophically wrong. The abundance arrived on schedule. The freedom never did -- and artificial intelligence is about to make the gap between what we can produce and how well we actually live wider than at any point in human history.

This book applies Keynes's most powerful frameworks -- the paradox of thrift, the liquidity trap, the fallacy of composition, the distinction between calculable risk and radical uncertainty -- to the AI revolution unfolding now. What emerges is a structural explanation for why trillion-dollar productivity gains do not automatically become broadly shared prosperity, why individually rational decisions aggregate into collectively irrational outcomes, and why markets have never managed a major technological transition on their own.

The tools are extraordinary. The institutions that would convert their power into genuine human flourishing barely exist. Keynes spent his career building the intellectual foundation for exactly this moment -- when capability outstrips wisdom, and the question is no longer what we can produce but what kind of society we choose to build with the surplus.


Wiki Companion

A reading-companion catalog of the 21 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that John Maynard Keynes — On AI uses as stepping stones for thinking through the AI revolution.
