By Edo Segal
The shelf I kept reaching for had no technology books on it.
Every framework I built in *The Orange Pill* — the river of intelligence, the beaver's dam, the ascending friction — was an attempt to make sense of a transformation that moves faster than any single discipline can track. I wrote about philosophy, about psychology, about the history of computing. But there was a gap I could feel without being able to name it. The gap was economic — not in the narrow sense of GDP forecasts and labor statistics, but in the deeper sense that Robert Heilbroner spent his entire career insisting on: the sense in which every economic arrangement is also a moral arrangement, and every technological transition is also a referendum on what a society believes human life is for.
Heilbroner did something that almost no economist of his stature attempted. He treated economic ideas as human dramas. Adam Smith was not a theorem. Marx was not a system of equations. Keynes was not a macroeconomic model. They were people — flawed, brilliant, historically situated people — whose theories could not be separated from the crises that produced them and the moral convictions that animated them. Heilbroner's method was biographical because he understood that ideas do not arrive from nowhere. They arrive from lives lived inside specific pressures, and if you strip away the life, you lose the idea's meaning.
That method is what the AI discourse is missing. We talk about productivity curves and adoption rates and trillion-dollar repricings. We do not talk enough about the fact that every one of those numbers represents a choice about who captures the gains and who bears the costs — and that this choice is not technical. It is moral, institutional, political. It is the oldest question in economics, and Heilbroner is the thinker who refused to let the discipline forget it.
What haunts me most is his observation about the gap. Every major technological transition produces a period between the technology's arrival and the institutions' response. The eight-hour day came decades after children worked the mills. Social insurance came after the Depression had already done its damage. The institutional imagination always lags. The question is whether we can close the gap faster this time — and Heilbroner's honest answer, drawn from five centuries of evidence, is that we never have.
That honesty is what I needed. Not reassurance. Not alarm. The clear-eyed recognition that the dams must be built, that they have never been built fast enough, and that the building is still the only thing worth doing.
— Edo Segal ^ Opus 4.6
Robert Heilbroner (1919–2005) was an American economist, historian of economic thought, and public intellectual whose work bridged the gap between technical economics and moral philosophy. Born in New York City, he studied at Harvard under Joseph Schumpeter and later earned his doctorate at the New School for Social Research, where he taught for nearly five decades. His most celebrated work, *The Worldly Philosophers: The Lives, Times, and Ideas of the Great Economic Thinkers* (1953), became one of the best-selling economics books of the twentieth century, translated into more than thirty languages and read by millions of non-economists who encountered Smith, Marx, Keynes, and Schumpeter as human characters rather than abstract theorists. His subsequent works — including *The Making of Economic Society*, *An Inquiry into the Human Prospect*, *The Nature and Logic of Capitalism*, *Marxism: For and Against*, *Behind the Veil of Economics*, *21st Century Capitalism*, and *Visions of the Future* — examined the moral and institutional foundations of capitalist civilization with a rigor that the mathematical mainstream of the profession largely abandoned. Heilbroner insisted throughout his career that economics is inescapably a moral discipline — that questions of production cannot be separated from questions of distribution, power, and human dignity — and that the institutional arrangements societies construct around their technologies determine whether those technologies serve human flourishing or undermine it.
In 1953, a young economist published a book that should not have worked. It had no equations. It offered no policy prescriptions. It did not predict interest rates or model labor markets or derive supply curves from first principles. What Robert Heilbroner did in *The Worldly Philosophers* was something the economics profession had largely abandoned and would spend the next half-century pretending was beneath it: he told the story of economic ideas as a human drama, populated by characters whose theories could not be separated from the lives, temperaments, and historical catastrophes that produced them.
Adam Smith was not a theorem. He was a peculiar, absentminded Scottish professor who once fell into a tanning pit while lecturing a friend about the division of labor. Karl Marx was not a system of equations. He was a man who lived in grinding poverty in Soho, whose children died of malnutrition, and whose rage at the industrial order was inseparable from his experience of its cruelties. John Maynard Keynes was not a macroeconomic model. He was a Bloomsbury aesthete who believed that the purpose of economic growth was to make possible a civilization in which people could devote themselves to art, friendship, and contemplation — and who spent his career trying to rescue capitalism from the capitalists so that this civilization might arrive.
Heilbroner's method — treating economic theories as biographies of ideas, embedded in history, shaped by circumstance, animated by moral conviction — was dismissed by the mathematical economists who dominated the postwar academy. It was also read by more people than any economics book of the twentieth century. The reason was simple: Heilbroner understood something the formalists missed. Economic ideas are not discovered the way chemical elements are discovered — as preexisting facts waiting to be observed. They are invented in response to specific historical crises, by specific human beings, carrying specific fears and hopes about the world they inhabit. To understand the idea, Heilbroner insisted, one must understand the crisis that called it into being.
This method — the biographical, historical, morally serious study of economic thought — is the instrument through which the present book examines the artificial intelligence transition. The crisis is real. A technology has arrived that promises to transform the relationship between human labor and economic production more profoundly than any technology since the steam engine. The economists who grappled with previous transformations of this magnitude — Smith with the commercial revolution, Marx with industrialization, Keynes with the instability of mature capitalism, Schumpeter with the dynamics of innovation — each produced a partial vision that illuminates a different dimension of what is now unfolding. None of them saw the whole picture. Taken together, applied with historical rigor and contemporary evidence, their partial visions produce a diagnostic framework more powerful than any single theory could provide.
The first partial vision belongs to Smith. His great insight, illustrated by the famous pin factory of *The Wealth of Nations*, was that the division of labor multiplies productivity to an almost miraculous degree. Ten workers, each performing one step of the pin-making process, could produce forty-eight thousand pins in a day. A single worker attempting all eighteen steps alone could scarcely produce one. The productivity gain was not marginal. It was transformative — and it depended entirely on the fragmentation of work into specialized tasks.
But Smith saw the cost even as he celebrated the gain. In a passage that Heilbroner quoted with particular emphasis, Smith acknowledged that the worker who spends his entire life performing one or two simple operations "has no occasion to exert his understanding, or to exercise his invention." The division of labor that creates wealth also creates a specific form of impoverishment — the narrowing of the worker's experience, the atrophy of faculties that are never called upon, the reduction of a human being to a function. Smith's framework poses a question that the AI transition makes urgently contemporary: what happens when the division of labor changes character — when the machine takes over the specialized functions and the human is left with... what, exactly?
The second partial vision belongs to Marx. Where Smith saw the division of labor as a source of wealth, Marx saw machinery as a source of power — specifically, the power of those who own the machinery over those who must sell their labor to survive. The machine does not merely displace the worker. It transforms the social relations of production. The handloom weaver was an independent craftsman, owning his tools, controlling his pace, selling his product. The power loom operator was a wage laborer, owning nothing, controlled by the rhythm of the machine, selling not his product but his time. The technology did not simply change what was produced. It changed who had power over whom.
Marx's framework, as Heilbroner analyzed it with characteristic balance in *Marxism: For and Against*, contains both genuine analytical power and significant predictive failure. The analytical power lies in his insistence that technology is never socially neutral — that every machine embodies a set of social relations, and to ask "What does this machine do?" without asking "Who owns it, who controls it, and who bears the costs of its deployment?" is to miss the most important part of the story. The predictive failure lies in Marx's conviction that the concentration of power in the hands of capital owners would inevitably produce revolutionary consciousness among workers. It did not. The institutional adaptations of the twentieth century — labor unions, social insurance, progressive taxation, public education — absorbed enough of the shock to prevent the revolution Marx considered inevitable.
The question Marx's framework poses for the AI transition is whether the institutional adaptations that absorbed previous technological shocks are adequate to absorb this one. The AI infrastructure is owned by a small number of companies — Anthropic, OpenAI, Google, Meta — whose market power exceeds that of any industrial corporation in history. The forty-seven million developers worldwide who use AI tools are not, in any traditional sense, workers in these companies' factories. But they are dependent on these companies' infrastructure in a way that would have been immediately legible to Marx: the means of production have been concentrated, and the producers depend on access to means they do not own.
The third partial vision belongs to Keynes, and it is perhaps the most poignant. In 1930, as the Great Depression was gathering force, Keynes published an essay called "Economic Possibilities for Our Grandchildren" in which he predicted that within a century, technological progress would produce enough wealth to satisfy all material needs, and that his grandchildren's generation would face a novel problem — not scarcity but abundance. The working week would shrink to fifteen hours. Humanity would at last be free to pursue the good life: art, philosophy, friendship, contemplation.
Keynes was right about the wealth. Global GDP per capita has increased roughly sixfold since 1930. The productive capacity exists to provide a decent standard of living for every human being on the planet. Keynes was wrong — spectacularly, consequentially wrong — about what humanity would do with the surplus. The fifteen-hour workweek never arrived. Working hours in advanced economies have declined only modestly since the 1930s, and for many professionals they have increased. The surplus was channeled not into leisure but into more production, more consumption, more work. The culture, it turned out, had no framework for leisure as a dignified mode of existence. Work was not merely an economic necessity. It was an identity, a social position, a source of meaning, and the culture could not imagine giving it up even when the economic justification for it had been substantially eroded.
Heilbroner, in *Visions of the Future*, provided the theoretical framework for understanding Keynes's failure. The problem was not economic but cultural — a matter of what Heilbroner called "vision," the image a society holds of its own future. A society that envisions the future as progress will invest its surplus in growth. A society that envisions the future as liberation might invest its surplus in leisure. But a society that has internalized work as the measure of human worth will channel every productivity gain back into more work, because it literally cannot imagine what else to do with the time.
The AI transition is recapitulating Keynes's failure with painful fidelity. Researchers at UC Berkeley embedded themselves in a technology company for eight months and found that AI tools did not reduce working hours. They intensified them. Workers took on more tasks, expanded into adjacent domains, filled every moment of freed time with additional production. The Jevons Paradox — the nineteenth-century observation that improvements in the efficiency of coal use increased rather than decreased total coal consumption — applies to cognitive labor with the same iron regularity. Efficiency in the use of a resource increases the total consumption of that resource, because the demand for the resource was never fixed. It was constrained by cost. Reduce the cost, and demand expands to absorb every gain.
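The mechanism described above can be sketched numerically. The following is a minimal illustration, assuming a constant-elasticity demand curve with elasticity greater than one — an assumption chosen purely for illustration, not a parameter drawn from Jevons or from the Berkeley study:

```python
# Illustrative sketch of the Jevons Paradox: when demand for a resource's
# output is sufficiently elastic, improving efficiency *increases* total
# resource consumption. The constant-elasticity demand curve here is a
# hypothetical form, used only to make the mechanism concrete.

def resource_consumed(efficiency, base_demand=100.0, elasticity=1.5):
    """Total resource used when each unit of output needs 1/efficiency
    units of the resource and demand responds to the effective cost."""
    effective_cost = 1.0 / efficiency                 # cost per unit of output
    output = base_demand * effective_cost ** (-elasticity)
    return output / efficiency                        # resource actually consumed

before = resource_consumed(efficiency=1.0)            # baseline consumption
after = resource_consumed(efficiency=2.0)             # engine twice as efficient
assert after > before                                 # total consumption rises
```

With an elasticity below one, the same code shows consumption falling — which is why the paradox holds only where demand for the resource's services was constrained by cost rather than already saturated, exactly the condition the paragraph identifies for cognitive labor.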
The fourth partial vision belongs to Schumpeter, and it is the one that speaks most directly to the experience of the people living through the AI transition at this moment. Schumpeter's great concept — creative destruction, the "perennial gale" that sweeps through capitalist economies as new innovations destroy old industries while creating new ones — was advanced with a confidence that bordered on cheerfulness. The destruction was necessary. The creation was inevitable. The process was, in the aggregate, beneficial. The horse-and-buggy industry was destroyed by the automobile, but the automobile industry created more jobs, more wealth, more human capability than the horse-and-buggy industry could have imagined.
Heilbroner treated Schumpeter with the same nuanced sympathy he brought to all his worldly philosophers, recognizing the analytical power while noting the blind spot. The blind spot was the gap. Between the destruction of the old and the creation of the new, there is a period — sometimes a generation, sometimes longer — during which the destroyed have not yet found their place in the new order. Schumpeter's framework accounts for this gap in the aggregate: the economy adjusts. It does not account for it at the level of the individual human being who must live through the adjustment, who must find new work, retrain, relocate, rebuild an identity around different skills, often in middle age, often without institutional support.
The AI transition has made this gap newly visible and newly urgent. Previous waves of creative destruction targeted manual skills, manufacturing capacity, routine cognitive tasks — the lower floors of the economic edifice. AI targets the higher floors. It targets the judgment, the synthesis, the creative capacity that knowledge workers spent decades developing and that were supposed to be destruction-proof. The software engineer who invested fifteen years in mastering backend architecture, the lawyer who spent a decade building expertise in contract analysis, the radiologist whose diagnostic skill was built through thousands of hours of patient study — these are not buggy-whip manufacturers. These are the most highly trained, most expensively educated members of the workforce, and the gap between the destruction of their current value proposition and the creation of whatever replaces it is the space in which the human cost of the AI transition will be paid.
Each of these four thinkers saw a piece of the truth. Smith saw that technology multiplies productivity at the cost of human breadth. Marx saw that technology concentrates power in the hands of those who own it. Keynes saw that productivity gains need not produce leisure if the culture cannot imagine what leisure is for. Schumpeter saw that new industries rise from the wreckage of old ones — but did not adequately reckon with the generation that inhabits the wreckage.
None of them saw the whole. The synthesis requires holding all four partial visions simultaneously, which is precisely what Heilbroner's biographical method makes possible. The AI transition is a productivity revolution (Smith). It is a transformation of social relations (Marx). It is a test of cultural imagination (Keynes). It is a creative destruction of expertise (Schumpeter). And it is occurring in a period that Heilbroner, in his final decades, identified with particular precision: a period in which "the forces of technical change have been unleashed, but when the agencies for the control or guidance of technology are still rudimentary."
That sentence was written in 1967, about computers in general. It reads as though it were written yesterday, about artificial intelligence in particular. The forces have been unleashed. The agencies of control are rudimentary. And the worldly philosophers — the thinkers who spent their lives trying to understand how technology reshapes human welfare — are the guides whose partial visions, combined, might illuminate the terrain.
The chapters that follow take each vision in turn, apply it to the evidence, and ask what it reveals about a technological transition that none of these thinkers lived to see but all of them, in some sense, anticipated. The drama is not over. The characters are still arriving on stage. And the question that animated Heilbroner's entire career — whether the material conditions of human life can be organized in a way that serves human flourishing rather than undermining it — has never been more urgently in need of an answer.
---
The pin factory appears on the very first page of *The Wealth of Nations*, and it appears there because Adam Smith understood something that most economists since have forgotten: that the most profound economic truths are best demonstrated through the most concrete examples. The example is a small workshop in which ten workers, each performing one specialized operation — drawing the wire, straightening it, cutting it, pointing it, grinding the head — produce approximately forty-eight thousand pins per day. A single worker, attempting the entire process alone, could produce perhaps one. The productivity difference is not ten percent or fifty percent or even fivefold. It is a factor of nearly five thousand.
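The factor is a per-worker comparison, which is easy to misread; the arithmetic can be checked directly using only the figures quoted above:

```python
# Smith's pin-factory arithmetic, using the figures from The Wealth of Nations.
pins_per_day_team = 48_000   # ten specialized workers, working together
workers = 10
pins_each_specialized = pins_per_day_team / workers   # 4,800 pins per worker
pins_each_alone = 1          # a lone worker could produce "perhaps one"
multiplier = pins_each_specialized / pins_each_alone
print(multiplier)            # 4800.0 — "a factor of nearly five thousand"
```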
This was Smith's great discovery: that the division of labor, the fragmentation of a complex task into simple, repeatable components, multiplies productive capacity by orders of magnitude. The discovery was simultaneously economic and anthropological — it explained not just how wealth is produced but how human societies organize themselves around the production of wealth. The pin factory was not merely an efficient arrangement. It was a social order in miniature, a hierarchy of specialized functions coordinated by the discipline of the market. And the social order it represented — commercial capitalism, the system of exchange and specialization that was, in Smith's time, displacing the older orders of feudal obligation and mercantilist regulation — was, Smith argued, the most productive arrangement human beings had yet devised.
Heilbroner, in his treatment of Smith in *The Worldly Philosophers*, emphasized both the brilliance of the insight and the ambivalence that accompanied it. Smith was not naive about the costs of specialization. The passage in Book V of *The Wealth of Nations* — in which Smith describes the worker who performs the same few simple operations all day as becoming "as stupid and ignorant as it is possible for a human creature to become" — is one of the most devastating indictments of the industrial order ever written, and it was written by the man who explained why the industrial order works. Smith saw the productivity miracle and the human degradation as two aspects of a single process. The pin factory produces forty-eight thousand pins. It also produces workers who have been reduced, by the very efficiency that makes them productive, to appendages of a process they can neither comprehend nor control.
This tension — between the productivity of specialization and the impoverishment of the specialist — has defined economic life for two and a half centuries. Every subsequent thinker grappled with it. Marx radicalized it into a theory of alienation. Keynes hoped that productivity would eventually grow large enough to make specialization unnecessary. Schumpeter argued that creative destruction would continuously generate new forms of specialization, keeping the system dynamic. But none of them dissolved the tension. The choice seemed structural: you could have the productivity of the factory or the breadth of the craftsman, but not both.
Artificial intelligence has altered this calculus in a way that Smith's framework anticipated but could not quite imagine. The evidence is specific and recent. In February 2026, as documented in *The Orange Pill*, a team of engineers in Trivandrum, India, underwent a transformation that would have fascinated Smith. These were experienced technical workers — backend specialists, frontend developers, database architects — each occupying a narrow lane in the modern software pin factory, each performing their specialized function within a larger production process they individually comprehended only in part.
Within days of adopting AI coding assistants, these specialists began reaching across the boundaries of their specialization. A backend engineer built user-facing interfaces. A designer implemented complete features. The boundaries that had seemed structural — the inevitable consequence of the complexity of modern software, the necessary division of a task too large for any single practitioner — turned out to be artifacts not of the work itself but of the cost of translation between domains. When the cost of moving between specializations dropped to the cost of a natural-language conversation, people moved.
What emerged was something genuinely novel in the history of economic organization: the generalist operating at the specialist's productivity level. The AI-assisted engineer could produce complete features — conceived, designed, implemented, tested — at a rate that previously required a team of specialists working in sequence. The division of labor had not been eliminated. It had been internalized. The machine performed the specialized operations; the human performed the integration — the judgment about what to build, the architectural sense of how components fit together, the taste that separated a feature users would love from one they would merely tolerate.
Smith would have recognized the significance immediately, because the arrangement resolves — or appears to resolve — the tension that haunted his own analysis. The productivity of the factory and the breadth of the craftsman, united in a single practitioner. The worker is no longer "stupid and ignorant," reduced to one or two simple operations. The worker is a generalist whose generalism is made economically viable by a machine that handles the specialized execution.
But Smith, who was above all an honest observer of economic life, would also have asked the harder question: Is this arrangement stable? Or does it produce its own form of degradation, subtler than the pin factory's but no less real?
The concern is not speculative. Heilbroner, in *The Making of Economic Society*, traced a pattern that recurs across the history of capitalist production: every arrangement that appears to liberate the worker from the constraints of the previous arrangement eventually generates its own constraints. The factory liberated the craftsman from the limitations of cottage industry — the inability to produce at scale, the dependence on local markets, the vulnerability to fluctuations in demand. But the factory then imposed its own constraints: the discipline of the clock, the fragmentation of the task, the subordination of the worker's rhythm to the machine's. The office liberated the factory worker from physical toil, but it imposed the constraints of bureaucratic hierarchy, specialization of function, and the particular numbness of work that engages the mind without challenging it. Each liberation, examined over sufficient time, reveals itself as a migration — from one set of constraints to another.
The AI artisan, viewed through this historical lens, is migrating from the constraints of specialization to the constraints of something else. The question is what that something else turns out to be. Three possibilities suggest themselves, and each has evidence in its favor.
The first possibility is that the constraint becomes judgment itself. When the machine handles execution, the human's contribution is reduced to deciding what should be executed. This sounds liberating — who would not prefer to be the director rather than the laborer? — but direction is a skill that most workers have never been required to develop, because the demands of specialized execution consumed their bandwidth. The senior engineer described in *The Orange Pill*, who spent his first days with AI tools oscillating between excitement and terror, was experiencing the vertigo of a worker whose specialized competence had been rendered abundant while his judgment — the faculty he had never been systematically trained to develop — was suddenly the only thing that mattered.
Smith understood this dynamic. In *The Theory of Moral Sentiments*, his earlier and in some ways more profound work, he argued that human capacities develop through exercise and atrophy through disuse. The worker who exercises judgment daily develops judgment. The worker whose judgment is never called upon — because the specialized task requires only execution — loses the capacity for it. For two and a half centuries, the pin factory and its descendants trained workers in execution and left judgment to the managers. The AI transition inverts this arrangement overnight and asks the execution-trained worker to become a judgment-trained director without providing the intermediate experience that would make the transition organic.
The second possibility is that the constraint becomes attention. The AI artisan can do everything, but "everything" is not a strategy. The craftsman in Smith's era was constrained by the narrowness of his task. The AI artisan is constrained by the breadth of possibility. When the cost of execution approaches zero, the scarce resource is no longer the capacity to build but the capacity to decide what is worth building — and this decision requires a quality of sustained, discriminating attention that the tools themselves tend to erode. The same technology that enables the artisan to work across domains also generates an infinite queue of things to work on, and the evidence from the Berkeley study suggests that most practitioners respond to this infinity not by choosing wisely but by doing more.
The third possibility — the most interesting, and the one most consistent with Heilbroner's own analysis of how capitalist economies evolve — is that the AI artisan is a transitional figure. Just as the independent craftsman was a transitional figure between feudal production and factory production, the AI artisan may be a transitional figure between the era of human execution and the era of... something that does not yet have a name. The arrangement in which a human directs an AI tool may itself be disrupted, as the tools grow more capable of directing themselves, as AI agents begin to perform not just execution but judgment, not just implementation but design. The artisan's liberation from the pin factory may prove temporary — not because the artisan is sent back to the factory but because the factory learns to operate without the artisan.
Heilbroner's method demands honesty about what cannot yet be known. He was scathing about economists who presented predictions as certainties, who mistook the extrapolation of current trends for the revelation of future states. The AI artisan exists now, and the arrangement is producing real value for real practitioners. Whether it endures depends on variables that are not yet determined — the pace of AI capability development, the institutional choices that societies make about education and labor, the cultural decisions about what kinds of work are considered valuable and what kinds of workers are considered dignified.
Smith's pin factory demonstrated that specialization produces wealth at the cost of human breadth. The AI artisan suggests that the cost can be reduced — that technology can return breadth to the worker while preserving the productivity of specialization. But the history of capitalist production, as Heilbroner narrated it across a dozen books and a half-century of scholarship, counsels caution about arrangements that appear to resolve structural tensions. The tensions tend not to be resolved. They tend to migrate. And the migration is visible only in retrospect, to the historian who examines the arrangement after it has been superseded by whatever comes next.
The pin factory is not a metaphor. It is a diagnostic instrument. Applied to the AI artisan, it reveals both the genuine novelty of the arrangement — the first time in the history of capitalist production that a single practitioner can operate across the full breadth of a complex production process at industrial productivity levels — and the genuine uncertainty about whether this novelty represents a permanent expansion of human capability or a transitional phase in a process whose destination is not yet visible.
Smith, who fell into a tanning pit while explaining the very phenomenon that would reshape the world, would have appreciated the irony: the technology that liberates the worker from the narrowness of specialization may eventually liberate production from the worker altogether. Whether that liberation serves human flourishing depends entirely on what happens in the space between now and then — the institutional choices, the cultural decisions, the political struggles that determine who captures the gains and who bears the costs of the transition.
Those questions belong to the economists who came after Smith. The next chapter takes up the first and most formidable of them.
---
In the spring of 1856, at a banquet in London celebrating the anniversary of the Chartist newspaper *The People's Paper*, Karl Marx rose to give a speech that contained, in compressed form, the argument that would occupy his life's work. "There is one great fact, characteristic of this our nineteenth century," he said, "a fact which no party dares deny. On the one hand, there have started into life industrial and scientific forces, which no epoch of the former human history had ever suspected. On the other hand, there exist symptoms of decay, far surpassing the horrors recorded of the latter times of the Roman Empire." The speech concluded with a metaphor that Heilbroner, in *The Worldly Philosophers*, singled out as the key to Marx's entire vision: "In our days, everything seems pregnant with its contrary."
That formulation — everything pregnant with its contrary — is the most precise description available of the AI transition's distributional character. The technology that empowers also disempowers. The tool that democratizes also concentrates. The capability that liberates also enslaves. And the distribution of empowerment and disempowerment, democratization and concentration, liberation and enslavement follows contours that Marx identified in the 1850s with an analytical precision that has survived every subsequent attempt to consign his work to the dustbin of discredited prophecy.
Heilbroner, who devoted an entire book — *Marxism: For and Against* — to disentangling Marx's analytical successes from his predictive failures, was characteristically balanced on this point. Marx got the mechanism right and the outcome wrong. The mechanism — that machinery transforms not just what is produced but the social relations under which production occurs, concentrating power in the hands of those who own the means of production and reshaping the experience of those who labor with them — is as operative today as it was in the textile mills of Manchester. The outcome Marx predicted — that this concentration would inevitably produce revolutionary class consciousness and the overthrow of the capitalist order — did not materialize, because institutional adaptations (labor unions, social insurance, progressive taxation, public education) absorbed enough of the shock to prevent the revolutionary rupture Marx considered both inevitable and desirable.
The question the AI transition poses to Marx's framework is whether those institutional adaptations, developed across a century of industrial and post-industrial capitalism, are adequate to absorb a technological shock of the magnitude now underway. The question is not abstract. It has a specific empirical form, and the evidence available in 2026 permits at least a preliminary answer.
Consider first the concentration of ownership. Marx's central insight about machinery was that it is not merely a tool — it is capital, and capital has a logic. The logic of capital is accumulation: the conversion of surplus into more capital, the reinvestment of profit into expanded productive capacity, the relentless drive to grow that Heilbroner, in *The Nature and Logic of Capitalism*, identified as the defining characteristic of the capitalist order. The machine, in Marx's framework, is the physical embodiment of this logic — capital made tangible, productive, self-expanding.
The AI infrastructure of 2026 embodies this logic with a purity that Marx might have found both vindicating and terrifying. The large language models that power Claude, GPT, Gemini, and their successors represent perhaps the most concentrated investment of capital in productive capacity in human history. The training runs alone cost hundreds of millions of dollars. The data centers that house the models require billions in infrastructure. The research teams that develop them represent the most expensive assemblage of human talent ever directed at a single productive purpose. And the resulting capability — the capacity to perform cognitive labor across virtually every domain of human expertise — is owned by a handful of companies whose market capitalization exceeds the GDP of most nations.
This is not incidental to the technology. It is the technology's economic structure. The forty-seven million developers who use AI coding tools are not, in any traditional sense, employees of Anthropic or OpenAI. They are independent contractors, freelancers, startup founders, employees of other companies. But they are dependent on infrastructure they do not own, cannot replicate, and increasingly cannot work without. The relationship between the individual developer and the AI platform bears a structural resemblance to the relationship Marx identified between the handloom weaver and the factory owner: the weaver owned his loom; the factory worker depended on the owner's machinery. The developer once owned her skills; the AI-augmented developer depends on the platform's capability.
Marx would have called this a transformation of social relations. He would have been right. The developer's relationship to her work, her employer, her profession, and her sense of her own competence has been altered by the introduction of a tool she depends on but does not control. When Anthropic changes its pricing, her economics change. When OpenAI updates its model, her workflow changes. When Google alters its terms of service, her legal exposure changes. She is more productive than she has ever been. She is also more dependent — on infrastructure, on access, on the continued goodwill and business viability of companies whose interests may not align with hers.
Heilbroner's treatment of Marx in *Marxism: For and Against* provides the framework for understanding why this matters. Heilbroner argued that Marx's most durable contribution was not his prediction of revolution but his insistence that economic analysis must account for power — that the question "How much is produced?" cannot be separated from the question "Who controls the means of production, and what does that control enable them to do?" Mainstream economics, in its pursuit of mathematical rigor, had largely abandoned this question, treating the distribution of power as exogenous to the economic model — a political fact to be noted, perhaps, but not an economic variable to be analyzed.
The AI transition makes this abandonment untenable. The distribution of power between AI platform owners and AI platform users is not a political footnote to the productivity story. It is the central economic fact of the transition. The productivity gains are real. A developer using Claude Code can produce in hours what previously required weeks. But the gains flow through infrastructure owned by companies that capture value at every layer: subscription fees, API charges, data that improves the models, network effects that deepen the dependency.
The developer's increased productivity does not automatically translate into increased economic independence. It may translate into increased dependency — higher output, certainly, but output that is contingent on continued access to a platform whose terms the developer did not set and cannot negotiate.
Consider now the other side of Marx's machinery question: empowerment. Marx was not wrong about machinery's capacity to increase productive power. He was, in fact, the most articulate celebrant of capitalism's productive achievements in the entire canon of economic thought. *The Communist Manifesto* contains passages of almost breathless admiration for the bourgeoisie's productive accomplishments — "wonders far surpassing Egyptian pyramids, Roman aqueducts, and Gothic cathedrals." Marx did not deny that machinery expanded human capability. He denied that the expansion was distributed in a way that served the humans who operated the machinery.
The AI transition presents the same duality. For the developer in Lagos — whose situation is documented in *The Orange Pill* — the AI coding assistant represents genuine empowerment. Capabilities that were previously available only to well-resourced teams in wealthy economies are now accessible to an individual with an internet connection and a monthly subscription. The imagination-to-artifact ratio — the distance between a human idea and its realization — has collapsed for this developer in a way that is historically unprecedented. Ideas that would have died for lack of resources can now be realized. Products that would never have been built can now be shipped. The floor of who gets to participate in the global economy of software production has risen measurably.
This is real. It matters. And Marx would have recognized it immediately — not as a refutation of his analysis but as a confirmation of it. The machinery is productive. The question is who captures the value of that productivity. The developer in Lagos builds a product. The product runs on infrastructure owned by Amazon (AWS), powered by AI owned by Anthropic, distributed through platforms owned by Apple and Google. At every layer, the infrastructure owner extracts value. The developer's increased capability is real. The developer's share of the value her capability creates is determined by the power relations embedded in the infrastructure stack — power relations she did not choose, cannot alter, and may not even be aware of.
Everything pregnant with its contrary. The tool that empowers the individual developer simultaneously deepens her dependence on the platform. The technology that democratizes access simultaneously concentrates ownership. The productivity gain that benefits the worker simultaneously benefits — disproportionately, structurally, inevitably — the owner of the capital that makes the productivity gain possible.
Heilbroner would have insisted on one further point, the point he made with greatest force in *21st Century Capitalism*: that the resolution of this tension is political, not technical. Markets do not distribute power equitably. They distribute power according to bargaining position, and bargaining position is determined by ownership of assets that are difficult or impossible to replicate. The AI platform owners possess such assets — the models, the data, the infrastructure, the research teams. The AI platform users, however skilled, however productive, however indispensable their judgment and creativity, do not possess equivalent bargaining power.
The institutional adaptations that absorbed the shock of previous technological transitions — labor unions, progressive taxation, antitrust regulation, public education — were responses to this asymmetry. They did not eliminate it. They moderated it, enough to prevent the revolutionary rupture Marx predicted, enough to create the broad-based prosperity that characterized the postwar decades in advanced economies. Whether the existing institutional toolkit is adequate to moderate the asymmetry of the AI transition, or whether new institutional forms are required, is the question that will define economic policy for the next generation.
Marx would have insisted that the question is urgent. Heilbroner would have insisted that the question is open — that the outcome depends on choices, not on the logic of the technology. The machinery question in the digital age is the same question Marx asked in the industrial age: Does the machine serve the worker, or does the worker serve the machine? The answer, as Heilbroner understood better than anyone, is not determined by the machine. It is determined by the institutions that human societies build around it.
---
In the autumn of 1928, John Maynard Keynes traveled to Cambridge to deliver a lecture he had been thinking about for years. The Great Depression had not yet arrived — the crash was still a year away — and Keynes was in an unusually optimistic frame of mind. He had been running numbers on the rate of technological progress, and the numbers pointed toward a conclusion that he found both exhilarating and deeply unsettling. At the current pace of improvement, within a century — by 2028 — the productive capacity of advanced economies would be sufficient to satisfy all material human needs with a fraction of the labor currently required. His grandchildren, Keynes announced, would work perhaps fifteen hours a week. The rest of their time would be devoted to the problem that humanity had never before had the luxury of facing: what to do with freedom.
The essay, published in 1930 as "Economic Possibilities for Our Grandchildren," is one of the most prescient and most spectacularly wrong predictions in the history of economic thought. Prescient because Keynes was right about the productive capacity. GDP per capita in the United Kingdom has increased roughly sixfold since 1930. The wealth exists. The technological capability exists. The material conditions for a fifteen-hour workweek are more than satisfied. Spectacularly wrong because the fifteen-hour workweek never arrived. Working hours in advanced economies have declined only modestly — from roughly forty-seven hours per week in 1930 to roughly thirty-eight in 2020. And for the professional class — the lawyers, financiers, technologists, and consultants who constitute the upper stratum of the knowledge economy — hours have, if anything, increased. The sixty-hour week is not unusual. The eighty-hour week is not unheard of. Keynes's grandchildren are not lounging in gardens, reading philosophy, attending concerts. They are answering emails at midnight.
Heilbroner understood the significance of Keynes's failure — understood it not as an error of economic forecasting but as a revelation of something deeper about the relationship between productive capacity and human self-understanding. In *Visions of the Future*, Heilbroner argued that a society's capacity to absorb technological change depends not on its economic resources but on its cultural vision — its image of what the future is for. A society that envisions the future as material progress will channel every productivity gain into more production. A society that envisions the future as human liberation might channel those gains into leisure, education, contemplation, art. Keynes assumed that once material needs were satisfied, the cultural vision would naturally shift from production to liberation. It did not. The culture had internalized work — not merely as an economic necessity but as a moral imperative, a marker of social worth, an identity.
Heilbroner's diagnosis was structural, not psychological. The problem was not that individuals were incapable of enjoying leisure. The problem was that the institutions of capitalist society — the labor market, the education system, the tax code, the social expectations embedded in every performance review and college application and dinner party conversation — were organized around the assumption that productive work is the purpose of human life. To choose leisure in such a system is not merely to forgo income. It is to forgo status, identity, social legibility. The person who works fifteen hours a week in a society organized around sixty-hour weeks is not free. That person is suspect.
This diagnosis illuminates the AI transition with a clarity that no purely technical analysis can provide. The Berkeley study's central finding — that AI tools intensify rather than reduce cognitive labor — is not a paradox. It is the latest expression of a pattern that has been operating since well before Keynes wrote his essay and that Heilbroner identified as a structural feature of capitalist civilization. Productivity gains are channeled into more production because the institutions of society are organized to channel them there, and the individuals who inhabit those institutions have internalized the channeling so completely that it feels not like a constraint but like a choice.
The mechanism operates with particular force in the domain of cognitive labor, because cognitive labor — unlike physical labor — has no natural stopping point. A ditch can only be so deep. A field can only be plowed so many times. Physical labor encounters physical limits that force rest. Cognitive labor encounters no such limits. There is always another email, another analysis, another meeting, another prompt. The body tires, but the mind can be kept running — by caffeine, by anxiety, by the particular stimulation of a tool that generates responses in seconds and creates the illusion that every moment of non-engagement is a moment of waste.
William Stanley Jevons, writing in 1865, observed that improvements in the efficiency of steam engines had not reduced coal consumption but increased it. More efficient engines made coal-powered production cheaper, which expanded the range of activities for which coal-powered production was economically viable, which increased total demand for coal. The paradox — that efficiency increases consumption — operates because demand is not fixed. It is elastic, constrained by cost. Reduce the cost, and demand stretches.
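Jevons's mechanism can be made concrete with a toy model. The sketch below is illustrative only — the constant-elasticity demand curve, the elasticity value, and the function names are assumptions of mine, not figures from Jevons or from this book. It shows the one condition that makes the paradox bite: when demand for the engine's output is elastic enough (elasticity above one), cutting the fuel needed per unit of output *raises* total fuel burned.

```python
# Toy constant-elasticity model of the Jevons Paradox (illustrative
# assumptions throughout, not data from the text).
#
# Demand for output: Q = A * p**(-eps), where the price p of a unit of
# output is taken as proportional to the fuel it consumes, f.
# Total fuel burned is then fuel-per-unit times output demanded.

def total_fuel(fuel_per_unit: float, eps: float = 1.5, a: float = 100.0) -> float:
    """Total fuel consumed when output demand has constant elasticity eps."""
    price = fuel_per_unit                  # price proportional to fuel cost
    output_demanded = a * price ** (-eps)  # elastic demand for output
    return fuel_per_unit * output_demanded

old_engine = total_fuel(1.0)  # 1 unit of fuel per unit of output
new_engine = total_fuel(0.5)  # engine twice as efficient

# With eps > 1, demand stretches more than efficiency saves:
# the more efficient engine burns MORE coal in total.
assert new_engine > old_engine
```

The same arithmetic, with "fuel" replaced by "hours of attention," is why the forty-minute task does not yield three hours twenty minutes of leisure: the cost of cognitive output falls, latent demand for that output expands, and total hours worked can rise rather than fall.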
The Jevons Paradox applies to cognitive labor with uncomfortable precision. AI tools make cognitive work more efficient. A task that previously required four hours now requires forty minutes. The naive expectation is that the worker will complete the task in forty minutes and spend the remaining three hours and twenty minutes at leisure. The actual outcome is that the worker completes the task in forty minutes and then takes on additional tasks — either voluntarily, because the tool makes it possible and the internal imperative makes it compelling, or involuntarily, because the manager, observing the efficiency gain, adjusts expectations upward. The freed time is not freed. It is reallocated — to more production, more output, more work.
Heilbroner, had he lived to see the Berkeley data, would not have been surprised. He would have been saddened but not surprised, because the data confirms the argument he spent his career making: that the economic problem is never purely economic. It is cultural, institutional, and — at its deepest level — a matter of what a society believes human life is for. A society that believes human life is for production will convert every efficiency gain into more production, regardless of the technology, regardless of the wealth, regardless of the fact that the material justification for the intensity evaporated decades ago.
The cultural dimension becomes even more striking when examined through the lens of what Heilbroner called, in *Visions of the Future*, the three temporal orientations. In "the distant past" — the pre-capitalist era — societies imagined the future as essentially continuous with the present. The son would live as the father lived. Change was cyclical, not progressive. In this orientation, the question of what to do with technological surplus simply does not arise, because technological surplus is not expected.
In "yesterday" — the era of capitalist modernity — societies imagined the future as progress. Tomorrow would be better than today, and the mechanism of improvement was productive growth. In this orientation, every technological gain is absorbed into the project of further growth, because growth is the definition of progress. Keynes inhabited this orientation, and his failure was the failure of the orientation itself: he assumed that the culture would recognize the point at which growth had satisfied material needs and would then shift to a different project. The culture did not recognize the point, because within the orientation of "yesterday," there is no point at which growth has been sufficient. Growth is its own justification.
In "today" — the era Heilbroner described as one of apprehension and uncertainty — societies no longer have a confident image of the future. The future is not imagined as continuous with the present (too much is changing too fast) or as progress (the side effects of progress — environmental destruction, inequality, nuclear risk — have undermined the narrative). The future is a genuine unknown, and the response to that unknown is a kind of productive anxiety — a compulsion to do more, faster, not because the doing leads anywhere in particular but because the alternative, sitting with the uncertainty, is intolerable.
The AI transition is occurring in this third period, and this may be its most consequential feature. A society that does not know what the future is for — that has lost the confident narrative of progress without replacing it with an alternative vision — responds to increased productive capability not with liberation but with intensification. The tool makes more work possible. The anxiety about the future converts possibility into compulsion. The worker works harder not because the work leads toward a known destination but because the alternative — pausing, reflecting, asking what all the production is actually for — is too frightening to contemplate.
This is not a technology problem. It is a vision problem, and Heilbroner's framework makes clear why the standard remedies — better work-life balance policies, corporate wellness programs, mandatory vacation days — are inadequate. They treat the symptom while leaving the cause untouched. The cause is not that workers lack the opportunity to rest. The cause is that the culture — the institutions, the incentive structures, the internalized beliefs about what a life well-lived looks like — converts every opportunity for rest into an opportunity for more production.
The economic concept that captures this dynamic most precisely is what economists call "induced demand." Build a new highway lane, and traffic increases to fill it. The new lane does not reduce congestion. It induces new driving that fills the new capacity. Build an AI tool that frees three hours of a knowledge worker's day, and new tasks expand to fill those three hours. The tool does not reduce work. It induces new work that fills the new capacity.
Induced demand, like the Jevons Paradox, operates because the constraint was never on the supply side. It was on the demand side — but the demand was latent, waiting for the constraint to be lifted. There were always more emails to write, more analyses to run, more products to ship. The constraint was the cost — in time, in effort, in the finite capacity of the human mind to perform cognitive labor. AI reduces that cost. Demand expands. Hours fill.
Keynes believed that humanity would solve the economic problem and then face the real problem — the problem of freedom, of meaning, of what to do with a life that is no longer defined by the struggle to survive. He was right that this would be the real problem. He was wrong about when it would arrive. It has not arrived because the economic problem, in a culture organized around production, is never solved. It merely changes form. First there is not enough. Then there is enough but the culture cannot accept that there is enough. Then there are tools that produce even more, and the question of whether there was already enough recedes further into the background.
Heilbroner, characteristically, offered no easy solution. He was not in the business of solutions. He was in the business of diagnosis — of seeing the economic arrangements of the present with sufficient clarity that the choices embedded in those arrangements become visible. The choice embedded in the AI transition is the same choice Keynes identified in 1930: whether humanity will use its productive capacity to liberate itself from toil or whether it will use its productive capacity to produce more toil. The choice is not being made consciously. It is being made by default — by institutions that channel productivity gains into growth, by incentive structures that reward output over reflection, by a culture that has forgotten what Keynes was trying to tell it: that the purpose of economic progress is not more progress. It is the creation of conditions under which human beings can live lives worthy of their capacities.
The AI tools are productive. The question Keynes asked — and the question that Heilbroner's framework renders inescapable — is productive for what. The answer, if the current trajectory holds, is: productive for more productivity. The cycle that defeated Keynes's prediction is cycling faster than ever, and the agencies that might break the cycle — the educational institutions, the cultural norms, the political movements that could articulate an alternative vision of what productive capacity is for — are, as Heilbroner warned in 1967, still rudimentary.
Joseph Alois Schumpeter arrived at Harvard in 1932 carrying two things that would define his American career: an exquisite wardrobe and an absolute conviction that capitalism's greatest virtue was its capacity for self-destruction. He had been, in rapid succession, a professor in Czernowitz, Austria's youngest finance minister, the president of a private bank that subsequently failed, and a professor again in Bonn. He had lost his mother, his wife, and his newborn son within the span of a few months. He dressed impeccably, spoke five languages, and liked to tell colleagues that he had set himself three ambitions in life — to be the greatest economist in the world, the finest horseman in Austria, and the greatest lover in Vienna — and that he had achieved two of the three. Heilbroner, who relished the biographical detail that revealed the temperament behind the theory, treated Schumpeter with the particular attentiveness he reserved for thinkers whose personalities were inseparable from their ideas.
The idea that defined Schumpeter's legacy — creative destruction, the "perennial gale" that sweeps through capitalist economies as innovation renders existing industries obsolete while conjuring new ones into existence — was advanced in *Capitalism, Socialism and Democracy* with a confidence that bordered on exuberance. The destruction was not a flaw in the system. It was the system's animating force. The horse-and-buggy industry was destroyed by the automobile, the sailing ship by the steamer, the icebox by the refrigerator. In each case, the destruction was real — jobs were lost, investments were wiped out, communities built around the old industry were devastated. And in each case, the creation that followed exceeded the destruction — more jobs, more wealth, more human capability in the new industry than the old one had ever contained.
Schumpeter's framework carried an implicit temporal promise: the gap between destruction and creation would close. The transition would be painful but brief. The new industries would absorb the workers displaced from the old ones, not immediately but within a period short enough that the costs could be borne without permanent damage to the social fabric. This promise was supported by the historical record Schumpeter drew upon — the transitions from agriculture to manufacturing, from artisanal production to factory production, from the first industrial revolution to the second. In each case, the creation eventually exceeded the destruction. The long arc bent toward expansion.
Heilbroner, in *The Worldly Philosophers*, acknowledged Schumpeter's analytical power while noting the gap — not a logical gap but a moral one — between the elegance of the framework and the reality it described. Creative destruction is experienced differently depending on where you stand in the gale. For the entrepreneur launching the new industry, it is exhilarating — the very essence of capitalist vitality. For the worker in the industry being destroyed, it is catastrophic — the loss not just of income but of skill, identity, community, the daily structure that organized a life. Schumpeter's framework accounts for both experiences in the aggregate. It does not account for them at the level of the human being who must live through the transition.
The AI transition has made this gap between aggregate and individual experience newly visible, because the destruction is targeting a class of worker that previous waves of creative destruction largely spared. The textile workers of Manchester, the buggy-whip manufacturers, the switchboard operators — these were workers whose skills, however genuinely developed, were classified by the economic hierarchy as routine, manual, replicable. The knowledge workers targeted by AI — software engineers, lawyers, financial analysts, radiologists, designers, writers — are workers whose skills were classified as complex, cognitive, irreplaceable. They invested years, sometimes decades, in developing expertise that the market rewarded handsomely precisely because it was difficult to acquire and difficult to replicate.
The destruction of routine skills, however painful for the individuals involved, fits comfortably within Schumpeter's framework. The skills were routine. The work was, by definition, susceptible to mechanization. The creative destruction was predictable, even if the timing was not. But the destruction of complex cognitive skills — the skills that were supposed to be destruction-proof, the skills that represented the highest form of human capital, the skills that entire educational systems were designed to produce — challenges the framework in a way that Schumpeter did not anticipate and that his intellectual heirs have not adequately addressed.
The challenge is not that AI can perform complex cognitive tasks. Schumpeter's framework can accommodate that: the gale blows harder, the destruction reaches higher, but the creation will follow. The challenge is the speed of the destruction relative to the speed of the creation. Previous waves of creative destruction unfolded over decades. The transition from horse to automobile took roughly thirty years. The transition from typewriter to word processor took twenty. The transition from physical retail to e-commerce has been ongoing for a quarter century and is still incomplete. These timescales, while painful for the individuals caught in the transition, were long enough for institutional adaptation — for retraining programs to be developed, for new industries to mature, for the labor force to be gradually reallocated from declining sectors to growing ones.
The AI destruction is operating on a timescale measured in months. The software industry's trillion-dollar repricing — the phenomenon described in *The Orange Pill* as the "death cross" — occurred in less than eight weeks. The skill set of a senior software engineer, built over fifteen years, was devalued in a single quarter. The gap between the destruction of old expertise and the creation of new forms of value is not closing at the pace Schumpeter's framework assumed. It is widening, because the technology that destroys is the same technology that accelerates the pace of subsequent destruction, creating a compounding effect that the historical analogies cannot capture.
Heilbroner's analysis of Schumpeter, filtered through his broader argument about institutional capacity, illuminates why the speed matters so much. In *The Making of Economic Society*, Heilbroner traced the development of the institutional infrastructure — labor laws, educational systems, social insurance, professional licensing — that absorbed the shocks of previous technological transitions. This infrastructure was built over decades, often in response to crises that had already caused enormous human suffering. The eight-hour day was legislated after generations of sixteen-hour shifts. Unemployment insurance was created after the Great Depression had already destroyed millions of livelihoods. The institutional response lagged the technological shock by years, sometimes by generations — and the gap between shock and response was the space in which the human cost of creative destruction was paid.
The AI transition is accelerating the shock while the institutional response operates at its historical pace. The technology moves in months. The institutions — education systems, labor regulations, professional licensing bodies, corporate governance structures — move in years or decades. The gap between the pace of destruction and the pace of institutional adaptation is arguably wider today than at any previous point in the history of capitalist economies, and the width of that gap is the measure of the human cost that will be paid during the transition.
Consider what this means in practice. A university curriculum in computer science takes four years to complete. The skills taught in the first year of that curriculum may be substantially obsolete by the time the student graduates — not because the education was poor but because the pace of technological change has exceeded the pace at which educational institutions can adapt. The student has invested four years and, in many cases, substantial debt in acquiring skills that the market valued when the investment began and may not value when the investment matures. This is not a failure of the student. It is a failure of the institutional infrastructure to keep pace with the technology.
The same temporal mismatch operates in the corporate context. Companies invest in training programs that take months to develop and deploy. By the time the training is delivered, the tools it trains workers to use may have been superseded. Retraining programs assume a stable target — a defined set of skills that the retrained worker will need. The AI transition does not provide a stable target. The skills that are valuable today may not be valuable next quarter, and the skills that will be valuable next year are not yet identifiable with confidence.
Schumpeter's response to this concern would have been characteristic: the market will sort it out. Entrepreneurs will identify the new opportunities. Capital will flow to the new industries. Workers will retrain, relocate, adapt. The gale blows, and what emerges from the wreckage is stronger, more productive, more capable than what came before. This response is not wrong in the aggregate. Over sufficient time, the creation does tend to exceed the destruction. The question is whether "sufficient time" is a period that the current generation of displaced workers can survive without institutional support — and the answer, based on every previous technological transition Heilbroner examined, is no.
The institutional response to creative destruction has never been automatic. It has always been fought for — by labor movements, by reformers, by political actors who insisted that the aggregate efficiency of the market was not a sufficient justification for the suffering of the individuals caught in the gale. The eight-hour day was not a market outcome. It was a political achievement, won against the resistance of employers who preferred the sixteen-hour shift that the market, left to its own devices, produced. Universal public education was not a market outcome. It was an institutional innovation, designed to ensure that the capabilities the new economy required would be broadly distributed rather than concentrated in the children of the already wealthy.
The AI transition requires institutional innovations of comparable ambition — and comparable political will. The nature of these innovations is the subject of later chapters. Here, the point is more fundamental: Schumpeter's gale is blowing. The destruction is real, visible, and operating at a speed that exceeds the historical precedent on which Schumpeter's confidence was based. The creation will follow — it always has — but the gap between destruction and creation is the space in which the human cost is paid, and the width of that gap is determined not by the technology but by the quality and speed of the institutional response.
Heilbroner, who spent his career insisting that economic life is always also moral life, would have added one further observation. The workers caught in the gale are not abstractions. They are the senior engineer who spent fifteen years building expertise that the market no longer rewards at its previous rate. They are the lawyer whose knowledge of contract analysis, built through thousands of hours of patient study, can now be approximated by a tool that costs a hundred dollars a month. They are the radiologist, the financial analyst, the designer, the writer — the entire stratum of knowledge workers who believed, with good reason, that their cognitive skills represented a durable form of human capital.
They were not wrong to believe this. Their skills were genuinely valuable, genuinely hard to acquire, genuinely the product of sustained intellectual effort. The gale did not destroy their skills. It destroyed the economic premium that those skills commanded — which is, in a market society, effectively the same thing. The skill remains. The market for the skill has changed. And the individual who possesses the skill must now navigate a landscape in which the investment of a decade or more in developing a specific form of expertise may yield diminishing returns, while the capacity to direct an AI tool — a capacity that requires judgment, breadth, and adaptability rather than deep specialization — commands the premium that expertise once held.
Schumpeter would have seen this as the system working. Heilbroner would have seen it as the system working the way it always works — productively in the aggregate, brutally at the individual level — and would have insisted that the brutality is not a necessary cost of progress but a failure of institutional imagination. The cost can be reduced. The gap can be narrowed. The gale cannot be stopped, but the people in its path can be given shelter, if the society possesses the political will and the institutional creativity to build it.
The question of whether this society possesses that will is the question that the next chapter addresses — not as an economic question but as something prior to economics, something Heilbroner spent his last decades trying to name. The question of vision. The question of what kind of future a society is capable of imagining for itself.
---
Robert Heilbroner spent the last two decades of his career grappling with a question that most economists regarded as outside their discipline: What happens to a society that loses its capacity to imagine the future?
The question emerged from a lifetime of studying how societies provision themselves — how they produce, distribute, and consume the material necessities of life. But it went deeper than provisioning. Heilbroner had observed, across five decades of scholarship, that every economic arrangement rests on something that is not itself economic: a set of assumptions about what human life is for, what the future will look like, and what role human agency plays in shaping that future. These assumptions — which Heilbroner called "visions" — are not decorative. They are structural. They determine the institutional choices that a society makes, and the institutional choices determine the material outcomes.
In *Visions of the Future*, published in 1995, Heilbroner organized the history of these assumptions into three periods, each characterized by a distinct temporal orientation — a distinct way of relating to the future.
The first period, which Heilbroner called "the distant past," encompasses the entirety of human history prior to the emergence of capitalist modernity — roughly everything before the seventeenth century. In the distant past, the future was imagined as essentially continuous with the present. The son would live as the father lived. The rhythms of agricultural production, the hierarchies of feudal obligation, the theological certainties of the medieval worldview — all of these created a temporal orientation in which change was cyclical, not progressive. Seasons turned. Dynasties rose and fell. But the fundamental structure of economic and social life was assumed to be permanent, ordained by God or nature or custom. In such a world, the question of what to do with technological surplus does not arise, because technological surplus is not expected, and the institutions of society are organized to reproduce the existing order rather than to transform it.
The second period, "yesterday," corresponds to the era of capitalist modernity — roughly the seventeenth century through the late twentieth. Here, the temporal orientation shifted decisively. The future was no longer imagined as continuous with the present. It was imagined as progress. Tomorrow would be better than today — more productive, more wealthy, more technologically capable, more materially abundant. This vision was not merely an attitude. It was the animating force of an entire civilization. The institutions of capitalist modernity — the corporation, the research university, the patent office, the stock exchange — were designed to produce progress, to channel human energy and capital toward the expansion of productive capacity. Growth was not merely desirable. It was the definition of health. An economy that did not grow was sick. A society that did not progress was stagnant.
The third period — "today" — is the one Heilbroner diagnosed as genuinely new. Beginning in the latter decades of the twentieth century and accelerating through the early decades of the twenty-first, the confident vision of progress began to fracture. The side effects of progress — environmental destruction, nuclear risk, persistent inequality despite growing wealth, the anomie of affluent societies — undermined the narrative without replacing it. The future was no longer imagined as continuous with the present (too much was changing too fast for that) or as a story of progress (the evidence that progress carried catastrophic risks was too abundant to ignore). The future became a genuine unknown — a source of apprehension rather than confidence, a space of threat as much as promise.
Heilbroner's diagnosis was not that "today" represented a failure of nerve or a temporary loss of confidence that the next economic expansion would restore. It was that the temporal orientation of "today" reflected a genuine change in the objective conditions of human life — a change in which the forces of technical transformation had outrun the institutional and cultural capacity to absorb them. The society that had unleashed extraordinary productive forces no longer possessed a coherent vision of what those forces were for. And without such a vision, the institutional choices that would determine whether the forces served human flourishing or undermined it were being made by default rather than by design.
The AI transition exists squarely within Heilbroner's third period, and this is arguably its most consequential feature. The technology has arrived in a society that does not know what the future is for — that has lost the confident narrative of progress without replacing it with an alternative framework for relating to what comes next.
The evidence for this temporal disorientation is ubiquitous. The discourse that erupted in the winter of 2025, documented in *The Orange Pill*, is a nearly perfect specimen of a society unable to form a coherent vision of a technological future. The triumphalists and the elegists occupy positions that map precisely onto Heilbroner's framework. The triumphalist envisions the AI future as progress — the continuation and acceleration of the growth narrative that animated capitalist modernity. More productivity, more wealth, more capability, more of everything that "yesterday's" institutions were designed to produce. The elegist envisions the AI future as loss — the destruction of depth, craft, meaning, the things that "yesterday's" growth narrative failed to account for. The two positions are incompatible, and the discourse oscillates between them without achieving synthesis.
But the most significant group — the one Heilbroner's framework illuminates most powerfully — is the silent middle: the people who feel both things simultaneously and cannot articulate a coherent position because the cultural resources for holding both things at once are missing. The parent at the kitchen table who uses AI tools at work and feels the genuine expansion of capability, and who also lies awake wondering what her child's education is preparing the child for, and who cannot answer the question because the answer depends on a vision of the future that does not exist.
This is Heilbroner's "today" in its purest form: the temporal orientation of apprehension. Not despair — the silent middle is not despairing. Not optimism — the evidence for unalloyed optimism has been thoroughly complicated. Apprehension: the state of a consciousness that recognizes the future as genuinely open, genuinely uncertain, genuinely dependent on choices that have not yet been made and whose consequences cannot be predicted with confidence.
The critical point in Heilbroner's analysis — the point that elevates it above mere diagnosis into something approaching prescription — is that the vision itself is constitutive. A society that envisions AI as liberation builds different institutions than a society that envisions AI as displacement. The triumphalist vision leads to deregulation, market-driven deployment, the assumption that the productivity gains will distribute themselves through the normal mechanisms of competition and labor market adjustment. The elegist vision leads to protectionism, resistance, the attempt to preserve existing institutional arrangements against the pressure of technological change. The builder's vision — the vision that *The Orange Pill* articulates as the Beaver's position — leads to a specific kind of institution-building: structures designed to capture the productivity gains while redistributing their benefits, to preserve the conditions for deep work while expanding the floor of who gets to participate.
The contest between these visions is not a parlor game. It is the most consequential political debate of the present moment, because the vision that prevails will determine the institutional landscape that shapes the transition. And the institutional landscape will determine, over the next generation, whether the AI transition produces broad-based flourishing or concentrated wealth alongside widespread precarity.
Heilbroner was clear-eyed about the difficulty of sustaining the builder's vision. It requires holding contradictory truths simultaneously — the truth that AI expands capability and the truth that it concentrates power; the truth that the technology is genuinely liberating and the truth that the culture converts every liberation into a new form of servitude. Holding contradictory truths requires a kind of intellectual and moral stamina that is rare in individuals and rarer still in political movements. The triumphalist vision is simpler, more energizing, more compatible with the institutional structures that already exist. The elegist vision is simpler too, in its way — it has the clarity of refusal, the moral legibility of saying no. The builder's vision is harder because it says yes and no simultaneously, because it insists that the technology must be embraced and constrained, that the gains must be celebrated and redistributed, that the future must be shaped and accepted as genuinely uncertain.
Heilbroner would have recognized this difficulty. He spent his career studying societies that faced analogous choices and that mostly failed to sustain the hard position, defaulting instead to the simpler visions that the moment's dominant interests favored. The Industrial Revolution's institutional response came decades after the damage — the labor laws that moderated capitalism's excesses arrived only after generations of workers had already paid the cost of unmoderated capitalism. The question for the AI transition is whether the institutional response can be accelerated — whether the dams can be built before the flood rather than after.
The answer depends on whether the society can sustain a vision complex enough to encompass both the technology's promise and its peril. Heilbroner's framework suggests that this is possible but not automatic — that visions are not given but constructed, through political argument, institutional experimentation, cultural conversation. The vision of a society in which AI serves human flourishing rather than undermining it will not arrive spontaneously. It will be built, deliberately, by people who understand both the power of the current and the fragility of the ecosystem downstream.
The construction has barely begun. The contest between visions is ongoing. And the outcome — as Heilbroner insisted throughout his career — depends on choices that are being made now, in the absence of certainty, under conditions of genuine apprehension, by people who must act before they can be sure that their actions are right.
---
There is a question that appears at every major technological transition with the predictability of a recurring illness, and it is always the same question, and it is never adequately answered before the transition is well underway, and the failure to answer it in time is always, in retrospect, the most consequential failure of the period. The question is: Who gets what?
Heilbroner posed this question in nearly every book he wrote, because he understood that the question of distribution — who captures the gains of economic activity and who bears the costs — is not a secondary consideration that can be addressed after the primary business of growth has been attended to. Distribution is the primary business. Growth without distribution is accumulation, and accumulation without distribution is the concentration of power, and the concentration of power is the structural precondition for every social catastrophe that Heilbroner, as a student of economic history, had occasion to chronicle.
The question has a specific form in the AI transition, and the specific form is revealing. The gains are concentrated along three axes: in the companies that build the AI infrastructure, in the individuals who possess the judgment and adaptability to use the tools most effectively, and in the geographies that host the research, the capital, and the institutional ecosystems that support AI development. The costs are distributed along different axes: across the workers whose expertise is devalued, the communities whose economic bases are disrupted, the educational systems that must prepare students for a world that does not yet exist, and the nations that lack the capital, connectivity, and institutional capacity to participate in the transition on favorable terms.
The asymmetry between the concentration of gains and the distribution of costs is not incidental. It is structural — embedded in the logic of the technology and the economic system within which the technology is deployed. Heilbroner, in *The Nature and Logic of Capitalism*, argued that capitalism is best understood not as a system of free markets but as a regime — a system organized around the drive to accumulate capital, supported by institutions that facilitate accumulation and constrain the forces that might redistribute the accumulated wealth. The regime metaphor is illuminating because it captures something that the language of "markets" obscures: that the outcomes of market economies are not natural. They are produced by a specific set of institutional arrangements that favor certain outcomes over others, and the arrangements can be changed.
The AI infrastructure exemplifies the regime's logic with unusual clarity. The development of frontier AI models requires capital investments measured in billions of dollars — for compute, for data, for the research talent that commands salaries in the hundreds of thousands or millions. These investments are recoverable only at scale — the model must serve millions of users to justify the investment — which means that the economics of AI development favor a small number of very large companies over a large number of smaller ones. The market structure that results is not competitive in the textbook sense. It is oligopolistic: a handful of companies — Anthropic, OpenAI, Google DeepMind, Meta — control the frontier models, and the forty-seven million developers who depend on those models are price-takers rather than price-makers.
This concentration is not the result of market failure in the conventional sense. It is the result of the market working as designed — channeling capital toward the investments with the highest expected returns, which happen to be investments that require enormous scale and therefore favor enormous firms. The invisible hand, in this case, is pushing wealth upward with the same efficiency that Smith observed in the pin factory. The mechanism is different. The direction is the same.
The historical pattern is clear and Heilbroner documented it across multiple works. Every major technological transition begins with a period in which the gains are captured primarily by capital — by the owners of the new machinery, the new infrastructure, the new productive capacity. The steam engine enriched the factory owners before it enriched the factory workers. Electrification enriched the utility companies and the manufacturers before it enriched the households that eventually received affordable power. Computerization enriched the technology companies before it enriched the businesses and consumers who eventually adopted the technology.
The eventual distribution of the gains — the "eventually" that separates the initial concentration from the broad-based prosperity that technological transitions have historically produced — was not automatic. It was achieved through institutional intervention: progressive taxation that redistributed wealth from the winners to the public goods that benefited everyone; labor legislation that established minimum standards for wages, hours, and working conditions; public education that ensured the skills required by the new economy were broadly available rather than concentrated in the children of the already wealthy; antitrust regulation that prevented the most successful companies from using their market power to eliminate competition.
These institutions were not designed in advance. They were developed in response to crises — the labor crises of the industrial revolution, the financial crises of the early twentieth century, the social crises of the Great Depression. In each case, the institutional response lagged the technological shock by years or decades, and the gap was the period during which the human cost of the transition was highest. The generation that inhabited the gap — the generation that lived through the transition without the institutional infrastructure that would eventually moderate its effects — bore the cost of a social experiment whose benefits would accrue primarily to subsequent generations.
The AI transition is following this pattern with a fidelity that would be instructive if it were not so concerning. The gains are flowing to capital. The costs are flowing to labor. The institutional response is lagging the technological shock. And the generation that inhabits the gap — the workers, students, and communities that are navigating the transition in real time, without adequate institutional support — is bearing the cost.
The specific dimensions of this cost are worth enumerating, because abstract discussions of "distribution" can obscure the concrete reality of what maldistribution means for actual human beings.
The first dimension is wage compression. When AI tools enable a junior worker to produce output that previously required a senior worker, the wage premium for seniority compresses. This is visible in the software industry, where the productivity gap between junior and senior developers has narrowed significantly since the widespread adoption of AI coding assistants. The senior developer's wage reflected, in part, the scarcity of the skills she possessed. When those skills become less scarce — not because more humans have acquired them but because a machine can approximate them — the scarcity premium erodes.
The second dimension is geographic concentration. The gains from AI development are concentrated in a small number of metropolitan areas — the San Francisco Bay Area, Seattle, New York, London, and a handful of others — where the research talent, the venture capital, and the institutional ecosystems that support AI development are located. The costs are distributed across every community in which knowledge workers live, including communities that lack the capital, connectivity, and institutional capacity to benefit from the transition.
The third dimension, and the one Heilbroner's framework illuminates most powerfully, is the distribution of risk. In the old economy, the risk of technological disruption was distributed across the labor force in rough proportion to the routineness of the work. Routine workers faced higher displacement risk. Knowledge workers faced lower displacement risk. The implicit social contract — invest in education, develop complex skills, and the market will reward you with a degree of security — was imperfect but real. AI has broken this contract. The knowledge workers who made the largest investments in complex skills are now facing displacement risks comparable to those that routine workers have always faced. The distribution of risk has become more equal — not because routine workers face less risk, but because knowledge workers face more.
Heilbroner would have recognized this as a moment of political significance. When the risk of displacement extends to the professional class — the class that historically possessed the political influence, the institutional access, and the cultural authority to shape the institutional response to technological change — the political dynamics shift. The factory worker's displacement, however devastating to the individual, rarely mobilized the institutional resources necessary for a systemic response. The professional's displacement, by contrast, mobilizes different political energies — the energies of a class that is accustomed to being heard, accustomed to having its interests reflected in policy, accustomed to occupying the positions from which institutional innovation is directed.
Whether this political mobilization produces adequate institutional innovation is an open question. The historical record offers both precedent and caution. The New Deal was, in part, a response to the displacement of the middle class during the Great Depression — a political mobilization of people who had expected to be secure and who discovered, catastrophically, that they were not. The institutional innovations of the New Deal — Social Security, unemployment insurance, deposit insurance, securities regulation — were designed by and for a class that had experienced the failure of existing institutions to protect them. They worked, imperfectly but substantially, for four decades.
The AI transition may produce an analogous political mobilization. The professional class that is experiencing the devaluation of its expertise possesses the resources — educational, institutional, political — to demand an institutional response. Whether the response will be adequate depends on factors that Heilbroner's framework identifies but cannot predict: the quality of the institutional imagination, the political will to implement innovations that redistribute the gains of productivity, and the cultural capacity to articulate a vision of the future in which AI serves human flourishing rather than concentrated accumulation.
The distribution question is never settled permanently. It is renegotiated at every major technological transition, and the terms of the renegotiation depend on the balance of power between those who capture the gains and those who bear the costs. The AI transition has shifted this balance, concentrating gains in a smaller number of hands than any previous transition while distributing costs across a broader swath of the labor force. The historical pattern suggests that the institutional response will eventually arrive — that the dams will eventually be built. The question is whether "eventually" comes soon enough for the generation that is standing in the floodwaters now.
---
Heilbroner told a story, in one of his later essays, about visiting a factory in the 1970s and watching a man on an assembly line perform the same operation — attaching a component to a chassis — every forty-five seconds, eight hours a day, five days a week. The operation required no thought, no judgment, no variation. It was, by any economic measure, efficient: the worker produced at a rate that maximized the factory's output per labor hour. And it was, by any human measure, devastating: the worker's face had the particular blankness of a person who has ceased to expect anything from the hours between arrival and departure.
Heilbroner used the anecdote not to argue against factory production — he was too sophisticated an economist for that — but to illustrate a point that mainstream economics had systematically avoided. The point was that labor is not merely a factor of production. It is, for the human being who performs it, something altogether more consequential. It is the activity through which most people organize their days, define their competence, locate themselves in a social hierarchy, and derive — or fail to derive — a sense of purpose. The economist who measures labor only as an input that combines with capital to produce output, who evaluates a job only by its wage and its productivity, has missed the most important thing about what it means to work.
This insistence — that economic life is always also human life, and that the human dimensions of economic arrangements are not externalities to be noted in footnotes but central features to be examined with the same rigor applied to prices and quantities — runs through Heilbroner's entire body of work. It is what makes him a worldly philosopher rather than a technical economist. It is what made *The Worldly Philosophers* a book that millions of people read for pleasure rather than by assignment. And it is what makes his framework indispensable for understanding the AI transition, because the AI transition is transforming not just what workers produce but what work feels like — and the feeling, from the standpoint of the human being who must inhabit it, matters at least as much as the output.
The evidence from the early months of the AI transition suggests that the transformation of work's character is at least as significant as the transformation of work's productivity. The senior engineer described across multiple accounts in *The Orange Pill* — the one who oscillated between excitement and terror during his first days with AI tools — was not experiencing a productivity problem. His output was increasing dramatically. He was experiencing a meaning problem. The activity through which he had understood himself — the careful, patient, struggle-built work of writing code by hand, debugging by intuition, building systems through the accumulation of hard-won understanding — was being transformed into a fundamentally different kind of activity. He was now directing rather than building. Evaluating rather than creating. His hands, which had spent fifteen years on a keyboard translating thought into code, were now idle while the machine did the translation. His mind was active — more active, perhaps, than it had ever been — but the specific, embodied relationship between his effort and its product had changed in a way that felt, to him, like a loss.
Economists have a term for what this engineer was experiencing, though they rarely use it with the seriousness it deserves. The term is "deskilling" — the process by which the introduction of new technology reduces the skill requirements of a job, transforming complex work into simple work, craft into operation, expertise into supervision. The concept was developed by Harry Braverman in *Labor and Monopoly Capital*, published in 1974, and it drew on the same Marxian framework that Heilbroner analyzed in *Marxism: For and Against*. Braverman argued that deskilling was not an accident of technological progress but a deliberate strategy of management — a way of reducing labor costs by reducing the skill, and therefore the bargaining power, of the worker.
The AI transition complicates Braverman's framework in an important way. The engineer is not being deskilled in the traditional sense. His work has not been simplified. It has been elevated — from the mechanical execution of code to the architectural judgment about what code should exist. The difficulty has not decreased. It has migrated upward, to a higher cognitive floor. The struggle has not been eliminated. It has changed character — from the struggle of implementation to the struggle of vision.
And yet. The engineer's experience of the change is, phenomenologically, one of loss. Not because the new work is less valuable — he recognizes, intellectually, that it may be more valuable — but because the old work carried a specific quality of embodied engagement that the new work does not. The satisfaction of debugging a complex system by hand — of tracing the fault through layers of code, feeling the logic with something like tactile intuition, arriving at the fix through a process that was simultaneously intellectual and physical — this satisfaction does not transfer to the activity of reviewing AI-generated code for correctness. The review may be more important, in some abstract economic sense. It does not feel the same. And the feeling matters, because it is the feeling that makes work meaningful rather than merely productive.
Heilbroner would have recognized this distinction immediately, because it was the distinction he spent his career insisting on. The productivity of work and the meaningfulness of work are not the same thing, and they do not move in the same direction. Work can become more productive and less meaningful simultaneously — the assembly-line worker producing at maximum efficiency while experiencing maximum alienation is the paradigmatic case. The AI transition is producing a new version of this disjunction: workers who are more productive than they have ever been and who experience, alongside the productivity, a subtle but persistent erosion of the qualities that made work feel worthwhile.
The qualities are specific and worth naming. The first is mastery — the sensation of having developed, through sustained effort, a competence that is genuinely difficult and genuinely one's own. Mastery is not merely a psychological preference. It is, as Csikszentmihalyi demonstrated in his research on flow states, a fundamental component of human well-being — the condition under which people report the highest levels of satisfaction, engagement, and meaning. Mastery requires friction — the resistance of a material or a system to the practitioner's will, the failures that teach, the gradual accumulation of understanding through repeated encounter with difficulty. When AI eliminates the friction of implementation, it eliminates the substrate on which mastery is built. The practitioner may become more capable in the sense of producing more output. She does not become more masterful in the sense of having earned her capability through struggle.
The second quality is authorship — the experience of having made something that bears one's mark, that is recognizable as the product of a specific intelligence engaging with a specific problem. Authorship is not merely a matter of credit, though credit matters. It is a matter of the relationship between the maker and the made — the intimate connection between the person who shaped the code and the code that was shaped. When the machine writes the code and the human reviews it, the relationship changes. The human is no longer the author. She is the editor — a role that carries its own dignity but not the same phenomenological weight.
The third quality is community — the social bonds formed through shared struggle. The team that debugged a production crisis together at two in the morning, the pair of developers who spent a week wrestling with an architectural problem and emerged with both a solution and a friendship, the informal mentorship that occurs when a senior practitioner walks a junior one through a problem, not by providing the answer but by modeling the process of finding it — these social bonds are formed through shared difficulty, and they do not form with the same texture when the difficulty is eliminated.
Heilbroner, in Behind the Veil of Economics, argued that the discipline of economics had committed a fundamental error by separating the study of production from the study of human experience. The error was methodological — it made the mathematics simpler — but it was also moral, because it allowed economists to evaluate economic arrangements without accounting for the most important thing about them: what they feel like to the people who inhabit them. A job that pays well and produces efficiently but that strips the worker of mastery, authorship, and community is, by Heilbroner's standard, a failure — not an economic failure but a human one, and the distinction between economic and human failure is one that a discipline calling itself the study of human welfare cannot afford to maintain.
The policy implications are significant and largely unaddressed. The conversation about AI and labor is dominated by two frameworks: the employment framework (will people have jobs?) and the productivity framework (will the economy grow?). Both frameworks are important. Neither is sufficient. A world in which everyone has a job but no one's job is meaningful is not a world that serves human welfare, regardless of the GDP figures. A world in which the economy grows at unprecedented rates while the workers who produce the growth experience their work as empty — as supervision without mastery, as review without authorship, as coordination without community — is a world that has solved the wrong problem.
The harder problem — the problem that neither the employment framework nor the productivity framework addresses — is how to design work arrangements in which AI enhances rather than erodes the qualities that make work meaningful. This is a design problem, not a market problem. Markets optimize for productivity, not for meaning. The meaning must be designed in — through organizational structures that preserve the conditions for mastery, through collaborative practices that maintain the social bonds of shared effort, through educational investments that prepare workers not just for productivity but for the kind of judgment-intensive, vision-oriented work that carries intrinsic satisfaction.
Heilbroner, who ended nearly every book with an open question rather than a confident prescription, would likely have ended this chapter with the observation that the design has barely begun. The tools are powerful. The productivity is real. The institutional imagination required to ensure that the productivity serves human flourishing rather than undermining it is, as Heilbroner warned fifty years ago, still rudimentary. The rudiments are not nothing: the conversations about AI Practice, about protected mentoring time, and about organizational structures that preserve space for slow, friction-rich, deeply human work alongside the accelerated AI-assisted work are genuine institutional experiments. They are also, measured against the scale of the challenge, barely a beginning.
The assembly-line worker whose blankness haunted Heilbroner's memory was productive. He was also, in every way that matters to a human being, diminished by his productivity. The question for the AI transition is whether a technologically augmented workforce will be productive and enhanced, or productive and diminished in new ways that the productivity figures do not capture and that the economics profession, in its persistent tendency to separate production from experience, is not equipped to see. The answer will not be found in the data. It will be found in the institutions — the organizational structures, the educational practices, the cultural norms — that determine whether the work that remains for human beings after AI has claimed the mechanical parts is work that a human being can perform with dignity, with craft, and with the sense that the effort matters.
The eight-hour day did not exist until someone imagined it.
This is an obvious statement, and like most obvious statements, it conceals something profound. Before the eight-hour day was a law, before it was a demand, before it was a slogan painted on a banner and carried through the streets of Melbourne in 1856 — where stonemasons first won the concession — it was an idea in someone's head. An idea that did not correspond to any existing reality. An idea that the prevailing institutional framework had no category for, because the prevailing institutional framework assumed that the length of the working day was determined by the employer's needs and the worker's desperation, and that any attempt to impose an external limit on this arrangement was an interference with the natural order of economic life.
The stonemasons who marched in Melbourne were not requesting an adjustment to the existing system. They were proposing a new institution — a structural constraint on the organization of work that had no precedent in the history of capitalist production. The eight-hour day was not a reform. It was an invention. And the faculty that produced it — the capacity to envision a form of social organization that does not yet exist but that the moment demands — is the faculty that Heilbroner valued above all others and that the AI transition requires with an urgency that has no close historical parallel.
Heilbroner never gave this faculty a formal name, but "the institutional imagination" captures it. The concept pervades his work — in The Making of Economic Society, where he traces the evolution of economic institutions from feudal obligation through market exchange; in 21st Century Capitalism, where he examines the specific institutional requirements of a humane market system; in Visions of the Future, where he argues that a society's capacity to build new institutions depends on its capacity to envision a future that differs from the present. The institutional imagination is not the same as policy expertise. Policy expertise operates within existing institutional frameworks, optimizing their parameters, adjusting their mechanisms. The institutional imagination operates at a different level: it asks whether the existing frameworks are adequate to the moment, and when the answer is no, it invents new ones.
Every major technological transition in the history of capitalism has required institutional invention of this kind. The transitions that produced broad-based prosperity were the ones in which the institutional imagination was exercised with sufficient ambition and speed. The transitions that produced concentrated wealth and widespread suffering were the ones in which the institutional imagination failed — where the technology arrived faster than the institutions could adapt, and the gap was filled by the default mechanisms of the market, which distribute gains according to bargaining power rather than according to need or desert.
The Industrial Revolution is the canonical example. The technology arrived in the late eighteenth century. The institutional response arrived in the mid-to-late nineteenth century — labor laws, public education, factory regulation — and in some cases not until the twentieth. The gap between technology and institution was the period in which children worked in mills, families lived in industrial slums, and the life expectancy of a Manchester textile worker was seventeen years. The institutions that eventually closed the gap — the eight-hour day, compulsory education, workplace safety standards — were not incremental adjustments to the pre-industrial order. They were inventions. They had no precedent. They required someone to look at the existing arrangement and say: this is not adequate. Something new is needed. And then to imagine, in sufficient detail to be actionable, what the new thing might look like.
The AI transition is following the same pattern — technological shock outrunning institutional capacity — but with two features that distinguish it from every previous instance. The first is speed. The Industrial Revolution unfolded over decades. The AI transition is unfolding over months. The repricing of the software industry, the transformation of engineering workflows, the emergence of AI-assisted research and writing and legal analysis and medical diagnostics — these developments occurred within a single year. The institutional infrastructure — the educational curricula, the labor regulations, the professional licensing standards, the corporate governance frameworks — was designed for a world that changes on the timescale of decades. It is being confronted with a world that changes on the timescale of quarters.
The second distinguishing feature is that the AI transition targets the institutional class itself. Previous technological transitions displaced workers — factory hands, typists, switchboard operators — whose political influence was limited by their economic position. The AI transition is displacing knowledge workers — the lawyers, engineers, analysts, educators, and administrators who constitute the professional class from which institutional innovators have historically been drawn. The people who would normally design the institutional response are the people whose own positions are being disrupted. This creates a peculiar double bind: the professionals who possess the expertise to imagine new institutions are simultaneously experiencing the disorientation of having their own expertise devalued, which is not a mental state conducive to bold institutional design.
Heilbroner, in Behind the Veil of Economics, warned against the tendency of economic thought to treat institutional arrangements as given — as part of the landscape rather than as human constructions that can be reconstructed. The market, the corporation, the employment relationship, the educational system — each of these is an institution that was invented at a specific historical moment to solve a specific historical problem, and each can be reinvented when the problem changes. The tendency to treat them as permanent features of economic life rather than as provisional arrangements subject to revision is what Heilbroner considered the most dangerous form of intellectual conservatism — not conservatism in the political sense but in the cognitive sense, the inability to see the constructed as constructable.
Three domains demand institutional invention with particular urgency, and in each domain the existing institutional framework is not merely inadequate but actively counterproductive — designed for a world that has already ceased to exist.
The first is education. The modern university system was designed in the nineteenth century to produce specialists — to train students in specific disciplinary competencies that the industrial and post-industrial economy required. The assumption was that specialization, once acquired, would retain its value for decades — that the investment in a four-year degree would yield returns over a forty-year career. The AI transition has shattered this assumption. The skills taught in the first year of a computer science curriculum may have lost much of their relevance by the time the student graduates. The four-year timescale of the degree is mismatched with the quarterly timescale of technological change.
The institutional invention required is not merely curricular reform — though curricular reform is necessary. It is a reconceptualization of what education is for. If AI can perform the specialized operations that the current educational system trains students to perform, then the educational system must train students in the capacities that AI cannot perform: judgment, integration, the capacity to formulate questions rather than execute answers, the ability to synthesize across domains rather than drill within one. This represents not an adjustment to the existing educational model but a reversal of its fundamental orientation — from depth-first to breadth-first, from specialization to integration, from the production of experts to the cultivation of generalists capable of directing expertise.
Some institutional experiments are underway. The teacher who grades questions rather than essays — documented in The Orange Pill — is conducting an experiment in institutional invention at the classroom level. The company that reorganized around "vector pods" — small teams whose function is to determine what should be built rather than to build it — is conducting an experiment at the organizational level. These experiments are genuine and valuable. They are also, measured against the scale of the educational transformation required, barely visible. The gap between the experiments and the systemic institutional change that the moment demands is the space in which a generation of students is being educated for a world that will not exist by the time they graduate.
The second domain is labor regulation. The existing framework of labor law was designed for an economy in which the employment relationship was the primary mechanism through which work was organized, compensation was delivered, and benefits were distributed. The AI transition is eroding the employment relationship from multiple directions simultaneously. Freelance and contract work, already growing before AI, is accelerating as companies discover that AI tools allow smaller teams to accomplish what previously required larger ones. The gig economy, already contentious, is being transformed by AI agents that can perform tasks previously outsourced to human contractors. And the most fundamental assumption of labor law — that the worker sells time to the employer, and the employer compensates the worker for that time — is being undermined by a technology that makes the value of time radically unequal. An hour of work by a developer using AI tools is worth vastly more, in productive terms, than an hour by a developer without them. The time-for-money exchange that labor law assumes is becoming incoherent.
The institutional invention required is a framework that accounts for the value of output rather than the duration of input — that recognizes the AI-augmented worker as a fundamentally different economic actor than the pre-AI worker, and that adjusts the mechanisms of compensation, benefits, and labor protection accordingly. This is not a minor adjustment. It is a reconceptualization of the employment relationship itself — the most fundamental institution of capitalist economic life.
The third domain is governance — not in the narrow sense of AI regulation, which is the subject of an already extensive policy conversation, but in the broader sense of the institutional infrastructure that enables citizens to navigate the transition. Heilbroner distinguished, throughout his work, between supply-side institutions (which constrain the behavior of producers) and demand-side institutions (which empower the behavior of citizens). The existing AI governance conversation is overwhelmingly supply-side: what AI companies should be permitted to build, what disclosures they must make, what safety standards they must meet. These are important questions. They are also insufficient, because they address the technology without addressing the people who must live with it.
The demand-side institutional gap is where the most urgent invention is needed. Citizens need frameworks for understanding what AI is and what it is not — not technical frameworks but conceptual ones, the kind that enable informed decision-making about when to use the tools and when to resist them. Parents need guidance — not prohibitions, which are as ineffective as they are tempting, but frameworks for cultivating in their children the capacities that the AI economy will reward: judgment, curiosity, the ability to sit with uncertainty, the discipline to ask whether a question is worth pursuing before pursuing it. Workers need not just retraining programs but reconceptualization programs — institutional support for the cognitive transition from execution-oriented work to judgment-oriented work, a transition that is not merely a matter of learning new skills but of reformulating one's understanding of what professional competence means.
Heilbroner would have insisted, with the quiet stubbornness that characterized his intellectual style, that these institutional inventions will not emerge from the market. Markets are efficient at many things. Institutional imagination is not among them. Markets optimize within existing institutional frameworks. They do not redesign the frameworks themselves. The redesign requires political action — the deliberate, collective, democratically legitimate construction of new institutional forms by societies that have decided what kind of future they want and are willing to build the structures necessary to make that future possible.
The stonemasons who won the eight-hour day in Melbourne in 1856 did not wait for the market to deliver it. They organized. They articulated a vision — a vision of what work should look like, of how much of a human life should be consumed by labor, of what kind of society they wanted to inhabit. And they built an institution that corresponded to that vision. The institution was imperfect. It was contested. It took decades to spread from Melbourne to the rest of the industrialized world. But it was built, and the building of it transformed the conditions of work for hundreds of millions of people.
The AI transition requires institutional invention of comparable ambition. The scale is different. The technology is different. The speed is different. But the fundamental requirement is the same: someone must look at the existing arrangements and say, this is not adequate. Something new is needed. And then imagine, with sufficient clarity and sufficient courage, what the new thing might be.
The imagination has barely begun. The experiments are scattered, underfunded, disconnected from each other. The systemic institutional response that the moment demands is not visible on any policy horizon. And the generation that inhabits the gap between the technology's arrival and the institution's response is paying the cost — in anxiety, in disorientation, in the particular suffering of people who were trained for a world that no longer exists and who must find their way in a world that has not yet been institutionally furnished for their habitation.
Heilbroner spent his career studying the gap. He knew its contours. He knew its costs. And he knew that the gap closes only when the institutional imagination rises to meet the technological reality — not before, never automatically, always through struggle, always imperfectly, always too late for those who needed it most, but eventually, because the alternative, a society permanently unequipped for the forces it has unleashed, is not a stable arrangement. It is a prelude to crisis. And crisis, as Heilbroner understood better than most, is the condition under which institutional imagination finally, belatedly, and at enormous cost, gets to work.
---
In 1974, Robert Heilbroner published An Inquiry into the Human Prospect and answered his own question with the bleakest assessment of his career. The book examined three challenges — environmental degradation, nuclear proliferation, and the population explosion — and concluded that the political and institutional capacity of human civilization was almost certainly insufficient to meet them. The tone was not angry. It was sorrowful — the sorrow of a man who had spent twenty years studying the ingenuity of the worldly philosophers and who now doubted whether ingenuity would be enough.
The book's most famous phrase was also its most unsettling: "the will to survive in a form worthy of survival." Heilbroner was not asking whether humanity would persist as a biological species. He was asking whether it would persist as a civilization capable of the qualities that made civilization valuable — the capacity for justice, for beauty, for the organization of collective life around principles more elevated than mere survival. He was asking, in other words, whether the form would be worthy. And his answer, delivered with the reluctance of a man who wished he could be more hopeful, was: probably not.
Half a century later, the question returns — not because the specific challenges Heilbroner identified have been resolved (they have not) but because a new challenge has arrived that reshapes the terms of the inquiry. Artificial intelligence does not threaten human survival in the way that nuclear weapons or environmental collapse threaten it. The threat is different in character, subtler, and in some ways more insidious, because it operates not on the physical conditions of human life but on the cognitive and moral conditions — on the qualities that constitute the "form" whose worthiness Heilbroner questioned.
The qualities at risk are precisely those that Heilbroner spent his career defending: the capacity for sustained thought, for moral reasoning, for the kind of institutional imagination that previous chapters have examined. These are the qualities that distinguish a civilization from a population — that make the difference between a society that organizes its collective life around principles and a society that merely optimizes its collective output. AI does not eliminate these qualities. It creates conditions under which they atrophy — not through suppression but through disuse, not through prohibition but through the subtler mechanism of making them unnecessary for the daily business of economic life.
The diagnosis has been established across the preceding chapters with, it is hoped, sufficient rigor. Smith's pin factory demonstrated that productive efficiency and human breadth are in tension, and AI has intensified the tension by making efficiency so abundant that breadth becomes a luxury. Marx's machinery question revealed that technology concentrates power, and AI has concentrated it with unprecedented speed and scale. Keynes's prediction failed because the culture could not imagine what to do with freedom, and AI is deepening the failure by making productive busyness even easier and reflective stillness even harder. Schumpeter's gale is blowing at a speed that outstrips institutional adaptation, and the generation caught in the gap is paying the cost.
Against this diagnostic weight, is there ground for something other than Heilbroner's pessimism?
The honest answer — the answer that Heilbroner himself would have demanded — is: yes, conditionally. Not the unconditional optimism of the triumphalist, who sees only the gains and ignores the costs. Not the unconditional pessimism of the elegist, who sees only the losses and ignores the possibilities. A conditional hopefulness — hope that depends on specific conditions being met, that acknowledges the severity of the challenges without conceding defeat, that insists the outcome is genuinely open and genuinely dependent on choices that have not yet been made.
The conditions are three, and each corresponds to a faculty that Heilbroner's work identifies as essential to what he called the human prospect.
The first condition is institutional invention at the speed the moment demands. The pattern of every previous technological transition is clear: the institutional response lags the technological shock, and the gap is the period of greatest human cost. The AI transition has compressed the timescale of the shock while the institutional response operates at its historical pace. The eight-hour day took decades to spread from Melbourne to the rest of the industrialized world. The AI transition cannot wait decades for its institutional equivalent. The demand-side institutions — the educational reforms, the labor framework adaptations, the governance structures that empower citizens rather than merely constraining companies — must be built now, imperfectly, experimentally, with the understanding that imperfect institutions built in time are infinitely more valuable than perfect institutions built too late.
The evidence that such invention is possible — that the institutional imagination can operate at the speed the moment requires — is mixed but not negligible. The experiments documented across the literature on AI and work — the organizational restructuring around integrative teams, the educational innovations that prioritize questioning over answering, the governance frameworks emerging in multiple jurisdictions simultaneously — are real institutional inventions, and they are occurring at a pace that exceeds the institutional response to previous technological transitions. They are also occurring in a fragmented, disconnected, inadequately resourced way that suggests the systemic response is not yet commensurate with the systemic challenge. The experiments exist. The system that would connect them, scale them, and ensure that their benefits reach the people who need them most does not yet exist.
The second condition is a vision of what the productive capacity is for. This is Heilbroner's deepest contribution to the understanding of the AI transition — the insistence that the material capacity of a society is only as valuable as the vision that directs it. A society that possesses extraordinary productive capacity but no coherent vision of what the production is for will channel every gain into more production, as Keynes predicted and the Berkeley researchers confirmed. The AI transition will serve human flourishing only if the society that deploys it can articulate, with sufficient clarity and conviction, what human flourishing means — and this articulation cannot be delegated to economists, technologists, or AI systems. It is a cultural and political achievement, requiring the kind of moral seriousness that Heilbroner brought to every page of his work.
The contest between visions — the triumphalist vision, the elegist vision, the builder's vision — is ongoing, and its outcome is not determined. What can be said, on the basis of the evidence examined across these chapters, is that the builder's vision — the vision that embraces the technology while insisting on institutional structures that distribute its benefits and protect its most vulnerable subjects — is both the most difficult and the most adequate to the complexity of the moment. It holds contradictory truths simultaneously. It refuses the comforts of simple narratives. It demands sustained engagement with a situation that offers no easy resolution.
The third condition is the preservation of the qualities that make the form worthy. This is the condition that connects the economic analysis to the larger question that Heilbroner posed in 1974. The qualities at stake — the capacity for sustained thought, for moral reasoning, for genuine questioning, for the kind of institutional imagination that builds new structures when old ones fail — are not luxuries. They are the qualities that distinguish a society capable of self-governance from a society that merely administers itself. AI can administer. It can optimize, allocate, coordinate, produce. It cannot — as far as anyone can currently determine — wonder why the administration matters, or whether the optimization serves a purpose worthy of the effort, or whether the production is making anyone's life genuinely better.
These questions belong to consciousness — to the specific, irreplaceable, agonizingly finite capacity of human beings to care about things, to weigh competing goods, to choose among possible futures on the basis of values that cannot be reduced to utility functions. Heilbroner understood, as few economists have, that this capacity is not merely instrumentally valuable — useful for making good policy — but constitutively valuable — part of what makes human civilization worth preserving. A society that loses the capacity for moral reasoning has not merely lost a tool. It has lost a piece of its humanity. And the loss, once sustained, is not easily reversed.
The AI transition places this capacity under pressure — not through attack but through neglect. When the machine can answer any question, the incentive to develop the capacity for questioning diminishes. When the tool can produce any output, the incentive to develop the judgment that distinguishes worthy outputs from trivial ones erodes. When efficiency is abundant, the tolerance for the inefficiency that genuine thought requires — the slow, messy, friction-rich process of working through a problem without knowing in advance where the work will lead — contracts.
This is not a speculative concern. It is observable now, in the educational institutions that are struggling to teach questioning to students who have access to unlimited answers, in the workplaces where productivity metrics are crowding out the reflective space that judgment requires, in the cultural conversation where the speed of AI-generated content is overwhelming the capacity for careful, critical engagement with that content.
Heilbroner's human prospect was, in 1974, a question about whether civilization could survive the physical threats it had created. In 2026, the question has a different form but the same moral weight: whether civilization can preserve the cognitive and moral qualities that make it worth preserving, in the face of a technology that makes those qualities economically unnecessary while leaving them humanly indispensable.
The answer — Heilbroner would insist on this point with the same quiet stubbornness he brought to every argument — is not determined. It depends on choices. On the institutional imagination that designs new structures for a world the old structures were not built to accommodate. On the political will that insists the gains of productivity be distributed rather than concentrated. On the cultural vision that articulates what all the productive capacity is actually for. And on the individual human beings who must decide, every day, whether to exercise the capacities that make them more than instruments of production — the capacity to question, to judge, to care about things that do not appear on any balance sheet.
Heilbroner's final works — written in the last years of a career that spanned the second half of the twentieth century — returned repeatedly to the observation that the economic problem is never merely economic. It is, at its deepest level, a question about what kind of life a society considers worth living, and whether the material arrangements it constructs serve or undermine that life. The AI transition has made this question inescapable. The tools are powerful. The productivity is real. The institutional and cultural choices that will determine whether the power and productivity serve human flourishing or erode it are being made now — in the absence of certainty, under conditions of genuine apprehension, by people who must act before the evidence is complete.
The worldly philosophers — Smith, Marx, Keynes, Schumpeter, and their interpreter Heilbroner — cannot tell this generation what to do. They can tell it what to watch for. The concentration of gains and the distribution of costs. The substitution of busyness for meaning. The atrophy of institutional imagination under the pressure of technological speed. The temptation to let the market sort it out, when the history of every previous transition demonstrates that the market, left to its own devices, sorts in favor of those who already possess the most.
The human prospect, reconsidered in the light of the AI transition, is neither as bleak as Heilbroner feared in 1974 nor as bright as the triumphalists of 2026 proclaim. It is open — genuinely, terrifyingly, exhilaratingly open — and the form it takes will be determined by the choices that human institutions, human cultures, and individual human beings make in the years immediately ahead. The will to survive in a form worthy of survival is not a given. It is an achievement. And the achievement begins with the recognition — Heilbroner's lifelong insistence — that the economic arrangements we inhabit are not natural laws but human constructions, and that what has been constructed can be constructed differently.
The worldly philosophers have spoken. The question now is whether anyone is listening carefully enough to hear what they said — and imaginative enough to build what they knew was needed.
---
The phrase that kept stopping me was not a phrase at all. It was a number: seventeen years. The average life expectancy of a textile worker in Manchester in the 1840s, living in the gap between the technology's arrival and the institution's response.
Seventeen years. A child grows up in that gap. A life is lived and ended in that gap. And the worldly philosophers, writing from their studies, could see the pattern with perfect clarity and still could not close the gap faster. That is the thing about Heilbroner's work that will not let me rest — not the elegance of the framework or the rightness of the diagnosis, but the stubborn, maddening fact that seeing the pattern has never been sufficient to prevent its repetition.
We are in the gap again. The forces have been unleashed. The agencies of control are rudimentary. Heilbroner wrote those sentences in 1967, about computers in general, and they read as though someone wrote them this morning about Claude Code. The precision of the recurrence is what unsettles me. Not that history repeats — everyone knows that — but that it repeats with the same lag between the shock and the response, and the lag is measured in human lives.
The engineers I sat with in Trivandrum are in the gap. The parents at dinner tables asking what to tell their children are in the gap. The senior developer watching her fifteen years of expertise approach commodity pricing is in the gap. And the institutions that might cushion the fall — the reimagined educational systems, the labor frameworks designed for judgment-work rather than time-work, the governance structures that empower citizens rather than merely regulate companies — are being designed at the speed of committees while the technology moves at the speed of inference.
What Heilbroner gave me, across these chapters, was not comfort. It was the opposite. He gave me the pattern, laid bare across five centuries of economic history, and the pattern says: the dams always come too late for the first generation. The eight-hour day arrived after the children had already worked the mills. Social insurance arrived after the Depression had already hollowed out the middle class. The institutional imagination, however powerful, however necessary, has never once in the historical record outrun the technological shock it was responding to.
And yet.
He also gave me this: the pattern bends. Not automatically. Not inevitably. But it bends because people build — because stonemasons in Melbourne refused to accept that the length of the working day was a natural law, because reformers looked at existing arrangements and said this is not adequate, something new is needed, because the institutional imagination, however slow, eventually produces structures that the previous generation could not have conceived.
The building is the point. Not the arrival. The building.
The river of intelligence does not consult economists before opening new channels. But economists — the worldly philosophers, the ones who insist that economics is moral philosophy, that every allocation embodies a judgment about what human life is for — can tell us where the current runs dangerous. Where the concentration pools. Where the costs accumulate without anyone noticing, because the gains are so bright that the losses become invisible.
Heilbroner spent his career in that work. The least I can do is continue it — imperfectly, experimentally, with the understanding that imperfect dams built in time are infinitely more valuable than perfect dams built too late.
Seventeen years. The gap between the technology and the institution. The space in which everything is decided.
We are in it now. Build accordingly.
Every major technology in the history of capitalism — the steam engine, the power loom, electrification, the computer — produced a gap between its arrival and the institutional response. In that gap, the human cost was paid. Robert Heilbroner spent five decades studying that gap: who bears the cost, who captures the gains, and why the dams always arrive too late for the first generation. His worldly philosophers — Smith, Marx, Keynes, Schumpeter — each saw a piece of the AI transition a century before it arrived. This book reassembles their partial visions into a diagnostic framework for the most consequential economic transformation since industrialization, and asks whether the institutional imagination can finally outrun the technological shock.

A reading-companion catalog of the 16 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Robert Heilbroner — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →