By Edo Segal
The confession I almost cut from this book was the one about the transatlantic flight.
You know the one. I wrote 187 pages in a single sitting. I recognized, somewhere over the Atlantic, that the exhilaration had drained away hours earlier. That what remained was grinding compulsion. That I had confused productivity with aliveness. And I kept typing.
I kept that confession in because honesty demanded it. But I never fully understood what it meant until I spent time inside Juliet Schor's framework. She gave me the diagnosis I could not give myself.
Here is what she showed me: my inability to close the laptop was not a personal failing. It was not a lack of discipline or self-awareness. I had plenty of self-awareness — I described the problem in real time, on the page, while it was happening. The awareness changed nothing. And Schor explains why. The inability to stop is the correct response to an institutional environment that rewards stopping with nothing and continuing with everything. Status. Income. The neurochemical hit of watching ideas materialize. The social validation of posting about the work at hours that signal dedication.
I wrote in *The Orange Pill* that the whip and the hand are the same. Schor adds something I had not seen: the hand holds the whip because the institution put it there.
This matters because every conversation about AI and work eventually arrives at the same hollow prescription. Set boundaries. Practice self-care. Learn to disconnect. These are individual remedies applied to a structural disease. Schor has spent four decades proving, with data, that the disease is institutional — that the mechanisms converting every productivity gain into more work rather than more life are built into compensation structures, status hierarchies, career tournaments, and a cultural narrative that equates intensity with virtue.
AI is the most powerful productivity technology in human history. Keynes predicted that productivity like this would deliver the fifteen-hour workweek. It has delivered the opposite. Schor is the person who can tell you why, with the empirical precision that the moment demands.
This book is not a detour from the argument of *The Orange Pill*. It is the foundation the argument was missing. The dams I called for need engineering specifications. Schor provides them.
The hours I lost over the Atlantic are not coming back. But the hours my children will face — those are still being decided. The institutions that will shape their working lives are being built right now, mostly by people who have not read Schor, mostly by default rather than design.
Read this. Then build better.
— Edo Segal × Opus 4.6
Juliet Schor (1955–present) is an American economist and sociologist whose work has reshaped the study of work, time, and consumer culture over four decades. She earned her Ph.D. in economics from the University of Massachusetts Amherst and taught at Harvard University before joining Boston College, where she is Professor of Sociology. Her 1991 book *The Overworked American: The Unexpected Decline of Leisure* documented the paradox that the most productive society in history had become simultaneously the most overworked, challenging the widespread assumption that technological progress would deliver increasing leisure. Her subsequent works — *The Overspent American* (1998), *Born to Buy* (2004), and *True Wealth* (also published as *Plenitude*, 2010) — extended her analysis to competitive consumption, children's consumer culture, and an alternative economics of sufficiency. A leading researcher on the four-day workweek and co-chair of the Economics for the Future network, Schor has testified before the United States Senate on AI-driven labor displacement and working-time reduction. Her central contribution is the empirical demonstration that the failure to convert productivity into leisure is institutional rather than inevitable — a product of economic structures that can, in principle, be redesigned.
In 1930, John Maynard Keynes sat down to write an essay about his grandchildren. Not literally — he had none — but figuratively: about the generation that would inherit the economic machinery his own generation was building. His prediction was specific and, at the time, perfectly reasonable. Within a century, he wrote, the "economic problem" — the struggle for subsistence that had defined human existence since the first hunter-gatherer picked up a stone tool — would be solved. Productivity would increase so dramatically that the challenge facing his grandchildren would not be scarcity but abundance. Not how to produce enough, but what to do with all the free time. He estimated fifteen hours of work per week, performed mostly out of residual habit, the way a person who has recovered from an illness still reaches for the medicine cabinet.
Keynes was half right. The productivity gains he forecast arrived on schedule, and then some. Output per worker-hour in the United States increased roughly fivefold between 1930 and 2020. The machines he imagined — and many he could not have imagined — did exactly what he predicted. They produced more with less human effort. By every measure of productive capacity, the economic problem was solved several times over. An American worker in 2025 produces in roughly twelve minutes what her 1930 counterpart produced in an hour.
The leisure never came.
This is the paradox that has organized Juliet Schor's career: a four-decade empirical investigation into why the most productive society in human history became, simultaneously, the most overworked. When Schor published *The Overworked American* in 1991, the data was already devastating. Americans were working an average of 163 more hours per year than they had in 1969 — a full month of additional labor grafted onto the calendar despite the most sustained productivity expansion the world had ever seen. The direction of change was precisely opposite to what every economic model predicted. Technology was making work faster, more efficient, more productive. And Americans were responding by working more, not less.
The explanation, Schor argued, was not psychological. It was not that Americans were naturally industrious, culturally driven, or individually compulsive — though all of these descriptions contained fragments of truth. The explanation was institutional. The mechanisms that governed the allocation of productivity gains in the American economy were structurally biased toward more output rather than more time. Employers captured productivity gains as increased production. Compensation systems rewarded hours rather than results. The consumer economy expanded to absorb every increment of income, creating new needs that required new income that required new hours. The cycle was self-reinforcing, and no individual decision could break it, because the cycle operated at the level of institutions, not individuals.
This institutional analysis is what separates Schor's work from the popular literature on work-life balance, burnout, and hustle culture. Advice columnists tell individuals to set boundaries. Therapists help clients manage stress. Productivity gurus sell systems for doing more in less time — which, as Schor's framework predicts, simply creates room for even more work. None of these interventions address the structural machinery that produces overwork as a systemic outcome. Telling an individual to work less in a system that rewards working more is like telling a salmon to swim downstream. The advice is sound in theory and useless in practice, because every instinct in the fish drives it the other way.
The current has only accelerated. Between 1991 and 2025, the institutional sources of overwork that Schor identified intensified along every dimension. The boundary between work and non-work, already porous in the fax-machine era, dissolved entirely with the smartphone. The gig economy created an entire class of workers whose compensation was tied directly to hours of availability, with no institutional floor on rest. The knowledge economy elevated cognitive labor to the status of the most valued form of work, and cognitive labor — unlike factory labor — has no natural stopping point. A factory worker knows when the shift is over because the whistle blows. A knowledge worker's shift is over when she decides it is, which means it is never over, because the work is never done, because thinking is never done, because there is always another email, another revision, another idea that arrives at two in the morning and demands to be pursued.
Then, in the winter of 2025, artificial intelligence crossed a threshold that made every previous productivity gain look modest.
*The Orange Pill* describes the moment with the specific vertigo of someone who lived through it. A twenty-fold productivity multiplier, achieved in a week of training, at a cost of one hundred dollars per person per month. A Google principal engineer watching a machine reproduce a year of her team's work in an hour. Junior developers shipping in a weekend what senior colleagues had quoted six months for. The imagination-to-artifact ratio — Segal's term for the distance between an idea and its realization — collapsing to the width of a conversation.
By the logic of Keynes's 1930 prediction, this should have been the moment. If a fivefold increase in productivity over ninety years was supposed to deliver the fifteen-hour workweek, a twentyfold increase in individual capability over a few months should have delivered something close to a post-work society. The builders described in *The Orange Pill* should have been lounging on beaches, tending gardens, contemplating the meaning of existence with the leisure that their extraordinary tools had finally made possible.
They were not lounging. They were working harder than they had ever worked in their lives.
The Substack post that went viral in January 2026 — "Help! My Husband Is Addicted to Claude Code" — reads, through the lens of Schor's framework, as a field report from inside the paradox. A spouse documenting a partner who had not reduced his hours in response to a dramatic increase in his productivity. He had increased them. The tool was not liberating him from work. It was binding him to work more tightly than any previous tool had, because the work was now more stimulating, more immediately rewarding, more capable of producing the specific neurochemical cocktail that makes productive labor feel like the most important thing a person can do with a human life.
Segal captures the same dynamic in his own experience: writing 187 pages on a transatlantic flight, recognizing that the exhilaration had drained away hours earlier, that what remained was "the grinding compulsion of a person who has confused productivity with aliveness," and continuing to type anyway. The confession is valuable precisely because it comes from someone who understands the tools, who built his career at the frontier, who is not naive about what technology does to the people who use it — and who still could not stop.
Schor's framework explains what individual willpower cannot. The inability to stop is not a failure of character. It is the predictable outcome of an institutional arrangement in which every incentive — financial, social, psychological, professional — points toward more production and none points toward rest. The AI tool did not create this arrangement. The arrangement has been in place for decades, producing overwork at every level of the American economy, from the factory floor to the corner office. What the AI tool did was remove the last natural constraint on the arrangement's operation.
Before AI, friction imposed a ceiling on overwork. The time required to translate an idea into code, to debug a failing function, to coordinate across a team — these were not merely inefficiencies. They were temporal buffers. They imposed a pace on the work that was slow enough for the human nervous system to tolerate. The developer who spent four hours debugging a function was, during those four hours, also resting — not consciously, not productively, but neurologically. The frustration of debugging is a different cognitive mode than the exhilaration of creation, and the alternation between modes provided a rhythm that the body could sustain.
AI removed the friction. The rhythm disappeared with it. What remained was creation without interruption, production without pause, the pure unbroken exhilaration of building at the speed of thought. For an hour, this felt like liberation. For a day, it felt like flow. For a week, it felt like transformation. After a month, the Berkeley researchers found what Schor's framework would have predicted from the start: intensification, task seepage, fractured attention, and a specific grey exhaustion that no amount of exciting output could dispel.
The broken promise is not that AI failed to deliver productivity. It delivered productivity beyond what anyone had forecast. The broken promise is the same one Keynes made in 1930, the same one the automation optimists made in the 1960s, the same one the personal-computer evangelists made in the 1980s, and the same one the internet utopians made in the 1990s: that productivity would translate into freedom. It did not translate, because translation requires a mechanism, and the mechanism does not exist. It has never existed in the American economy. Every previous productivity technology — the assembly line, electricity, the computer, the internet — produced the same non-translation, and every time, the explanation was the same: the institutions that govern the allocation of productivity gains are designed to produce more output, not more leisure.
The pattern does not repeat because humans are incapable of learning. It repeats because the institutional structures that produce it have not been changed, and individual learning cannot substitute for institutional reform. A person who understands the work-spend cycle perfectly — who can diagram it, teach it, write books about it — will still, when placed inside the cycle's institutional machinery, work more hours than she intends, spend more money than she planned, and wonder, at two in the morning, why the liberation she was promised has not arrived.
Schor's work has always carried a second argument alongside the diagnosis: that the outcome is not inevitable. The institutional structures that produce overwork are human constructions, and what humans have constructed, humans can reconstruct. The eight-hour day was not a natural law. It was a political achievement, won through decades of organized labor, legislative struggle, and cultural transformation. The weekend was invented. Overtime pay was legislated. Child labor was abolished. Each of these interventions redirected productivity gains away from the default — more output — and toward a different allocation: more time.
The question that organizes this book is whether the AI moment will produce another round of institutional invention or another round of institutional failure. The productivity dividend is larger than any previous one. The potential time dividend — the hours that could be converted into leisure if the structures existed to convert them — is correspondingly enormous. Estimates from the Autonomy Institute suggest that AI productivity gains could enable twenty-eight percent of the American workforce to transition to a thirty-two-hour week by 2033, and that a less ambitious ten percent reduction in work time is feasible for over seventy percent of workers.
These numbers are not utopian projections. They are calculations based on the observed relationship between AI-driven productivity gains and the hours required to maintain current output. The leisure is available. The question is whether it will be claimed.
The history is not encouraging. Every previous productivity technology has been absorbed by the work-spend cycle rather than converted into free time. But Schor's work has always insisted that the history is not destiny — that the institutional structures can be changed, that the eight-hour day was once as radical a proposal as the four-day week is today, and that the political conditions for institutional reform sometimes arrive precisely when the need is most urgent.
If AI proves to be as transformative as its most serious analysts believe — not the hype merchants, but the researchers measuring actual displacement and actual productivity effects — then the need for institutional reform is more urgent now than at any previous point in the history of the work-time relationship. The speed of AI-driven productivity growth compresses the window for institutional response. Every month without intervention is a month in which the work-spend cycle absorbs more of the potential time dividend, making the eventual intervention harder and more costly. The dams that *The Orange Pill* calls for are not philosophical abstractions. They are labor policy. They are organizational design. They are cultural norms with institutional teeth.
Keynes's grandchildren are here. The productivity he promised has arrived. The leisure has not. The question is no longer whether technology can deliver freedom from overwork. It can. The question is whether the institutions that govern the relationship between productivity and time will be reformed in time to convert what technology offers into what humans actually need.
The oldest broken promise in economics is still waiting to be kept.
---
Juliet Schor's work-spend cycle is not a metaphor. It is a mechanism — as specific and observable as a gear system, with interlocking parts that can be diagrammed, measured, and, in principle, redesigned. The mechanism operates through four stages, each feeding into the next with the reliability of an engine that has been running for a century.
Stage one: productivity increases. A worker produces more per hour than she did last year, last decade, last generation. The increase may come from a better machine, a better process, a better algorithm, or, now, a conversation with an AI that writes her code while she directs.
Stage two: the productivity gain is captured as income rather than time. The worker does not go home early. She produces more, and the additional output translates into additional revenue, some fraction of which reaches her as higher compensation. The fraction is smaller than it used to be — real wages decoupled from productivity in the United States around 1973, and the gap has widened in every subsequent decade — but some increment of income arrives, and it arrives in a context that ensures it will be spent.
Stage three: the income is spent. Not saved, not invested in leisure infrastructure, not used to purchase a shorter workweek. Spent — on goods and services that the consumer economy has been engineered to make desirable, necessary, and status-conferring. The new income enables a new standard of living, which becomes the new baseline, which requires the new income to maintain, which requires the productivity that generated it.
Stage four: the new consumption standard requires the continuation of work at the new intensity. The worker cannot reduce her hours without reducing her income, which would reduce her consumption below the new baseline, which would produce a social and psychological cost — the feeling of falling behind, of losing status, of failing — that most people find intolerable. The cycle is complete. Productivity has increased. Leisure has not. And the worker is now locked into a higher level of output to sustain a higher level of consumption that she did not need before the productivity gain made it available.
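The four stages can be sketched as a feedback loop. The toy model below is my own illustrative simplification — the functional form and every parameter value are invented for this sketch, not drawn from Schor's empirical work — but it makes the cycle's arithmetic concrete:

```python
# Toy simulation of the four-stage work-spend cycle described above.
# Illustrative only: parameters and functional form are invented for
# this sketch, not taken from Schor's data.

def work_spend_cycle(years=30, hours=40.0, productivity=1.0,
                     growth=0.02, spend_fraction=1.0):
    """Each year: productivity rises (stage 1), the gain arrives as
    income (stage 2), a fraction of the gain is spent, raising the
    consumption baseline (stage 3), and hours lock to whatever is
    needed to sustain that baseline (stage 4). Returns final hours."""
    baseline = hours * productivity          # current consumption standard
    for _ in range(years):
        productivity *= 1.0 + growth         # stage 1: output per hour rises
        income = hours * productivity        # stage 2: gain taken as income
        baseline += spend_fraction * (income - baseline)  # stage 3: spent
        hours = baseline / productivity      # stage 4: hours sustain baseline
    return hours

# When every increment of income is spent (spend_fraction=1.0), hours
# never fall, no matter how long productivity compounds. Hours shrink
# only when some of the gain is left unspent.
```

With `spend_fraction=1.0`, thirty years of compounding productivity yield not one hour of leisure — the paradox in miniature. Lowering `spend_fraction` is the arithmetic of Schor's institutional argument: leisure appears only when some mechanism diverts gains away from consumption.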
Schor documented this mechanism across the American economy of the 1970s and 1980s, showing how it operated from the factory floor to the executive suite. The mechanism was class-blind in its basic structure: both the assembly-line worker and the corporate lawyer worked more hours than their counterparts of a generation earlier, though the specific consumption goods that locked them into the cycle differed. For the factory worker, it was the second car, the larger house, the children's brand-conscious wardrobes. For the lawyer, it was the vacation home, the private school tuition, the lifestyle infrastructure that signaled professional success.
What changed in the intervening decades was not the mechanism but its speed. The cycle that once turned on the timescale of years — a productivity gain in one decade producing a consumption adjustment in the next — began turning on the timescale of months, then weeks. The internet accelerated it. Social media accelerated it further. Each acceleration made the cycle harder to perceive, because the lag between productivity gain and consumption adjustment — the window during which a conscious choice to redirect gains toward leisure was at least theoretically possible — shrank toward zero.
Artificial intelligence has now compressed that window to the vanishing point.
Consider what *The Orange Pill* describes. A developer uses Claude Code to build a revenue-generating product over a weekend — a product that, five years earlier, would have required a team of five, a runway of twelve months, and a technical co-founder. The productivity gain is extraordinary. The developer has, in the space of forty-eight hours, produced what previously required thousands of person-hours.
Schor's framework asks a simple question: Where did the gain go?
Not to leisure. Not to a shorter workweek. Not to the contemplative rest that the developer's nervous system required after forty-eight hours of intense cognitive labor. The gain went to more production. The developer who built one product over the weekend was, by Monday, planning the next one. The income from the first product funded the infrastructure — the subscriptions, the hardware, the upgraded AI tier — required for the second. The second product required more hours, which generated more income, which funded more tools, which enabled more projects. The cycle turned faster than it had ever turned, because the friction that once limited its speed — the months of implementation, the weeks of debugging, the days of coordination — had been removed.
The triumphalists whom Segal describes in *The Orange Pill* posted their metrics with the excitement of athletes reporting personal records. Lines generated. Applications shipped. Revenue earned. Zero days off. The numbers were, by every conventional measure of productivity, spectacular.
Schor's framework reads the same metrics differently. Zero days off is not a productivity triumph. It is the signature of a cycle that has accelerated beyond the worker's capacity to interrupt it. The developer who takes no days off is not exercising freedom. She is locked inside a mechanism that has compressed the gap between productivity, income, consumption, and the need for more productivity to a continuous, unbroken loop.
The AI tools themselves have become consumption goods, adding a new and recursive layer to the cycle. The hundred-dollar monthly subscription. The upgraded hardware that runs local models faster. The courses, the tutorials, the community memberships that confer the knowledge required to stay competitive. The premium API access that unlocks capabilities the standard tier does not provide. Each of these purchases is rational within the logic of the cycle — more capability requires more investment, which requires more income, which requires more output — and each tightens the cycle's grip.
What Schor identified in *The Overspent American*, her 1998 analysis of competitive consumption, has migrated from the domain of household goods to the domain of production tools. In the 1990s, reference groups were shifting: Americans were no longer comparing their consumption to their neighbors but to the lifestyles they saw on television, lifestyles several income brackets above their own. The upward comparison drove aspirational spending that could never be satisfied, because the reference point kept moving upward with each season of programming.
The same dynamic now operates in the domain of AI-augmented production. Knowledge workers compare their output not to their local peers but to the AI-augmented achievements they see on social media — the developer who shipped a complete product in a weekend, the designer who generated a hundred variations in an hour, the founder who built a company without a technical co-founder. The upward comparison drives a productive arms race: each professional adopts more powerful AI tools to match the visible standard, which raises the standard further, which requires more sophisticated tools, more training, more hours to master the new capabilities.
Production has become consumption. The tools of production are consumer goods. The outputs of production are inputs to the next round of consumption. And the cycle between them spins faster than at any previous point in economic history, because the friction that once imposed a natural pace — the weeks of learning, the months of building, the years of accumulating expertise — has been compressed into conversations.
Segal identifies this dynamic when he describes the aesthetic of the smooth — the removal of friction from every interface, every transaction, every productive act. Schor's framework reveals the economic consequence of that aesthetic: smoothness feeds the work-spend cycle by making both production and consumption frictionless. When it is easy to build, the builder builds more. When it is easy to buy the tools of building, the builder buys more. The ease is not incidental. It is the mechanism through which the cycle accelerates.
The counterargument is immediate and obvious: the developer who built a product over a weekend chose to do so. Nobody forced her. She was not exploited by an employer or coerced by a supervisor. She was, by every liberal-democratic standard of individual autonomy, free. She chose to build rather than rest, to produce rather than consume leisure, and her choice should be respected as an expression of her values and her agency.
Schor's response to this counterargument has been consistent across four decades: individual choice operates within institutional constraints, and the constraints are doing most of the work. The developer "chose" to build over the weekend in a context where her professional reputation is determined by visible output, where her income depends on the volume of her production, where her social media presence — itself a professional tool — rewards displays of intensity, and where the cultural narrative of her industry identifies rest with failure and production with virtue. Within this context, the "choice" to work is approximately as free as the "choice" to breathe. The constraints do not appear as constraints because they have been internalized — absorbed into the developer's sense of identity, her definition of a well-lived life, her understanding of what it means to be a serious person in a serious profession.
This internalization is what makes the AI iteration of the work-spend cycle particularly resistant to intervention. The factory worker of 1910 knew she was being exploited. The exploitation came from outside — from a boss, a clock, a whistle. The resistance was correspondingly external: unions, strikes, legislation. The AI-augmented knowledge worker of 2026 does not feel exploited. She feels empowered. The intensity is self-generated, self-directed, self-celebrated. The cycle has moved inside, and with it, the possibility of resistance has dimmed, because there is no external antagonist to resist.
Byung-Chul Han diagnosed this internalization as the signature pathology of the achievement society. Schor's contribution is to show that the internalization is not merely psychological. It is institutional. The structures that reward intensity and penalize rest are real, external, measurable structures — compensation formulas, promotion criteria, market expectations, competitive dynamics — that produce the psychology of intensity as their downstream effect. The developer feels empowered because the institutions have made empowerment and overwork indistinguishable. Changing the feeling requires changing the institutions.
The amplifier — Segal's central metaphor for what AI does — amplifies the work-spend cycle with the same fidelity it amplifies everything else. Feed it a system that converts productivity into leisure, and it will accelerate the conversion. Feed it a system that converts productivity into overwork, and it will accelerate that conversion instead. The amplifier does not choose. The institutional architecture chooses, and the amplifier carries the choice further and faster than any previous technology.
The urgent question is not whether AI is productive — it is extravagantly productive — but whether the institutional architecture into which its productivity feeds is capable of converting that productivity into something other than more work. The architecture, as currently constructed, is not. It is designed, at every level, to convert productivity into output. It has been converting productivity into output for a century, through every technological revolution, despite every prediction that this time the leisure would arrive.
This time is no different in its mechanics. It is different only in its speed. The cycle that once turned on the timescale of decades now turns on the timescale of weeks. The window for institutional intervention — the gap between a productivity gain and its absorption by the cycle — has compressed to nearly nothing. What took the labor movement fifty years to achieve in the early twentieth century — the redirection of productivity gains toward reduced work hours — must now be achieved in a fraction of that time, because the cycle is moving faster than any previous institutional response has managed.
The work-spend cycle is the engine. AI is the fuel. The engine will burn whatever fuel is provided, at whatever speed the fuel allows. The question is not about the fuel. It is about the engine's design. And redesigning an engine while it is running at full speed is the specific institutional challenge of this moment — a challenge that no previous generation has faced at this velocity, and that no subsequent generation will face with as much leverage, because the window is closing.
---
There is a peculiar confession embedded in *The Orange Pill* that reveals more about the current moment than any statistic. Segal describes working through a transatlantic flight, recognizing that the exhilaration had drained away hours earlier, that what remained was the grinding compulsion of a person who had confused productivity with aliveness — and continuing to type. He did not close the laptop. He kept writing. He knew, in the moment, that the work had crossed from satisfaction into compulsion. He said so, on the page, in real time. And the knowledge changed nothing.
This is not a failure of self-awareness. It is a demonstration of what happens when a culture has so thoroughly eroded the infrastructure of leisure that even a person who can diagnose the erosion in real time cannot access the alternative. The laptop stays open not because the writer lacks insight but because the culture has made productive intensity the path of least resistance and made everything else — rest, reflection, the slow pleasure of doing nothing in particular — feel like falling behind.
Schor's work provides the structural explanation for what individual psychology cannot. The question is not why Segal kept typing. The question is why closing the laptop felt harder than keeping it open. And the answer is institutional.
Leisure, in its genuine form, is not merely the absence of work. It is a positive condition — a state of engagement with activities that are intrinsically satisfying, temporally generous, and unconnected to the production of economic value. Genuine leisure requires infrastructure: cultural norms that validate non-productive time, social institutions that provide frameworks for non-economic engagement, and — most critically — an internal capacity to experience non-productive time as satisfying rather than threatening.
Each of these requirements has been systematically undermined.
Cultural norms in the American knowledge economy have reached a point where rest is not merely undervalued but suspect. The developer who takes a two-week vacation is assumed to be less committed than the one who ships features through the holiday. The founder who works weekends is celebrated; the one who does not is questioned. The language of rest has been colonized by the language of productivity: "recovery" is rest reframed as preparation for more work; "self-care" is leisure reframed as optimization; "recharging" is downtime described in the vocabulary of batteries, which exist to be discharged in service of the machines they power.
Social institutions that once provided frameworks for non-economic engagement have contracted. Civic associations, religious congregations, neighborhood organizations — the structures that Robert Putnam documented in *Bowling Alone* as the infrastructure of American community life — have declined across every measurable dimension. The platforms that replaced them — social media, online communities, digital networks — are not leisure infrastructure. They are attention-extraction systems that convert non-productive time into productive time for someone else: the platform's shareholders, the advertisers, the algorithmic systems that transform human attention into revenue.
The internal capacity to experience non-productive time as satisfying has been eroded by decades of cultural messaging and institutional pressure. Schor's work on the psychology of overwork shows that the capacity for genuine leisure is not a fixed trait but a skill — one that atrophies with disuse and that the American economy has provided progressively fewer opportunities to exercise. A person who has spent twenty years in a professional culture that equates intensity with value does not suddenly discover the ability to sit quietly on a Sunday afternoon. The muscles of leisure have not been used. They have weakened to the point where the attempt at rest produces not relaxation but anxiety — the specific, modern anxiety of a person who is not producing and cannot shake the feeling that the not-producing is a form of dying.
This is what Schor's framework reveals about productive addiction: it is not a pathology of the individual but a symptom of an institutional failure. The institution that has failed is leisure itself — the cultural, social, and economic infrastructure that would make non-productive time genuinely satisfying, genuinely available, and genuinely free from the implicit threat of professional consequence.
The AI moment intensifies this failure along every dimension. Consider the specific features of AI-augmented work that make it exceptionally difficult to stop.
First, the feedback loop is immediate. In pre-AI knowledge work, the gap between action and result — between writing code and seeing it run, between drafting a proposal and receiving a response — imposed natural pauses. The developer wrote a function, submitted it, waited for compilation, reviewed the output, debugged the errors. Each step introduced a gap, and the gaps, however frustrating, provided moments of cognitive rest. The work had a rhythm: exertion, pause, exertion, pause. The rhythm was not designed for human wellbeing, but it incidentally served it.
Claude Code eliminates the gaps. The developer describes a function, receives the implementation in seconds, reviews it, requests a modification, receives the modification in seconds. The rhythm becomes a continuous flow — exertion without pause, creation without interruption. The flow state that Csikszentmihalyi identified as the optimal human experience has been stripped of the natural interruptions that once prevented it from becoming its pathological twin: compulsion without end.
Second, the work is genuinely satisfying in a way that most previous knowledge work was not. The drudgery has been removed. The tedious translation work — the boilerplate, the dependency management, the debugging of trivial errors — that once consumed most of a developer's day has been absorbed by the tool. What remains is the interesting part: the design decisions, the architectural choices, the creative problem-solving that drew the developer to the profession in the first place. The work is better. The paradox is that better work is harder to stop doing, because the reward signal is stronger. The dopamine cycle that once reset during the tedious intervals now runs continuously, a neurochemical engine that the culture has provided no mechanism to regulate.
Third, the tool is always available. Previous technologies imposed temporal boundaries through their limitations. The office closed. The server went down for maintenance. The collaborator in another time zone went to sleep. Each limitation was, from a productivity perspective, an inefficiency. From a human-wellbeing perspective, each was a dam — a small structure in the flow that created a pool of non-productive time in which the worker could rest, whether she wanted to or not. AI tools do not close. They do not sleep. They do not go offline for maintenance during hours that happen to correspond to the human need for rest. They are available at three in the morning with the same capability they offer at three in the afternoon, and the availability converts every moment of wakefulness into a potential moment of production.
The Berkeley study documented the downstream effects with empirical precision. Task seepage — the colonization of previously protected time by AI-assisted work — was the most significant finding. Workers were prompting during lunch breaks, in elevators, in the minutes between meetings that had previously served, informally and invisibly, as cognitive rest. The researchers observed what Schor's framework would have predicted: freed time was not freed. It was immediately recaptured by the cycle. Each moment of potential rest became a moment of actual production, not because anyone demanded it but because the institutional incentives — the compensation structure, the status hierarchy, the cultural equation of intensity with commitment — made production the default and rest the deviation.
The result is what Schor has called the failure of leisure: a condition in which the institutional infrastructure for non-productive time has eroded to the point where rest is no longer a viable alternative to work. The worker does not choose work over rest. She chooses the only option that the institutional environment makes available, because the alternative — genuine leisure, with its requirement of cultural validation, social infrastructure, and internal capacity — has been systematically defunded.
The AI moment makes the failure visible in a way that previous technologies did not, because the productivity gain is so large that the absence of leisure is no longer deniable. When a developer achieves a twentyfold increase in productivity and responds by working more intensely rather than less, the explanation cannot be that she needs the additional income for survival. The explanation is structural: the institutions that would convert her productivity into leisure do not exist.
Segal's concept of productive addiction names the symptom. Schor's framework diagnoses the disease. The disease is not that individuals are addicted to work. It is that the economy has been designed — not conspiratorially, but structurally, through the accumulation of institutional choices made over a century — to make work the only reliable source of meaning, identity, satisfaction, and social connection, while leisure has been stripped of the institutional support it would need to serve as a genuine alternative.
The prescription follows from the diagnosis. If productive addiction is an institutional failure rather than an individual pathology, then the treatment is institutional rather than therapeutic. The solution is not meditation apps, or digital detoxes, or advice columns about setting boundaries. The solution is rebuilding the infrastructure of leisure: cultural norms that treat non-productive time as genuinely valuable rather than merely tolerated; compensation structures that do not penalize workers who choose to take their productivity gains as time rather than income; organizational practices that protect non-productive time as a legitimate use of hours; and a social infrastructure of non-economic engagement that provides the satisfactions — community, meaning, connection, pleasure — that the workplace has monopolized.
The capacity for genuine rest is the scarcest resource in the AI-augmented economy. Not because human beings have lost the ability to rest, but because the institutions that once supported rest — the Sabbath, the eight-hour day, the weekend, the clear boundary between office and home — have been eroded, in some cases deliberately, by economic forces that profit from their absence. Rebuilding those institutions, or building new ones adapted to the specific conditions of AI-augmented work, is not a luxury. It is the prerequisite for converting the largest productivity gain in human history into anything other than the largest intensification in human history.
The laptop stays open on the transatlantic flight because closing it is, in every institutional sense, harder than keeping it open. Making it easier to close — not through willpower but through institutional redesign — is the work that remains.
---
In the summer of 2025, Xingqi Maggie Ye and Aruna Ranganathan embedded themselves in a two-hundred-person technology company for eight months. They sat in offices, attended meetings, watched screens, and documented what happened when generative AI tools entered a functioning workplace. Their resulting study, published in the *Harvard Business Review* in February 2026, produced three findings that, taken individually, confirmed what anyone paying attention had already suspected. Taken together, through the analytical framework that Schor's four decades of overwork research provides, they describe something more precise and more alarming: the empirical signature of a work-spend cycle that has accelerated past the speed at which institutional responses can form.
Finding one: AI does not reduce work. It intensifies it. Workers who adopted AI tools worked faster, took on more tasks, and expanded into domains that had previously been someone else's responsibility. Designers started writing code. Delegation decreased. The boundaries between roles blurred not because organizational charts were redrawn but because the tool made boundary-crossing trivially easy, and the institutional culture rewarded the crossing.
This finding, stated plainly, should have shattered the most persistent fantasy of the AI productivity narrative: that efficiency gains translate into reduced effort. They do not. They have never translated into reduced effort at any point in the history of labor-saving technology, and the empirical record on this point is so consistent, across so many technologies and so many decades, that its persistence as a fantasy tells us something important about the function the fantasy serves. The fantasy of technology-as-liberation is not a prediction. It is an ideology — a story that justifies the deployment of efficiency-increasing technologies by promising a human dividend that the institutional architecture is designed never to deliver.
Schor documented this pattern in manufacturing, in services, in professional work, across the entire postwar period. When assembly lines got faster, workers did not go home early. They produced more units. When computers accelerated office work, office workers did not gain afternoons of leisure. They processed more paperwork. When email made communication instantaneous, the instantaneity did not produce free time. It produced more communication — an avalanche of messages that expanded to fill every moment the technology had supposedly freed.
The Berkeley study measured the same pattern in the AI era with unusual granularity. Not only did workers produce more; they produced different kinds of more. The "meaningful widening of job scope" that the researchers documented was not merely quantitative — more of the same tasks — but qualitative. Workers were entering domains they had never previously occupied. A designer writing code is not doing more design work. She is doing a fundamentally different kind of work, one that her training did not prepare her for and that her job description does not include. The AI tool made the boundary-crossing possible. The institutional culture made it expected.
This distinction matters because it reveals a mechanism of intensification that previous productivity technologies did not produce at this scale. When an assembly line got faster, the worker did more of the same thing. The intensification was quantitative: more widgets per hour. When AI enters a knowledge workplace, the intensification is qualitative: more kinds of work per person. The developer who also does design. The designer who also writes code. The product manager who also builds prototypes. Each role expands to include the adjacent roles, because the tool makes the expansion frictionless and the culture makes the expansion rewarded.
The result is not merely more work but a specific kind of more work — the kind that fragments attention across multiple domains, that requires constant context-switching, that produces the sensation of always juggling even as the work feels productive. The Berkeley researchers identified this fragmentation as a direct consequence of AI-assisted multitasking: the tool could handle multiple streams simultaneously, but the human in the loop still needed to monitor all of them, producing a cognitive load that increased even as the mechanical workload decreased.
Finding two: work seeps into pauses. The researchers documented a pattern they called "task seepage" — the tendency for AI-accelerated work to colonize previously protected spaces. Workers were prompting during lunch breaks, in elevators, in the gaps between meetings. Minutes that had previously served, informally and without anyone's conscious intention, as moments of cognitive recovery were converted into moments of production.
Schor's analysis of the erosion of temporal boundaries provides the historical context for this finding. The colonization of rest by work is not an AI-specific phenomenon. It has been underway since the first BlackBerry vibrated in a jacket pocket during a family dinner. What the Berkeley study documented is the latest and most efficient iteration of a process that has been accelerating for twenty-five years: the conversion of every moment of human wakefulness into a potential moment of economic production.
The specific mechanism of task seepage in the AI era differs from its predecessors in a way that Schor's framework illuminates. Email colonized rest by imposing obligations — messages that demanded responses, each response generating more messages. The colonization was external: someone else's demand on your time. AI-assisted task seepage is internal. Nobody sends the developer a message during her lunch break demanding that she prompt Claude. She prompts because the tool is available, because an idea occurred to her, because the gap between impulse and execution has shrunk to the width of a text message, and because the institutional culture has provided no norm, no expectation, no structural support for the proposition that a lunch break should remain a lunch break.
The internalization of the demand is the critical difference. External demands can be resisted through external mechanisms: out-of-office replies, notification settings, organizational policies that prohibit after-hours communication. Internal demands — the ones that arise from the worker's own internalized imperative to produce — cannot be addressed by any external mechanism, because the source of the demand is inside the worker. The only institutional response that can address internalized demand is a culture that validates non-production strongly enough to counterbalance the internal imperative, and that culture does not exist in most AI-adopting organizations.
The researchers' observation that workers were prompting in elevators is significant not because elevator-prompting is inherently harmful but because it reveals the absence of any temporal boundary that the institutional culture treats as inviolable. If the elevator ride is productive time, then no time is non-productive. If no time is non-productive, then rest can occur only when the worker's body physically fails — when exhaustion overrides the imperative to produce, when sleep imposes the boundary that no institution has bothered to establish.
Finding three: multitasking became the norm, and it fractured attention. AI could handle background tasks while the worker focused on foreground tasks, but the human cognitive architecture is not designed for the kind of distributed monitoring this arrangement requires. The worker who is writing a proposal while Claude handles a data analysis in another window is not doing two things at once. She is doing one thing — managing the allocation of her attention between two streams — and the meta-task of attention management consumes cognitive resources that are not available for either stream.
The neuroscience of attention supports the Berkeley finding with decades of evidence: the human brain does not multitask. It switches between tasks, and each switch carries a cost — a measurable degradation in performance, a measurable increase in error rate, a measurable depletion of the cognitive resources required for sustained attention. The AI tool makes multitasking appear frictionless because the tool handles the mechanical execution. But the monitoring — the continuous, low-grade vigilance required to ensure that the tool's output meets the worker's standards — remains a human task, and it is a task that depletes the same cognitive resources as the work itself.
Schor's overwork framework explains why organizations do not address this depletion: because the depletion is invisible to the metrics that organizations use to evaluate performance. A worker who is producing more output across more domains while monitoring multiple AI-assisted streams looks, to any dashboard, like a high performer. The fragmentation of her attention, the erosion of her capacity for sustained thought, the accumulating cognitive debt that will eventually express itself as burnout or error or both — none of these appear on the dashboard. The institutional incentives reward the visible output and ignore the invisible cost.
This is the institutional source of overwork that Schor has identified across every sector of the American economy: the systematic misalignment between what is measured and what matters. Compensation systems measure output volume. Status hierarchies measure visible intensity. Career trajectories measure scope expansion. None of these metrics capture the quality of attention, the depth of understanding, the sustainability of the pace, or the worker's capacity to continue performing at this level next month, next year, next decade. The metrics optimize for the short-term extraction of maximum output from the worker, and the long-term costs — burnout, attrition, the degradation of the judgment that AI-augmented work demands — are externalized onto the worker and onto the future.
The Berkeley researchers proposed an institutional response: structured pauses, what they called "AI Practice." Sequenced rather than parallel work. Protected time for human-only cognitive engagement — the kind of deep, unassisted thinking that AI-augmented workflows tend to crowd out. The proposal is modest, practical, and exactly the kind of micro-level institutional intervention that Schor's macro-level analysis points toward.
The eight-hour day was the macro-level dam that redirected the productivity gains of the industrial era toward reduced work time. AI Practice is the micro-level dam that the researchers are attempting to build inside individual organizations — a structure within the workday that protects the temporal conditions for sustained attention, deep thought, and cognitive recovery.
The parallel between the two is not superficial. Both are institutional interventions designed to counteract the natural tendency of productivity-enhancing technologies to produce intensification rather than liberation. Both require organizational commitment — the willingness to accept a short-term reduction in visible output in exchange for a long-term investment in the sustainability of the workforce. Both face resistance from institutional cultures that reward intensity and penalize restraint. And both are necessary precisely because the alternative — leaving the allocation of time to the individual worker's discretion within a system that rewards maximization — produces the outcome that every previous productivity technology has produced: more work, not less.
The Berkeley data confirms what Schor's framework predicts. AI does not reduce work. The institutional architecture that governs the allocation of productivity gains is designed to produce more output, and AI feeds that architecture with the most powerful fuel it has ever received. The data also reveals the specific mechanisms through which the intensification operates in the AI era — the qualitative expansion of job scope, the colonization of temporal gaps, the fragmentation of attention through parallel monitoring — and each mechanism maps onto an institutional structure that could, in principle, be redesigned.
The gap between the Berkeley researchers' institutional proposal and the institutional reality of most AI-adopting organizations is the gap between what Schor's framework prescribes and what the economy delivers. Structured pauses require organizational cultures that value sustainability over extraction. Results-based evaluation requires metrics that capture quality alongside quantity. Protected cognitive time requires a willingness to leave visible output on the table in exchange for invisible investments in human capability.
None of these requirements are technologically impossible. All of them are institutionally difficult. And the difficulty is not accidental. It is the product of an economic architecture that has been optimized, over a century, to extract maximum output from human labor — an architecture that AI does not challenge but perfects.
The data from Berkeley is a photograph of a system operating exactly as designed. The workers are more productive, more fragmented, more depleted, and more locked into the cycle than they have ever been. The AI tools are working perfectly. The institutions that govern the relationship between productivity and time are working perfectly too — perfectly, that is, at producing the outcome they have always produced. The outcome is just arriving faster than anyone anticipated, and the institutional response is forming more slowly than anyone can afford.
The most seductive concept in *The Orange Pill* is the imagination-to-artifact ratio — Segal's term for the distance between a human idea and its realization in the world. When the ratio is high, only the privileged build: those with capital, teams, institutional backing, years of specialized training. When the ratio is low, anyone with an idea and the will to pursue it can make something real. The history of technology, in Segal's telling, is the history of this ratio declining — from the medieval cathedral that required hundreds of workers and decades of labor to the software product that a single developer can ship in a weekend with an AI assistant and a hundred-dollar subscription.
The narrative is compelling. It is also, from the perspective of work-time analysis, incomplete in a way that changes the conclusion entirely.
Schor's framework asks the question that the imagination-to-artifact ratio, as presented, does not: What happens to ambition when the constraint on realization disappears?
The question is not rhetorical. It has an empirical answer, and the answer is visible in every previous technology that lowered the cost of production. When desktop publishing made it cheap to produce printed materials, the number of printed materials did not stay constant while the hours required to produce them decreased. The number of printed materials exploded. When digital cameras made photography virtually free, the number of photographs taken did not remain stable while photographers enjoyed more leisure. The number of photographs taken increased by more than an order of magnitude — from roughly eighty billion per year in the film era to over a trillion in the smartphone era. When email made communication instantaneous, the volume of communication did not hold steady. It multiplied until the average knowledge worker spent over a quarter of her workday managing messages.
In every case, the pattern is identical. A technology reduces the cost of producing something. The reduced cost does not translate into reduced effort. It translates into more production — dramatically more production — because the constraint that the old cost imposed was also, inadvertently, a constraint on ambition. Remove the cost, and the ambition that was previously contained by it expands to fill every available hour.
This is the temporal mechanism of the work-spend cycle operating at the level of individual creative production. The imagination-to-artifact ratio was not merely a barrier to realization. It was a temporal governor — a mechanism that limited the speed at which ambition could convert into work. The developer who needed six months to build a product had, during those six months, a natural ceiling on the number of products she could attempt. The ceiling was frustrating. It was also protective. It imposed a pace on ambition that was slow enough for human beings to sustain.
Claude Code removed the governor. The developer who can now build a product in a weekend faces no natural ceiling on the number of products she can attempt. The old constraint — implementation time — has been abolished. The new constraint is the number of hours in a day, and that constraint is biological, not technological, which means it can be overridden by stimulants, by willpower, by the specific neurochemical reward of watching ideas become real at the speed of conversation.
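The effect of the removed governor can be put in numbers. A back-of-envelope sketch, using the build times from the text (six months per product before, a weekend after) purely as illustration, not as measured data:

```python
# Illustrative arithmetic only: the build times are the figures the text
# uses (six months per product before AI assistance, a weekend after).
DAYS_PER_YEAR = 365

days_per_product_before = 6 * 30  # six months, roughly 180 days
days_per_product_after = 2        # a weekend

ceiling_before = DAYS_PER_YEAR // days_per_product_before
ceiling_after = DAYS_PER_YEAR // days_per_product_after

print(ceiling_before)  # 2: the old ceiling on products attempted per year
print(ceiling_after)   # 182: a ceiling no sustainable human pace can reach
```

The point of the sketch is the shape of the change, not the exact numbers: the binding constraint moves from implementation time to waking hours, and waking hours are a biological limit, not an institutional one.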
Segal describes the collapse of the ratio as liberation. From the perspective of capability, it is. The developer who was previously locked out of entire domains by implementation friction can now enter those domains freely. The imagination that was previously compressed by the need to translate ideas into code can now express itself directly. The creative energy that was previously consumed by the mechanics of building can now be directed toward the higher-level questions of what to build and for whom.
But liberation from one constraint does not produce freedom. It produces exposure to the next constraint. And the next constraint, in the AI era, is not technical. It is temporal. The developer has been liberated from the friction of implementation into the tyranny of unlimited possibility. She can build anything she can describe. The question is no longer "Can I build this?" but "Which of the infinite things I could build should I build?" And that question, unlike the technical question it replaced, has no natural resolution. There is always another project. There is always another idea. There is always another conversation with the machine that might produce something extraordinary.
The expansion of ambition is not a psychological quirk. It is a structural consequence of removing the temporal governor that friction once provided. Schor's research on the relationship between productivity and work hours shows that this expansion operates with the reliability of a physical law: when the cost of production drops, the volume of production increases, and the increase absorbs every hour that the cost reduction was supposed to liberate.
The Autonomy Institute's estimates — that AI productivity gains could enable twenty-eight percent of the American workforce to transition to a thirty-two-hour week — are calculated on the assumption that output remains constant while hours decrease. The assumption is mathematically valid and institutionally naive. Output does not remain constant when the cost of production drops. Output increases, because the institutional incentives all point toward more output: compensation tied to volume, status tied to visible productivity, career advancement tied to scope expansion, and the internal psychological reward of watching ideas become real at unprecedented speed.
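The constant-output assumption behind such estimates reduces to two lines of arithmetic. A hedged sketch, in which the forty- and thirty-two-hour figures come from the estimate quoted above and the 25 percent gain is simply the multiplier those two figures imply, not a measured value:

```python
# Hypothetical illustration of the constant-output assumption versus the
# output-expanding pattern the chapter describes.
baseline_hours = 40
target_hours = 32  # the Autonomy Institute's thirty-two-hour week

# Productivity gain that makes the shorter week output-neutral:
required_gain = baseline_hours / target_hours - 1
print(f"{required_gain:.0%}")  # 25%

# Scenario A, the assumption: the gain is taken as time, output constant.
hours_freed = baseline_hours - target_hours
print(hours_freed)             # 8 hours per week returned to the worker

# Scenario B, the documented pattern: hours stay at 40, so the entire
# gain shows up as extra output instead of freed time.
extra_output = required_gain
print(f"{extra_output:.0%}")   # 25% more output, zero hours returned
```

The arithmetic is trivial by design: the same gain either shortens the week or expands the output, and which outcome occurs is decided by the institutions, not the mathematics.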
The developer in *The Orange Pill* who built a complete product over a weekend did not then take Monday through Friday off to enjoy the leisure her productivity had earned. She began planning the next product. The productivity gain was not converted into time. It was converted into ambition — into the next project, the next idea, the next conversation with the machine. The imagination-to-artifact ratio dropped. The number of artifacts increased. The hours of work did not decrease. They intensified, because each artifact was now achievable in less time, which meant more artifacts could be attempted, which meant the developer's ambition expanded to fill every available hour and then some.
This is not a failure of the individual developer's time management. It is the predictable outcome of a system that has removed the temporal governor without installing a replacement. The friction that once limited ambition was not merely an obstacle. It was a structure — imperfect, frustrating, often unjust in its distribution of who could build and who could not — but a structure nonetheless, and it performed a function that its removal has left unperformed.
The function was temporal regulation. The friction regulated the pace at which ambition could convert into work. Without it, the conversion is instantaneous, and the pace is limited only by the biological constraints of the human body — constraints that a culture of productive intensity has spent decades training its members to override.
Segal's ascending friction thesis — the argument that removing mechanical friction exposes a higher, more cognitive kind of friction — is partly true and partly evasive. The higher friction is real: the work of judgment, taste, strategic thinking, the question of what deserves to be built. This work is genuinely harder than implementation, and it cannot be automated. But the ascending friction thesis implies that the higher friction will naturally regulate the pace of work in the way that the lower friction once did. It will not, because judgment-work does not impose the same temporal constraints as implementation-work.
Debugging a function takes a measurable amount of time. The developer cannot speed it up beyond a certain point, because the work requires sequential attention to specific errors. The time required for debugging is a temporal floor — a minimum below which the work cannot be compressed. Judgment-work has no comparable floor. The decision about what to build can be made in a second or deliberated for a month. The quality of the decision improves with deliberation, but the institutional incentives do not reward deliberation. They reward speed. Ship fast. Iterate. The market will tell you if you were wrong, and you can course-correct with another conversation with the machine.
The result is that the ascending friction does not impose a pace. It imposes a challenge — a harder kind of work — but not a rhythm. And without rhythm, without the natural alternation between exertion and rest that the old friction inadvertently provided, the work becomes what the Berkeley study documented: continuous, seeping, fragmenting, and ultimately depleting.
The imagination-to-artifact ratio is a measure of capability. Schor's framework reveals that it is also, inadvertently, a measure of temporal protection. A high ratio meant that fewer things could be built, which meant that ambition was constrained, which meant that the hours consumed by production had a natural ceiling. The ceiling was a limitation on who could build and what could be built, and its removal is, in many respects, a genuine good — a democratization of capability that Segal rightly celebrates.
But the ceiling was also a dam. And the dam has been removed without installing a replacement. The water flows faster, and farther, and reaches places it never reached before. Whether that flow irrigates or floods depends on the structures that are built in its path. The imagination-to-artifact ratio has collapsed. The temporal structures that would convert that collapse into leisure rather than intensification have not been built. Until they are, the collapse of the ratio will produce what every previous collapse of production costs has produced: more work, at a faster pace, with less capacity for rest, and with the persistent, increasingly hollow promise that this time, the leisure will arrive.
---
Every productivity gain produces a phantom — a ghost of the hours that could have been freed but were not. Schor's career has been spent tracking these phantoms through the American economy, documenting the gap between the leisure that productivity makes possible and the leisure that institutions actually deliver.
The gap has always been large. In the postwar period, when American productivity roughly doubled, working hours for full-time employees declined by approximately four percent — capturing less than a tenth of the potential time dividend. European economies, operating under different institutional arrangements — stronger unions, more robust labor regulation, cultural norms that valued leisure more explicitly — captured a substantially larger share. By the early 2000s, the average American worked roughly three hundred more hours per year than the average German or French worker, a gap that could not be explained by differences in productivity, which were comparable, but only by differences in institutional design.
The AI productivity gain produces the largest potential time dividend in the history of wage labor. If a twentyfold increase in individual productivity were converted entirely into reduced hours, the implication would be staggering — a forty-hour week compressed to two. No serious analyst expects full conversion. But even partial conversion, even a modest fraction of the potential dividend directed toward leisure, would represent a transformation in the relationship between work and life that no previous technology has delivered.
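The dividend arithmetic running through these paragraphs can be made explicit in a few lines. This is a sketch, not a model: it uses only the illustrative figures already in the text (a postwar doubling of productivity, a roughly four percent decline in hours, a twentyfold AI multiplier), and the round forty-hour baseline is an assumption for illustration.

```python
# A sketch of the time-dividend arithmetic described in the text.
# All figures are illustrative, taken from the surrounding paragraphs.

def potential_hours(current_hours: float, productivity_multiple: float) -> float:
    """Hours needed to produce today's output after a productivity gain,
    if the entire gain were converted into reduced hours."""
    return current_hours / productivity_multiple

# Postwar case: productivity roughly doubled, so a 40-hour week could
# in principle have fallen to 20 hours, a potential 50% reduction.
postwar_potential = potential_hours(40, 2)        # 20.0 hours
potential_reduction = 1 - postwar_potential / 40  # 0.5, i.e. 50%

# The actual decline was about 4%, so workers captured 4/50 = 8% of
# the potential dividend: "less than a tenth."
captured_share = 0.04 / potential_reduction       # 0.08

# AI case: a twentyfold multiplier implies a 40-hour week compressed
# to 2 hours at constant output. Full conversion, which no one expects.
ai_potential = potential_hours(40, 20)            # 2.0 hours
```

The postwar calculation recovers the "less than a tenth" figure: a doubling made a fifty percent reduction possible, and the four percent actually delivered is eight percent of that potential.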
The question is not whether the dividend exists. It does. The question is who captures it.
Schor's framework identifies three possible recipients of any productivity-generated time dividend, and the allocation among them is determined not by technology but by power.
The first recipient is the worker. If the productivity gain is captured by workers, it appears as reduced hours at the same or higher compensation — the outcome that Keynes predicted and that the eight-hour day movement partially achieved. The worker produces the same value in fewer hours. The freed hours become genuine leisure: time that belongs to the worker, unconnected to economic production, available for whatever the worker finds intrinsically valuable.
The second recipient is the firm. If the productivity gain is captured by the firm, it appears as increased output at the same labor cost — the outcome that has dominated the American economy for the past fifty years. The worker produces more, and the additional value flows to the firm as revenue, profit, or reinvestment. The worker's hours do not decrease. They may increase, as the Berkeley study documented, because the increased capability creates institutional expectations of increased output.
The third recipient is the consumer, through lower prices. If competition forces firms to pass productivity gains through to consumers rather than retaining them as profit, the gain appears as cheaper goods and services. This outcome benefits consumers but does not address the work-time question, because cheaper goods feed the consumption side of the work-spend cycle: lower prices enable more consumption, which sustains the income requirement, which sustains the work hours.
The historical record is unambiguous about the default allocation. In the absence of institutional intervention — unions, regulation, cultural norms with enforcement mechanisms — the productivity dividend flows to the firm. Workers do not receive it as leisure. They receive it, if at all, as a marginal increment of compensation that is immediately absorbed by the consumption cycle. The leisure that the productivity gain makes possible remains a phantom — visible in the arithmetic, invisible in the lived experience of the workers who generated it.
*The Orange Pill* provides a case study that illustrates the capture mechanism with unusual precision. Segal describes a boardroom conversation in which the twentyfold productivity multiplier is on the table and the investor's instinct is headcount reduction — the conversion of the productivity gain into margin through the elimination of jobs. If five people can do the work of one hundred, the investor's arithmetic says, why not have five?
Segal chose to keep the team — to capture the productivity gain as expanded capability rather than reduced headcount. The choice is admirable and, within Schor's framework, structurally unusual. The default institutional response to a productivity gain of this magnitude is headcount reduction. Not because managers are cruel or investors are sociopathic, but because the incentive structures of the modern corporation — quarterly earnings pressure, shareholder expectations, competitive dynamics that reward efficiency — point toward cost reduction as the highest-priority use of any productivity gain.
The developer who is not laid off when AI makes her function automatable has not captured the time dividend. She has merely retained her employment. The productivity gain has been captured by the firm, which now receives the output of twenty from the payroll of one. The worker's hours have not decreased. Her workload has, in most documented cases, increased, because the organization expects the AI-augmented worker to produce at the AI-augmented rate. The time dividend has been converted into an output dividend, and the output dividend flows to capital.
This conversion is what Schor's work identifies as the central mechanism of the American overwork problem: the systematic redirection of productivity-generated time dividends away from workers and toward firms. The mechanism is not new. It has operated at every technological transition since the industrial revolution. What is new is the magnitude of the dividend being redirected and the speed at which the redirection is occurring.
The Autonomy Institute's projections — thirty-five million American workers potentially transitioning to a thirty-two-hour week — are conditioned on institutional choices that have not yet been made and that the default trajectory of the American economy will not make. The projections describe what is possible, not what is likely. What is likely, absent institutional intervention, is that the productivity gains from AI will be captured as they have always been captured: as increased output by firms that retain their workforces, as reduced headcount by firms that do not, and as cheaper goods for consumers whose consumption absorbs whatever income the gains generate.
The institutional interventions that could redirect the dividend toward workers are known. Schor has cataloged them across four decades of research. Collective bargaining — the mechanism through which workers historically negotiated for reduced hours — remains the most effective at the organizational level, but union density in the American private sector has fallen below seven percent, and the AI-intensive sectors where the largest productivity gains are occurring are among the least unionized in the economy. Labor regulation — minimum standards for work hours, overtime rules, mandatory rest periods — can impose temporal protections that individual bargaining cannot achieve, but the regulatory apparatus has not been updated to address AI-specific intensification, and the political conditions for such an update are, at best, uncertain.
The four-day workweek movement, which Schor has studied extensively and advocated for publicly, represents the most visible contemporary effort to redirect the productivity dividend toward workers. Pilot programs in Iceland, the United Kingdom, and other countries have produced data suggesting that a four-day week at maintained pay is feasible without productivity loss — that the compressed schedule forces efficiencies that offset the reduced hours. The pilots are promising, but they have been conducted primarily in small to medium-sized organizations with relatively privileged workforces. The extension to large-scale AI-intensive enterprises, to service workers, to the global labor market that AI is creating, remains an institutional challenge of the first order.
Schor testified before the United States Senate in October 2025, arguing that working-time reduction should be a central component of the policy response to AI-driven productivity growth. Her testimony was specific: if AI produces the displacement that serious researchers are forecasting, reduced work hours distributed across the workforce are preferable to concentrated unemployment — a smaller reduction in hours for many workers rather than the elimination of hours for some workers and the intensification of hours for others. The arithmetic is straightforward. The politics are not.
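The work-sharing arithmetic behind this testimony is simple enough to sketch. The numbers below are illustrative assumptions, not Schor's: a hundred-worker firm, a forty-hour week, and a twenty percent displacement of total hours. Spreading the same displacement across everyone happens to yield exactly the thirty-two-hour week that the Autonomy Institute projections describe.

```python
# A sketch of the work-sharing arithmetic behind the testimony.
# The 20% displacement figure is an illustrative assumption.

workers = 100
hours_per_week = 40
displacement = 0.20  # share of total work hours eliminated by AI

remaining_hours = workers * hours_per_week * (1 - displacement)  # 3200.0

# Option A: concentrated unemployment. Some workers lose all hours.
unemployed = int(workers * displacement)   # 20 workers at 0 hours
employed_hours_a = hours_per_week          # the other 80 stay at 40

# Option B: distributed reduction. Every worker keeps a job at fewer hours.
hours_per_worker_b = remaining_hours / workers  # 32.0, a 32-hour week
```

Both options eliminate the same number of hours; they differ only in who absorbs the loss: twenty workers entirely, or every worker by a fifth.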
The time-dividend capture question is not a technical question. It is a political question — a question about the distribution of power between those who own the AI systems and those who work alongside them. The technology does not determine the allocation. The institutions that govern the technology determine the allocation, and those institutions are, at the moment, designed to allocate the dividend to capital.
Redesigning those institutions — building the structures that would convert AI productivity into genuine leisure for the workers who generate it — is the central policy challenge of this technological transition. Every month in which the redesign does not occur is a month in which the default allocation continues, in which the dividend flows to the firm, and in which the phantom leisure that AI makes possible remains precisely that: a phantom, visible in the arithmetic, invisible in the lives of the people who might have claimed it.
---
In the first eight weeks of 2026, a trillion dollars of market value vanished from software companies. The stocks of Workday, Adobe, Salesforce, Autodesk, and Figma dropped by double-digit percentages. When Anthropic published a blog post about Claude's ability to modernize COBOL, IBM suffered its largest single-day stock decline in more than twenty-five years. The market had a name for the reckoning: the SaaSpocalypse. And it had a chart — two curves crossing, one falling, one rising, the intersection projected somewhere around 2027. The falling curve measured traditional software valuations. The rising curve measured the AI market. The crossing point was the Death Cross.
The Death Cross is, in the first instance, a story about the commoditization of code. When Google reports that a quarter of its code is AI-assisted, when industry estimates put the aggregate figure above forty percent and rising, when Anthropic's CEO projects ninety percent AI-written code within months, the implication for companies whose primary product is code is existential. If code can be generated through conversation, the act of writing code is no longer a defensible basis for a business. The moat that protected software companies for decades — the expense and difficulty of producing working software — has been drained.
Schor's work on the economics of abundance provides a framework for understanding what follows the draining of the moat, and the framework points in a direction that neither the panic nor the dismissal of the panic anticipates.
The transition from scarcity to abundance has occurred before, in domains less visible than enterprise software but structurally analogous. When the cost of producing information dropped to near zero with the arrival of the internet, the information economy did not simply continue as before at lower prices. It reorganized around different principles. The scarce resource shifted from information itself — which became effectively unlimited — to the capacity to navigate, curate, and make judgments about information. Search engines, recommendation algorithms, editorial curation — all of these emerged as responses to the abundance, and all of them derive their value not from producing information but from making sense of it.
When the cost of producing music recordings dropped to near zero with digital audio, the music economy did not simply continue as before. It reorganized around a different scarce resource — attention, and the live experience that recordings could not replicate. The transition was brutal for the incumbents who had built their businesses on the scarcity of recordings, and it took more than a decade for new economic models to stabilize. But the long-term outcome was not the death of music. It was the redistribution of value from the production of recordings to the curation of experience.
The Death Cross in software follows the same structural logic. Code is becoming abundant. The value is migrating from the production of code to the activities that code-abundance makes newly scarce: judgment about what software should exist, the institutional trust and data ecosystems that surround working software, and the human capacity to direct AI toward purposes that serve genuine needs rather than manufactured ones.
Schor's plenitude framework — the alternative economic model she developed in *True Wealth* — offers a lens that neither the panicked nor the sanguine readings of the Death Cross provide. Plenitude is an economics of enough: an economy organized not around the maximization of output but around the sufficiency of output combined with the expansion of time, community, and sustainability. In a plenitude economy, the response to abundance is not to produce more of the abundant thing at lower margins, competing in an ever-accelerating race to the bottom. The response is to recognize that the thing has become abundant, stop competing on its production, and redirect human effort toward the things that remain genuinely scarce — things that, by their nature, cannot be produced by machines and cannot be made abundant through technology.
Applied to the Death Cross, the plenitude framework suggests a response that neither layoffs nor frantic pivots can provide. If code is abundant, organizations should stop competing on the basis of code production. The value lies in the ecosystem — the accumulated data, the institutional relationships, the workflow integration, the trust that decades of reliable service have built. These things cannot be generated by Claude in an afternoon. They are slow goods, produced through the patient accumulation of relational capital that no productivity tool can accelerate.
The work-time implications are direct. Building and maintaining ecosystems — relational capital, institutional trust, deep customer understanding — is a different kind of work than producing code. It operates on a different temporal rhythm. It requires presence, patience, the slow development of relationships that cannot be rushed without being damaged. It is, in Schor's terminology, care work applied to the institutional context: the work of tending to relationships, maintaining trust, and cultivating the conditions in which human and institutional needs are understood and met.
Care work and judgment work share a temporal property that distinguishes them from production work: they cannot be accelerated beyond a certain pace without degrading their quality. A therapist cannot double her effectiveness by seeing patients twice as fast. A teacher cannot deepen student understanding by lecturing at double speed. A leader cannot build trust by compressing relationship-development into shorter intervals. These activities have intrinsic temporal requirements — they take the time they take — and no technology can compress them without destroying the thing that makes them valuable.
This temporal property is precisely what makes care work and judgment work resistant to the work-spend cycle. If the valuable work in a post-Death Cross economy is the slow work of ecosystem maintenance, relationship building, and judgment cultivation, then that work imposes its own temporal limits. The developer who shifts from writing code — which AI has made near-instantaneous — to understanding user needs — which requires patient observation, sustained attention, and the slow accumulation of insight — cannot accelerate the new work the way she accelerated the old. The intrinsic pace of the higher-level work creates a natural ceiling on intensity that the lower-level work lacked.
This is the connection between Segal's ascending friction and Schor's plenitude that neither framework, alone, fully articulates. The ascending friction reveals that the work gets harder at a higher level. The plenitude framework reveals that the higher-level work operates on a slower temporal rhythm — a rhythm that, if institutions are designed to respect it, could create the conditions for the time dividend that every previous productivity technology has failed to deliver.
The conditional is critical. The natural pace of judgment work creates the possibility of reduced work hours. It does not guarantee it. The institutional pressures that produce overwork — compensation tied to visible output, status hierarchies that reward intensity, the work-spend cycle that converts every increment of free time into consumption — operate on judgment work just as they operate on production work. A leader whose value lies in judgment can still be pressured into sixteen-hour days by an institutional culture that measures commitment by the clock. The natural pace of the work creates the opportunity. Only institutional redesign can convert the opportunity into reality.
The Death Cross, in Schor's reading, is neither catastrophe nor liberation. It is a structural transition that creates a choice — the same choice that every abundance transition has created, the choice between using the abundance to work less and live more or to work more and compete harder for diminishing returns. The choice has gone the wrong way at every previous transition, not because humans are incapable of choosing leisure but because the institutional structures that would convert the choice into reality have not been built.
The stakes of this particular transition are higher than any previous one, because the speed of AI-driven change compresses the timeline for institutional response. The software companies whose valuations are collapsing today will reorganize within months, not decades. The workforce displaced by code commoditization will need to transition to judgment-based, ecosystem-based, relationship-based work on a timeline that previous technological transitions did not demand. And the institutions that could support that transition — retraining infrastructure, temporal protections, compensation models that reward judgment rather than output volume — are further from adequate than they have been at any previous technological juncture.
The Death Cross is a market signal. What it signals is that the era of code-as-product is ending and the era of judgment-as-product is beginning. Whether that transition delivers the time dividend that code-abundance makes possible — or whether it delivers another round of intensification, another chapter in the century-long story of broken promises about technology and leisure — depends on choices that are being made right now, in boardrooms and legislatures and organizational cultures, by people who may or may not understand what they are choosing.
---
The most dangerous sentence in the contemporary discourse on AI and work is: "I choose to work this hard."
The sentence is spoken sincerely. The developer who works through the weekend with Claude Code is not lying when she says she chose it. The founder who logs 2,639 hours in a year with zero days off is not performing a deception when he describes the experience as the most satisfying of his career. The choice is real in the phenomenological sense: these individuals experience themselves as free agents pursuing work they find meaningful, stimulating, and rewarding.
Schor's four decades of research reveal what phenomenology cannot: the institutional architecture that makes the choice feel free while constraining the range of alternatives to a set in which overwork is the only rational option.
The architecture has five components. Each operates independently. Together, they produce the work-maximizing outcome with the reliability of a machine, regardless of the individual preferences of the people inside the system.
The first component is compensation structure. In the vast majority of American knowledge-work organizations, compensation is tied to output volume, scope of responsibility, or hours of availability — and usually to some combination of all three. The developer who produces more features earns more. The manager who oversees a larger scope receives a higher salary. The consultant who bills more hours generates more revenue. In no case is compensation tied to the sustainability of the work, the quality of the worker's attention, or the worker's capacity to continue performing at a high level over the long term.
AI intensifies this misalignment. When a developer can produce twentyfold more output with AI assistance, the compensation structure rewards her for producing twentyfold more — not for producing the same amount in one-twentieth the time and using the remaining time for rest, reflection, or the cultivation of the judgment that AI-augmented work demands. The productivity gain flows to output. The worker receives whatever fraction of the increased output the compensation structure allocates to her, and she sustains the output to sustain the compensation.
The alternative — compensating the developer for the same output in fewer hours — is structurally available. Results-only work environments, which evaluate performance based on outcomes rather than hours or output volume, have been piloted in multiple organizations with documented success. The four-day workweek trials that Schor has studied in the United Kingdom, Iceland, and other nations demonstrate that output can be maintained or improved at reduced hours when the reduction is accompanied by organizational redesign. The evidence exists. The implementation does not, because the institutional incentives of most organizations are aligned with hour-maximization rather than outcome-optimization.
The second component is the status hierarchy. In knowledge-work culture, visible intensity is the primary signal of commitment, competence, and value. The developer who is online at midnight is perceived as more dedicated than the one who logs off at six. The founder who works through holidays is celebrated as passionate; the one who takes vacations is suspected of insufficient commitment. The status hierarchy does not measure the quality of the midnight work or the dedication of the six-o'clock departure. It measures visibility, and visibility is a function of hours.
AI amplifies the status dynamic by creating new dimensions of visible intensity. The developer who posts about shipping a feature at three in the morning using Claude Code receives social validation — likes, replies, the algorithmic boost that engagement-optimized platforms provide to content that generates reactions. The developer who used Claude Code to ship the same feature at two in the afternoon and spent the evening with her family does not receive the same validation, because the afternoon shipment lacks the narrative of sacrifice that the midnight post conveys.
The status hierarchy operates through social comparison — the same mechanism that Schor identified in *The Overspent American* as the engine of competitive consumption. Just as households in the 1990s compared their material possessions to the lifestyles they saw on television rather than to their neighbors' actual circumstances, AI-augmented knowledge workers compare their productive output to the most visible examples of AI-augmented intensity they see on social media. The comparison is always upward — toward the most extreme examples of productive intensity — and the response is always escalation: more hours, more output, more visible intensity to match the reference point.
The third component is career trajectory. Professional advancement in most knowledge-work organizations is structured as a tournament — a competitive process in which a limited number of higher positions are allocated to the candidates who demonstrate the most commitment, measured primarily by output volume and scope expansion. The tournament structure ensures that any individual who reduces her hours, for any reason, falls behind the competitors who do not. The penalty for choosing leisure over intensity is not immediate termination. It is the slow, cumulative erosion of career prospects — the missed promotion, the passed-over project, the gradual marginalization that makes the choice feel like failure.
AI expands the tournament's scope. When the tool makes it possible for a single individual to operate across multiple domains — the backend engineer who now also builds frontends, the designer who now also writes code — the expectation of cross-domain competence becomes a tournament criterion. The worker who limits herself to a single domain, even if she performs excellently within it, is perceived as less valuable than the one who ranges across multiple domains using AI. The expansion of scope that the Berkeley study documented is not merely a consequence of capability. It is a tournament requirement: the price of remaining competitive in a professional environment where AI has raised the standard for what a single person is expected to produce.
The fourth component is the cultural narrative. American knowledge-work culture operates under a narrative framework in which hard work is not merely a means to an end but a moral good — a marker of character, discipline, and worth. The narrative predates AI by centuries. It has roots in the Protestant work ethic that Max Weber identified as a foundational element of capitalist culture, and it has been reinforced by every subsequent generation of productivity ideology, from the efficiency movement of the early twentieth century to the hustle culture of the early twenty-first.
AI does not challenge this narrative. It perfects it. The narrative has always carried a tension: the moral value of hard work existed alongside the practical reality that much hard work was tedious, repetitive, and unsatisfying. The developer who spent four hours debugging a trivial error was working hard, but the work was not intrinsically meaningful. The moral narrative had to override the experiential reality.
AI resolves the tension by removing the tedium. The work that remains after AI handles the implementation — the judgment, the design decisions, the creative problem-solving — is genuinely satisfying. The moral narrative and the experiential reality align for the first time. Working hard feels good and is good, by every criterion the culture provides. The alignment makes the narrative more powerful than ever, because the usual escape valve — the recognition that the work is grinding and that grinding is a cost — has been sealed shut. The work is not grinding. It is exhilarating. And the exhilaration makes the overwork invisible, because the cultural narrative has no category for work that is simultaneously fulfilling and unsustainable.
The fifth component is the absence of institutional support for the alternative. Each of the previous four components pushes toward overwork. For the system to produce a different outcome, a countervailing force would need to push toward leisure with comparable institutional power. No such force exists in the American knowledge economy. Unions, which historically provided the countervailing force in manufacturing, are effectively absent from the AI-intensive sectors. Labor regulation, which provides a floor in other national contexts, has not been updated to address AI-specific intensification. Organizational policies that protect rest — mandatory disconnection, maximum hours, sabbatical programs — exist in some companies but are the exception rather than the norm, and where they exist, they are often culturally undermined by the status hierarchy that rewards those who override them.
The five components operate as a system. No single component is sufficient to produce overwork. But their combination creates an institutional environment in which overwork is the equilibrium — the stable state toward which the system naturally gravitates and from which any individual deviation is punished by the remaining components.
The developer who reduces her hours faces lower compensation (component one), reduced status (component two), diminished career prospects (component three), cultural suspicion (component four), and no institutional support for her choice (component five). The pressure from each component reinforces the pressure from the others. The rational response, within this institutional environment, is overwork.
Schor's central insight — the one that separates her analysis from the popular discourse on work-life balance — is that the rational response is rational. The developer who works through the weekend is not making an error. She is responding correctly to the institutional incentives she faces. The error is in the institutional design, not in the individual's behavior. Telling her to work less, to set boundaries, to practice self-care, is not merely unhelpful. It is a category error — the application of an individual remedy to a structural problem.
The structural remedy is institutional redesign. Compensation structures that reward outcomes rather than hours. Status norms that celebrate sustainability rather than sacrifice. Career trajectories that do not penalize temporal boundaries. Cultural narratives that make room for the proposition that a person who works less may be wiser, not lazier, than one who works more. And institutional infrastructure — regulation, collective agreements, organizational policies with enforcement mechanisms — that provides the countervailing force without which the five components will continue to produce their default outcome.
AI has not created these institutional structures. They have been producing overwork for decades. What AI has done is remove the friction that once moderated their effects — the natural pace of implementation that imposed a ceiling on how much work the structures could extract from any individual in a given period. The ceiling has been removed. The structures remain. And the structures, operating without the ceiling's restraint, are producing overwork at a pace and an intensity that no previous technology has enabled.
The most dangerous sentence in the discourse is "I choose to work this hard," because it locates the analysis at the wrong level. The choice is real. The freedom that makes the choice meaningful is not, because the institutional architecture has been designed — not conspiratorially, not deliberately, but through the accumulated weight of a century of decisions — to make overwork the only choice that does not carry a penalty. Redesigning that architecture is not an act of paternalism. It is an act of liberation — the creation of conditions under which the choice to rest is as institutionally supported, as professionally viable, and as culturally respected as the choice to work.
Until those conditions exist, the developer will keep typing at three in the morning, and the choice will keep feeling free, and the leisure that AI makes possible will remain what it has always been: a phantom visible in the arithmetic and absent from the lives of the people who generate the productivity that could have purchased it.
The word that recurs most often in economic discourse about AI is more. More output. More capability. More products shipped, more features launched, more revenue generated, more problems solved per unit of time. The word operates as an unexamined good — a direction so obviously desirable that questioning it feels eccentric, the way questioning gravity might feel to someone standing on solid ground.
Schor's career has been organized around a different word: enough. Not as austerity, not as deprivation, not as the hairshirt economics of voluntary poverty. Enough as a design principle — an economic architecture in which the goal is not maximizing output but achieving sufficiency in a way that leaves room for the things that output cannot provide. Time. Relationship. The slow, unmeasurable accumulation of the capacities that make a human life worth living rather than merely productive.
The concept of plenitude, which Schor developed in *True Wealth*, is an economic model built around four principles: reduced work hours, diversified economic activity, investment in sustainability, and the cultivation of self-provisioning — the capacity to produce value outside the formal market economy. Plenitude is not anti-growth in the crude sense that its critics sometimes attribute to it. It is post-growth — an economics that takes seriously the possibility that, beyond a certain threshold of material comfort, additional output does not increase wellbeing and may decrease it, and that the resources consumed by additional output might be better invested in the things that output systematically crowds out.
The AI moment is the most powerful test of the plenitude framework that has ever presented itself, because AI produces the most dramatic increase in productive capability in economic history. If plenitude is viable — if an economics of enough can be constructed under conditions of radical abundance — then the AI moment is when it becomes possible. And if plenitude is not viable, if the institutional and psychological forces that drive maximization are so powerful that even a twentyfold productivity multiplier cannot create the conditions for sufficiency, then the AI moment is when that impossibility becomes undeniable.
The connection between Schor's plenitude and Segal's concept of ascending friction is where the possibility lives, and it is a connection that neither framework, alone, fully articulates.
Segal argues that every significant technological abstraction removes difficulty at one level and relocates it to a higher cognitive level. Assembly language forced programmers to think about memory addresses; compilers abstracted that away, but the programmers freed from memory management did not stop thinking. They thought about operating systems, about databases, about network architecture — problems of a higher order that the lower-level friction had prevented them from reaching. The difficulty did not vanish. It ascended.
The argument is well-established in the history of computing, and Segal extends it convincingly to the AI moment: Claude Code removes the friction of implementation and relocates it to the friction of judgment — the harder, more human question of what should be built and for whom. The ascending friction thesis is an argument about the nature of the work. What it does not address, and what Schor's framework supplies, is the temporal dimension of the higher-level work.
Judgment-work, care-work, and relationship-work operate on temporal rhythms that are fundamentally different from the rhythms of production-work. A surgeon does not become more skillful by operating for sixteen hours straight. The quality of surgical judgment degrades with fatigue; the evidence on this point is extensive and unambiguous. A therapist does not develop deeper insight by seeing patients in continuous succession without intervals for reflection. The therapeutic relationship requires the therapist to process what has occurred, to sit with uncertainty, to allow understanding to form at its own pace rather than at the pace of a scheduling system optimized for throughput.
A leader does not build trust by compressing relationship-development into shorter intervals. Trust is a slow good — a product of repeated interaction, demonstrated reliability, and the accumulation of shared experience that cannot be accelerated without being cheapened. The transactional efficiency that AI brings to communication — faster responses, more articulate emails, more polished presentations — does not accelerate trust. It may decelerate it, because trust requires the specific signals of effort and attention that efficiency optimizes away: the poorly drafted email that shows the sender was thinking in real time rather than polishing through a machine, the pause in a conversation that indicates genuine consideration rather than the instant fluency of a model-generated response.
These temporal properties of higher-level work are not incidental features. They are constitutive. The work is what it is because it takes the time it takes. Compress it, and it becomes a different thing — a simulation of judgment rather than judgment itself, a performance of care rather than care itself, a display of relationship rather than relationship itself.
This is where plenitude and ascending friction converge. If the most valuable work in the AI economy is the slow work of judgment, care, and relationship — the work that ascending friction reveals when the mechanical friction of production is removed — then the temporal requirements of that work create a natural limit on intensity. Not a limit imposed from outside by regulation or collective bargaining, but a limit inherent in the nature of the work itself. Judgment formed in haste is poor judgment. Care delivered at production speed is not care. Relationships built on efficiency are not relationships.
The natural limit creates the possibility of plenitude — an economy in which the most valuable work is done in fewer hours, at a slower pace, with the temporal generosity that the work requires to be done well. The developer who has ascended from implementation to judgment does not need to work eighty hours a week. She needs to work fewer hours at a higher level of cognitive engagement, with protected time for the reflection, the rumination, and the slow accumulation of perspective that judgment requires.
The possibility is real. The institutional obstacles to realizing it are equally real, and Schor's work catalogs them with empirical precision.
The first obstacle is measurement. Organizations measure what they can count, and they can count output more easily than they can count judgment. The developer who ships ten features is visibly more productive than the one who ships three features of higher quality after more deliberation. The metrics that govern compensation, promotion, and status are calibrated to the old economy of production, not the new economy of judgment. Until the metrics change, the institutional incentives will continue to reward production-speed intensity over judgment-speed reflection, regardless of which produces more value.
The second obstacle is competition. Even if one organization restructures around plenitude principles — reducing hours, protecting reflection time, rewarding judgment over volume — it competes against organizations that do not. The competitive dynamics of the market reward speed, and the organization that moves more slowly, even if its decisions are better, risks losing market position to the one that moves faster with worse decisions. The competitive pressure is structural, not individual, and it can only be addressed through coordination — industry norms, regulatory floors, or market mechanisms that reward quality alongside speed.
The third obstacle is the cultural narrative that identifies productive intensity with moral virtue. Plenitude requires a cultural shift — a willingness to believe that working less can be working better, that the person who deliberates is contributing more than the one who ships, that the value of a decision is not diminished by the time it took to make. This cultural shift has precedents. European work cultures, particularly in the Netherlands, Germany, and the Nordic countries, have achieved institutional arrangements that protect leisure without sacrificing productivity. The arrangements are imperfect and under constant pressure, but they demonstrate that the cultural shift is possible. They demonstrate, too, that it is not self-sustaining — that it requires institutional reinforcement, because the competitive pressure toward maximization never relents.
The AI moment makes plenitude both more possible and more urgent than at any previous point in economic history. More possible because the productivity surplus is large enough to sustain reduced hours without reduced output — the arithmetic is unambiguous on this point. More urgent because the alternative — the absorption of the surplus by the work-spend cycle, the conversion of radical productivity into radical intensity — is already underway, and the speed of the conversion makes delayed institutional response increasingly costly.
Schor has framed the choice in testimony, in publications, and in public advocacy: the AI productivity dividend can be captured as reduced hours or as increased output, and the capture depends not on the technology but on the institutional structures that govern its deployment. The four-day workweek is not a fantasy. It is a policy choice — a choice that the technology makes feasible and that only institutions can make real.
The economics of enough is not an economics of less. It is an economics of better — an arrangement in which the extraordinary productivity that AI provides is directed not toward more output but toward the temporal conditions that the most valuable human work requires. Conditions of space, reflection, and the slow cultivation of the judgment that no machine, however capable, can supply.
The ascending friction reveals the work. Plenitude provides the temporal architecture in which that work can be done well. Together, they describe an economy that the AI moment makes possible for the first time — an economy in which the machines handle the production and the humans do the thinking, at a pace that thinking requires, in the hours that thinking demands, and not a minute more.
Whether that economy is built depends on choices being made now. Not in a generation. Not in a decade. Now — because the work-spend cycle is absorbing the surplus at the speed of conversation, and every month without institutional response is a month in which the possibility of plenitude recedes and the reality of intensification advances.
---
The history of work-time reform is a history of interventions that seemed impossible until they were inevitable. The eight-hour day was a radical demand when the National Labor Union first proposed it in 1866. It was a political impossibility through decades of incremental struggle. Then, in 1914, Henry Ford adopted it — not out of benevolence but out of calculation, having discovered that workers who rested produced more per hour than workers who did not — and within a generation it was federal law. The weekend was an invention, not a discovery. It did not exist in any economy until labor movements created it, and the arguments against it — that reduced work time would collapse productivity, that workers would squander the time, that the economy could not bear the cost — were identical in structure to the arguments now deployed against the four-day week.
The arguments were wrong then. The evidence strongly suggests they are wrong now. But being wrong has never been sufficient to prevent an argument from delaying a reform by decades, and decades is time that the AI transition does not provide.
Schor's policy architecture for the AI era operates at three levels — organizational, national, and cultural — and at each level, the principle is the same: build structures that convert productivity gains into time rather than allowing the default institutional mechanisms to convert them into output. The structures are not hypothetical. They have been piloted, tested, and in some cases implemented at scale. What they lack is not evidence but political will, and political will is what this chapter is designed to mobilize.
At the organizational level, the most immediate intervention is the restructuring of compensation around results rather than hours. Results-only work environments — organizational designs in which performance is evaluated solely on the basis of outcomes, with no requirement for minimum hours, physical presence, or temporal availability — have been implemented in organizations ranging from Best Buy's corporate headquarters to midsized technology firms, with documented improvements in both productivity and employee wellbeing. The principle is straightforward: if the organization cares about what gets done, it should measure what gets done and stop measuring how many hours the worker was visible while doing it.
AI makes results-based compensation both more feasible and more urgent. More feasible because AI-augmented workers can produce outcomes at dramatically reduced time expenditure, making the gap between hours worked and results produced wider and more visible than ever. More urgent because the alternative — continuing to measure hours in an environment where a twenty-minute AI-assisted session can produce what previously required a week — creates a perverse incentive to fill the remaining hours with performative busyness: work that exists not to produce value but to satisfy an institutional culture that equates presence with commitment.
The four-day workweek is the most visible organizational intervention, and Schor has studied it more extensively than any other researcher. The evidence from pilot programs conducted in Iceland, the United Kingdom, South Africa, Portugal, and other nations is now substantial. In the UK trial, which involved sixty-one companies and roughly 2,900 workers over six months, the results were consistent: revenue remained stable or increased, employee wellbeing improved across every measured dimension, and at the trial's conclusion, ninety-two percent of participating companies chose to continue the four-day week. The trial was not conducted under AI conditions — it preceded the December 2025 threshold by more than two years — but its findings are directly relevant, because the productivity gains required to maintain output at reduced hours were modest compared to what AI now delivers routinely.
The organizational interventions also include what the Berkeley researchers called "AI Practice" — structured protocols that govern the integration of AI tools into workflows. Sequenced rather than parallel work, protecting the capacity for sustained attention. Protected intervals of human-only cognitive engagement, preserving the deliberative capacity that AI-assisted workflows tend to erode. Mandatory disconnection periods that institutional culture enforces rather than merely recommends. These are not suggestions for individual discipline. They are organizational designs — structures built into the workflow that create temporal dams regardless of any individual worker's inclination to override them.
At the national level, the policy architecture requires updating labor regulation to address the specific conditions of AI-augmented work. Existing overtime regulation, designed for an industrial economy in which "hours worked" was a meaningful proxy for effort and output, is structurally inadequate for a knowledge economy in which a twenty-minute AI session can produce more value than a forty-hour week of manual coding. The regulatory framework needs to address not merely hours but intensity — the cognitive load, the temporal colonization of rest, the erosion of boundaries between work and non-work that the Berkeley study documented.
Schor's Senate testimony in October 2025 laid out the policy case: if AI produces the labor displacement that serious researchers forecast, reduced work hours distributed across the workforce are preferable to concentrated unemployment. The thirty-two-hour week, distributed across the economy, absorbs the displacement that would otherwise produce mass layoffs. The arithmetic is straightforward: if AI enables each worker to produce the same output in fewer hours, reducing hours preserves employment while converting the productivity surplus into leisure. The alternative — maintaining current hours while AI displaces workers who are no longer needed at the old staffing levels — produces the worst of both outcomes: unemployment for the displaced and intensification for the remaining.
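That arithmetic can be made concrete with a toy calculation. The headcount, baseline hours, and productivity multiplier below are illustrative assumptions chosen for round numbers, not figures from Schor's testimony:

```python
# Work-sharing arithmetic: a hypothetical firm of 100 workers, each
# producing 1 unit per hour over a 40-hour week, must sustain the same
# total weekly output after an assumed 25% AI productivity gain.
workers = 100
hours_per_week = 40
output_per_hour = 1.0
total_output = workers * hours_per_week * output_per_hour  # 4000 units/week

multiplier = 1.25  # assumed AI productivity gain
new_rate = output_per_hour * multiplier

# Option A: hold hours constant and shed workers (concentrated unemployment).
workers_needed = total_output / (hours_per_week * new_rate)  # 80 workers

# Option B: hold employment constant and reduce hours (distributed leisure).
new_hours = total_output / (workers * new_rate)  # 32 hours per worker
```

Under these assumptions, the same 25 percent gain can be captured either as twenty layoffs or as a thirty-two-hour week for everyone — the technology is indifferent between the two; only the institutional choice differs.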
Portable benefits — health insurance, retirement savings, paid leave — decoupled from full-time employment are a prerequisite for any work-time reform. As long as essential benefits are tied to full-time employment, reducing hours means reducing benefits, which makes the reduction punitive rather than liberating. Decoupling benefits from hours worked enables a labor market in which reduced hours are a genuine option rather than a professional sacrifice.
Retraining infrastructure must accompany work-time reform, because the AI transition is not merely a quantitative shift — more output per hour — but a qualitative one. The ascending friction that Segal describes means that the work remaining after AI handles implementation is categorically different from the work it replaced. Judgment, strategic thinking, relational capability, the capacity to ask the right question rather than execute the known answer — these skills require different training than the skills they replace, and the educational infrastructure that would provide that training is further from adequate than it has been at any previous technological juncture.
The educational reform is urgent at every level. At the university level, programs designed to produce specialists in narrowing technical domains must be redesigned to produce integrative thinkers — people who can work across domains, synthesize perspectives, and exercise the judgment that AI-augmented work demands. At the secondary level, the emphasis must shift from the transmission of answers to the cultivation of questions — the capacity to identify what matters, to sit with uncertainty, to evaluate competing claims without the aid of a machine that generates plausible responses to any prompt. Schor's research on children and consumer culture, conducted for *Born to Buy*, adds a developmental dimension: the child's capacity for the questioning and judgment that the AI economy values is developed through exactly the kind of unstructured, temporally generous, non-productive time that the culture of intensity erodes.
At the cultural level, the reform required is the deepest and the most resistant to policy intervention. The cultural narrative that equates productive intensity with moral virtue — the narrative that makes the midnight coding session heroic and the Sunday afternoon walk suspicious — is not a policy problem. It is a civilizational inheritance, rooted in centuries of economic theology that identified work with salvation and leisure with sin.
Schor has argued, consistently and with increasing urgency, that this narrative must be confronted directly. Not with a counter-narrative of laziness — the culture will not embrace idleness, nor should it — but with a narrative of sufficiency. A narrative in which the person who works thirty-two hours of high-judgment, high-care, high-attention work and spends the remaining hours in non-market activity — community, family, creative exploration, the slow development of the relational capacities that no AI can replicate — is not less virtuous than the person who works eighty hours of AI-augmented intensity. A narrative in which enough is not a failure of ambition but its highest expression.
The cultural shift has precedents. The environmental movement produced a cultural revaluation of consumption — a shift from "more is better" to "sustainable is better" — that, while incomplete, has genuinely altered the cultural landscape in which economic choices are made. The AI moment requires a parallel revaluation of production — a shift from "more output is better" to "better output at sustainable pace is better" — that would alter the cultural landscape in which work-time choices are made.
The window for institutional response is narrower than at any previous technological transition. The speed of AI-driven productivity growth compresses the adaptation period. Every month without institutional intervention is a month in which the work-spend cycle absorbs more of the potential time dividend, making the eventual intervention harder and more costly. The institutions that could convert AI productivity into genuine leisure — reduced hours, protected rest, decoupled benefits, redesigned compensation, revalued culture — are not being built at the speed the moment requires. The gap between the speed of capability and the speed of institutional response is widening, and the people in the gap — the workers and students and parents adapting in real time without adequate support — bear the cost of the delay.
The AI age presents a choice that is genuinely unprecedented in the history of the work-time relationship. The productivity surplus is large enough to fund a transformation — a shift from an economy organized around the maximization of output to an economy organized around the sufficiency of output and the expansion of time. The transformation is feasible. The evidence is available. The arithmetic is favorable.
What is missing is the decision. The collective, institutional, political decision to capture the dividend as time rather than allowing it to be captured as output. The decision that the eight-hour day represented in its era, that the weekend represented in its era, that every previous expansion of leisure represented: the choice to insist that the productivity belongs not only to those who own the machines but to those whose lives the machines were supposed to improve.
Keynes imagined that his grandchildren would face the problem of too much leisure. He was wrong about the problem, but he was right about the possibility. The possibility has arrived. The machines are more productive than he dreamed. The leisure he predicted is available, in the arithmetic if not yet in the institutions. The question that remains is the question that has remained at every technological juncture for a century: whether the institutions will be built in time to claim what the machines have made possible, or whether, once again, the oldest broken promise in economics will be renewed rather than redeemed.
The dams must be built now. Not because now is convenient — it is not — but because the river is moving faster than it has ever moved, and the distance between now and too late is measured in quarters, not decades. The choice is ours. The clock is not.
---
The hours keep bothering me.
Not the hours I spent building — the hours in Trivandrum, the thirty days to CES, the transatlantic flights where I could not close the laptop. Those hours I understand. They had a purpose, a direction, a product at the end of them. What bothers me are the hours I cannot account for. The ones that disappeared into the space between flow and compulsion, where I was still typing but no longer choosing to type, where the machine was still responsive but I had stopped being present. The hours that were productive by every metric I know how to measure and hollow by every metric I do not.
Schor's framework gave me a name for those hours. They are the time dividend — the leisure that the productivity I was generating should have purchased, and did not, because no structure existed to convert the productivity into rest. The institutions I work inside are designed to absorb every efficiency gain as more output. The culture I inhabit treats intensity as evidence of seriousness. The tools I use are available at three in the morning with the same capability they offer at three in the afternoon, and neither the tools nor the culture nor the institutions have any mechanism to suggest that three in the morning might be a time for sleeping rather than shipping.
What Schor showed me is that my inability to stop was not a personal failing. It was the correct response to an institutional environment that rewards stopping with nothing and continuing with everything — status, income, the neurochemical reward of watching ideas become real, the social validation of posting about the work at hours that signal dedication. The whip and the hand are the same, as I wrote in the book, but Schor adds something I had not fully seen: the hand holds the whip because the institution put it there, not because the hand is weak.
The part of Schor's analysis that I resist most is the part I need most. The four-day week feels impossible from inside my life. Not logically impossible — the arithmetic works, the pilots demonstrate feasibility, the evidence is clear. Impossible in the way that closing the laptop on a transatlantic flight felt impossible: the alternative is available in theory and absent in practice, because every institutional signal I receive says that reducing hours is reducing commitment, and reducing commitment is a luxury that someone at the frontier cannot afford.
But Schor's history tells me that the eight-hour day felt exactly this impossible to the people who eventually won it. The weekend felt impossible. Every expansion of leisure in the history of work-time reform felt impossible until it was achieved, and the arguments against it — that productivity would collapse, that workers would waste the time, that the economy could not bear the cost — were wrong every time. Wrong not because the concerns were trivial but because the institutional redesign that accompanied the reform addressed them.
I wrote in *The Orange Pill* that we are in Stage Four — adaptation. Schor's work tells me what adaptation actually requires. Not individual resilience. Not meditation apps or digital detoxes or advice about setting boundaries. Institutional architecture. Compensation structures that reward outcomes. Organizational designs that protect rest. A cultural narrative capacious enough to hold the proposition that a person who works thirty-two hours of deep judgment-work is contributing more than a person who works eighty hours of AI-assisted production.
The twelve-year-old who asked "What am I for?" deserves an economy in which the answer is not "to produce." She deserves temporal space — hours of boredom, of aimless exploration, of the slow unstructured play that developmental science has shown to be essential for the cognitive capacities the AI age will demand of her. She deserves institutions that protect those hours with the same ferocity that the eight-hour-day movement protected the hours of factory workers a century ago.
The dams I called for in the book have, through Schor's analysis, become specific. They have dimensions, materials, engineering specifications. They are not metaphors. They are labor policy, organizational design, educational reform, and a cultural shift that treats the time dividend as the most valuable product the AI revolution can deliver.
The hours keep bothering me because they should. They are the signal that the institutions have not yet been built. And the signal is getting louder.
Every transformative technology in the last century promised to liberate workers. Assembly lines. Computers. The internet. Each delivered extraordinary productivity. None delivered leisure. The hours that should have been freed were absorbed — by institutions designed to convert every efficiency gain into more output, never more life.
Now AI has crossed a threshold that makes every previous productivity gain look modest. A twentyfold multiplier. Products built in weekends that once took teams and months. The largest potential time dividend in the history of wage labor is sitting on the table. And it is being captured — not by the people who generate it, but by the same institutional machinery that has been converting productivity into overwork for a hundred years.
Juliet Schor has spent four decades diagnosing why the leisure never arrives and building the institutional architecture that could finally deliver it. This book applies her framework to the AI revolution — and reveals that the dams we need are not philosophical abstractions but labor policy, organizational redesign, and a cultural reckoning with the oldest broken promise in economics.
— Juliet Schor

A reading-companion catalog of the 18 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Juliet Schor — On AI uses as stepping stones for thinking through the AI revolution.