In the summer of 2025, doctoral student Xingqi Maggie Ye and Associate Professor Aruna Ranganathan of UC Berkeley's Haas School of Business began what would become the most rigorous empirical study of AI's effect on work to date. They embedded themselves in a 200-person technology company for eight months of observation and qualitative fieldwork. They sat in the offices, attended the meetings, watched the screens, talked to the workers, and documented what happened when generative AI tools entered a functioning organization. Before you read further, it is worth noting that this study predated the crossing of the "orange pill" rubicon I describe in this book.
Their findings, published in the Harvard Business Review in February 2026, confirmed some of what Han's philosophy predicted. They also complicated it in ways that Han's philosophy cannot easily accommodate.
Finding One: AI does not reduce work. It intensifies it. Workers who adopted AI tools worked faster, took on more tasks, and even expanded into areas that had previously been someone else's domain. The boundaries between roles blurred, too. Designers started writing code. Delegation decreased. Even casual experimentation with AI led to a “meaningful widening of job scope” for users, according to the researchers, as testing a tool’s capabilities turned into relying on it to do work that was otherwise out of scope.
There was always more to do, and the tool was always ready to help do it. That mentality felt refreshing at first, but its costs accrued over time.
Finding Two: Work seeps into pauses. The researchers documented a pattern they called "task seepage," the tendency for AI-accelerated work to colonize previously protected spaces. Employees were prompting on lunch breaks, sneaking requests in during meetings, even filling gaps of a minute or two with AI interactions.
Those minutes had served, informally and invisibly, as moments of cognitive rest. Now they were gone. A person who would never have opened a laptop in a waiting room found themselves prompting an AI on their phone in the elevator, not because anyone asked them to, but because the tool was there and the idea was there and the gap between impulse and execution had shrunk to the width of a text message.
The internalized imperative to achieve, what Han calls auto-exploitation, converted possibility into action with a reliability that no manager could match.
Finding Three: Multitasking became the norm, and it fractured attention. AI could handle time-intensive, low-effort tasks in the background, it could co-create code, and it could propose alternative solutions to problems, all while its user worked on something else. But the human in the loop still needed to keep an eye on everything, which led to "a sense of always juggling, even as the work felt productive."
Again, the consequences were at first unclear. But over time, the habit of filling the workday with more tasks, even AI-assisted work, led to employees putting more pressure on themselves by trying to keep all those balls in the air at once.
The Berkeley data gives empirical specificity to Han's central concern: self-exploitation through internalized achievement pressure. The workers were not being forced to work more. They were choosing to. The tools made more work possible, and the internal imperative converted that possibility into compulsion.
The Berkeley study measured behavior. Hours worked. Tasks completed. Boundaries crossed. Self-reported burnout. These are real measurements of real phenomena, and I am not dismissing them.
But the study did not clarify whether the additional work was better or worse than the work it replaced. It did not distinguish between work that was trivial, more of the same mechanically expanded to fill the available hours, and work that was genuinely new: higher-level problems, more ambitious projects, the kind of cognitive challenge that AI had unlocked by removing the implementation bottleneck. Both show up as "more work" in a study that measures hours. Only one of them is pathological.
The study did not answer whether workers found their AI-augmented work more or less satisfying than what came before. A person can be exhausted by work they find deeply satisfying. Any new parent knows this. Any emergency room doctor. Any artist in the final weeks before a deadline. Exhaustion is a signal worth heeding. It is not a verdict on the value of what produced it, and this broader question was likely outside the scope of the Berkeley study altogether.
The study couldn’t measure what disappeared, either. The cognitive activities that AI replaced included both drudgery and depth, and from the outside they are indistinguishable.
Consider one of my engineers in Trivandrum. Before Claude, she spent roughly four hours a day on what she called "plumbing": dependency management, configuration files, the mechanical connective tissue between the components she actually cared about. That plumbing was tedious. She did not miss it. But mixed into those four hours were also the moments when something unexpected happened in the configuration, something that forced her to understand a connection between systems she had not previously learned.
Those moments were rare. Maybe ten minutes in a four-hour block. But they were the moments that built her architectural intuition, the sense of how systems fit together that no documentation could teach.
When Claude took over the plumbing, she lost both the tedium and the ten minutes. The tedium she was glad to lose. The ten minutes she did not know she had lost until months later, when she realized she was making architectural decisions with less confidence than she used to and could not explain why.
The study could not distinguish between drudgery-removal and depth-removal, because from the outside, a person doing less grunt work and a person losing access to formative struggle look exactly the same: someone spending less time on tasks they used to do.
The Berkeley researchers themselves proposed a dam. They called it "AI Practice": structured pauses built into the workday, sequenced rather than parallel work, protected time for human connection that cannot be optimized away, behavioral training alongside technical training.
I have been trying to build that dam with my team. After Trivandrum, the engineers were faster, bolder, reaching into domains that used to belong to other teams. The reclaimed time did not stay reclaimed, though. Sometimes it was filled instantly with more strategic work that mattered: a new product capability we could not have attempted before, or a rethinking of our Station audio architecture that had been on the backlog. Those were the good days. But more often, the time filled with additional tasks that happened to be available. Another feature request. Another optimization pass. "Just one more prompt," followed by "just one more prompt," the engineers never noticing that their foot was still on the gas, no matter how much the AI helped.
The difference between strategic thinking and task-filling was not always visible to the people doing the work. Both feel the same when the tool makes everything frictionless.
The pull of the tool is real, and organizational culture rewards visible productivity more naturally than it rewards the invisible work of reflection.
When electricity arrived in factories in the early twentieth century, the immediate effect looked remarkably like what the Berkeley researchers found a century later. Workers worked faster. They took on more. Electric lights made night shifts easier. The electric motor made continuous production feasible. The combination meant that people who had previously gone home at sundown now worked until their bodies gave out. The burnout was endemic. The human cost was staggering. Children in the mills. Sixteen-hour days. Factory towns where the concept of leisure did not exist because there was no time that was not the factory’s time.
The labor movement’s response was to build dams: the eight-hour day, the weekend, child labor laws. These dams did not stop electrification. They redirected it. They insisted that the power flowing through the new system had to leave room for the humans inside it.
Electricity was, and even now is, an expansion of capability and possibility that reshaped the standard of living for hundreds of millions of people. But the transition was painful. A society reorganizing itself around a new source of power underwent tremendous turbulence. And the dams – the labor laws, the cultural norms, the institutional structures that protected human time – were what turned turbulence into expansion rather than catastrophe.
Email and messaging in the 1990s followed the same arc. Workers were suddenly reachable at all hours. The boundary between office and home dissolved. Studies followed, and warnings of burnout accumulated with the regularity of quarterly reports. Before email, leaving the office meant leaving work. There were barriers between you and your boss and your panicked client and, typically, a mutual understanding that a problem at 9 p.m. could wait until 9 a.m.
When email arrived, the boundary did not collapse overnight. It eroded. First the executives checked from home. Then the managers. Then everyone. The erosion was invisible because each individual check felt voluntary. Nobody was forced to read email at midnight. The internalized imperative did the forcing.
The data on AI shows intensification. It does not show whether that intensification is the early symptoms of a chronic disease or the temporary fever of a body learning to accommodate something powerful and new. That distinction is what separates Han's diagnosis from the full truth. The data alone cannot resolve it. But it can tell us what to watch for: not whether people are working more, because they will, but whether the additional work is making them more capable or merely more exhausted. Whether the freed-up hours are flowing to judgment or filling with menial tasks.
Only time, and the quality of the dams we build in the interim, will answer that question.
The clinical reframing of AI's relationship to occupational health: the tool does not cause burnout — it amplifies whatever organizational conditions already exist, rendering sustainable environments…
The analytical frame that reclassifies artificial intelligence from tool upgrade to environmental regime shift — the category of change for which Diamond's framework was designed and to which the…
Coyle's 2026 thesis that AI's primary economic impact operates through corporate reorganization — changes in decision-making processes, resource allocation, and coordination — rather than through…
The Berkeley researchers' prescription for AI-augmented workplaces — structured pauses, sequenced workflows, protected human-only time — reinterpreted through Wenger's framework as the participatory…
The Berkeley researchers' prescription for the AI-augmented workplace — structured pauses, sequenced workflows, protected human-only time, behavioral training alongside technical training — the…
The developmental threshold beyond which AI augmentation impedes expertise—accelerating early stages while preventing the perceptual, judgmental growth that proficiency and mastery require.
The capacity — demanded by the expanded economy of research — to perceive the logical relationships among lines of inquiry and allocate scarce investigative resources across them.
The gradual accumulation of unrecorded coupling decisions that produces accidental system structure—enabled by zero-cost refactoring.
The cognitive trace of an unfinished task that persists in working memory after switching to a new one — contaminating subsequent performance in ways the person cannot detect.
The condition in which the subject exploits herself and calls it freedom — the signature of the enterprise of the self, where the overseer's function is internalized as motivation.
Opportunism directed at the future self—present extraction of productivity at future cost—a transaction requiring governance that individual willpower cannot provide.
The distinction at the heart of the Turing Trap — between AI systems designed to replace human workers (automation) and systems designed to amplify human capabilities (augmentation) — with the same…
Nippert-Eng's foundational concept: the ongoing, active, effortful practice through which individuals construct and maintain the line between work and home — not a psychological fact but a material…
The institutional structures required to direct the AI surplus toward broadly shared welfare — infrastructure, education, labor market policy, governance of AI development, international coordination…
The specific depletion produced by sustained emotional labor under conditions of inadequate replenishment — Hochschild's framework reveals AI's new division of feeling as a burnout machine.
The migration of presence bleed from observable device interactions to purely mental composition — a form of domestic absence that leaves no physical signal to detect.
The specific balancing mechanisms — protected time, institutional limits, cultural norms valuing depth — that serve as thermostats in an AI ecosystem lacking structural self-correction.
The defensive attachment strategy developed by children who learned that reaching out for help was met with rejection or inconsistency — now the dominant adult pattern that AI tools specifically…
Stone's foundational concept for the cognitive state in which the mind scans every channel and settles on none — structurally distinct from multitasking and uniquely intensified by AI.
Toyama's critique of the beaver-dam metaphor in The Orange Pill: dams built by the powerful, in jurisdictions where their power operates, leaving the downstream communities to the unmediated force of…
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The progressive decay of the capacity for sustained, unaided concentration that occurs when practitioners rely continuously on AI assistance — incremental, imperceptible, and grounded in the…
The systematic reduction of worker skill requirements through technological design — not a side effect of automation but frequently its central purpose, documented by Noble across industrial…
The transformation of complex judgment-work into routine supervision—not simplification but a qualitative change in what 'skill' means.
The central diagnostic question Mark's framework poses to AI-augmented work: whether the subjective experience of absorption reported by builders is Csikszentmihalyi's flow — restorative and…
Two states indistinguishable from outside — intense sustained engagement — and neurochemically opposite from within. Flow couples wanting to liking; compulsion runs wanting alone. The same body, the…
The AI-era phenomenon in which seamless conversational interfaces mask rapid serial domain-switching, producing the subjective experience of sustained flow while accumulating the cognitive costs of…
The principle running through every level of Egan's framework — that the difficulty is not a cost imposed on learning but the process through which the relevant cognitive tools are actually built.
The class of affordances — syntactic, diagnostic, dependency, documentation — that the pre-AI software environment offered for the friction-rich, texturally dense engagement through which perceptual…
The systemic counterpart to Segal's individual beaver metaphor — the structural architectures of taxation, labor bargaining, portable benefits, and international coordination that operate at the…
The systematic erasure of technicians, assistants, and workers whose labor makes knowledge systems function—rendered invisible by conventions attributing output to authorities.
Lisanne Bainbridge's 1983 insight that automation does not simply remove the human from a task — it transforms the human's role into monitoring, which humans do badly.
Midgley's signature method — the unglamorous work of crawling under the conceptual house to find where the pipes have gone wrong and everything downstream has been contaminated.
The compulsive engagement pattern produced when the enterprise of the self encounters unlimited productive capability — behavior indistinguishable from addiction, output indistinguishable from…
Freud's 1914 mechanism—traumatic patterns repeat beyond pleasure—applied to builders who cannot stop building past exhaustion, driven by unconscious mastery-seeking rather than flow.
Rosa's central normative concept for the mode of relating to the world in which the subject is genuinely addressed, moved, and transformed by something that exceeds control — the vibrating wire…
The critical design distinction — borrowed from developmental psychology and pressed into service for AI — between tools that support cognitive effort and tools that eliminate it, determining whether…
The vast, inarticulate substrate of understanding that operates beneath conscious awareness and cannot be captured in any specification, no matter how detailed—Polanyi's foundational insight that "we…
The Berkeley researchers' term for the colonization of previously protected temporal spaces by AI-accelerated work — the mechanism through which the recovery windows of pre-AI workflows disappear.
The characteristic figure of Han's achievement society — the worker who has so thoroughly internalized the productive imperative that external coercion has become unnecessary, and for whom rest feels…
The emerging body of 2023-2025 empirical research documenting measurable degradation of professional capability among practitioners who rely heavily on AI tools, precisely as Ericsson's framework…
The cognitive state of perpetual vigilance Stone first observed in 1990s Microsoft executives — now democratized by smartphones and intensified by AI into the operating condition of every knowledge…
Byung-Chul Han's 2010 diagnosis of the achievement-driven self-exploitation that has replaced disciplinary control as the dominant mode of power — and, in cybernetic terms, a social system operating…
The Engels Simulation's structural analogy between Manchester's mill children and the attention-economy children of the AI age — not a comparison of magnitude but of externalized developmental costs…
The extension of Lefebvre's colonization of everyday life into the temporal domain — the structural process by which AI's continuous availability converts the unnamed intervals of the day (the…
Odell's diagnostic frame for the historical sequence by which successive waves of media technology — broadcast, internet, smartphone, AI — have progressively claimed the territory of lived human…
The daily journey between home and workplace reconceived — not as transportation but as the transitional infrastructure that allowed the industrial-era nervous system to shift between domain-selves.
The widening structural gap between the speed of AI capability and the speed of institutional response on behalf of the people the capability affects — the condition under which avoidable suffering…
The elimination by AI of the natural intervals — compile cycles, colleague delays, physical transitions — that once imposed rhythm on the workday and served, invisibly, as the cognitive…
The specific calculation that governs every deployment decision in a competitive market — if five workers can do the work of one hundred, why not just have five? — and the structural reason moral…
Laudan's paradigm conceptual problem of the AI transition: flow states and auto-exploitation are behaviorally indistinguishable, their competing theoretical frameworks make opposed predictions, and…
The cognitive pathology by which humans read AI-generated output's structural confidence as evidence of substantive quality — and the specific failure mode Schein's humble inquiry framework is…
The seamlessly responsive, intuitively designed interaction between human user and AI tool — analyzed by the Gramsci volume as the most advanced political technology for producing consent yet devised.
The organizational failure mode in which a change is successfully implemented while the transition is completely unsupported — producing metrics that rise while people quietly fracture.
The material boundary between work and non-work—enforced by offices, commutes, closing doors—has dissolved into a permeable membrane continuously eroded by tools that follow the worker everywhere, at…
The structural self-concealment of cognitive erosion — the capacity to perceive the loss is the capacity being lost.
Robert Solow's 1987 observation — you can see the computer age everywhere except in the productivity statistics — which Brynjolfsson spent his career resolving into three distinct problems: timing,…
Newport's name for the structural pattern by which every productivity technology — including AI — generates more shallow work in its slipstream than it eliminates, colonizing every freed minute with…
Rosa's formulation of the collective action problem in which every individual's rational response to competitive pressure produces a collective outcome that makes everyone worse off — the prisoner's…
Landes's comparative framework applied to AI: every transformative technology produces a gap between capability and institutions, and who bears the cost of the gap is determined politically.
The sequence of British legislation (1802, 1819, 1833, 1844, 1847) establishing enforceable standards for working conditions, child labor, and hours—arriving decades after evidence warranted but…
Immordino-Yang, Christodoulou, and Singh's 2012 paper that synthesized the default mode network evidence into a claim of startling directness — the brain at rest is the brain at work.
Xingqi Maggie Ye and Aruna Ranganathan's 2026 Harvard Business Review ethnography of an AI-augmented workplace — the most rigorous empirical documentation to date of positive feedback dynamics in…
The roughly thirty-year period between the introduction of electric motors to American factories (late 1880s) and the eventual productivity gains they produced (beginning around 1920) — the canonical…
Edo Segal's February 2026 training session in southern India — twenty engineers each operating with the leverage of a full team — read through Follett's framework as the paradigmatic instance of…