In the summer of 2025, doctoral student Xingqi Maggie Ye and Associate Professor Aruna Ranganathan of UC Berkeley's Haas School of Business began what would become the most rigorous empirical study to date of AI's effect on work. They embedded themselves in a 200-person technology company for eight months, focusing on observation and qualitative fieldwork. They sat in the offices, attended the meetings, watched the screens, talked to the workers, and documented what happened when generative AI tools entered a functioning organization. Before you read further, it’s important to note that this study predated the Rubicon of the “orange pill” I describe in this book.
Their findings, published in the Harvard Business Review in February 2026, confirmed some of what Han's philosophy predicted. They also complicated it in ways that his framework cannot easily accommodate.
Finding One: AI does not reduce work. It intensifies it. Workers who adopted AI tools worked faster, took on more tasks, and even expanded into areas that had previously been someone else's domain. The boundaries between roles blurred, too. Designers started writing code. Delegation decreased. Even casual experimentation with AI led to a “meaningful widening of job scope” for users, according to the researchers, as testing a tool’s capabilities turned into relying on it to do work that was otherwise out of scope.
There was always more to do, and the tool was always ready to help do it. That mentality felt refreshing at first, but its costs accrued over time.
Finding Two: Work seeps into pauses. The researchers documented a pattern they called "task seepage," the tendency for AI-accelerated work to colonize previously protected spaces. Employees were prompting on lunch breaks, sneaking requests in during meetings, even filling gaps of a minute or two with AI interactions.
Those minutes had served, informally and invisibly, as moments of cognitive rest. Now they were nonexistent. A person who would never have opened their laptop in a waiting room found themselves plugging away with an AI platform on their phone in the elevator, not because anyone asked them to, but because the tool was there and the idea was there and the gap between impulse and execution had shrunk to the width of a text message.
The internalized imperative to achieve, what Han calls auto-exploitation, converted possibility into action with a reliability that no manager could match.
Finding Three: Multitasking became the norm, and it fractured attention. AI could handle time-intensive, low-effort tasks in the background, and it could co-create code, and it could provide alternative solutions to problems, and it could do all those things while its user was working on something else. But the human in the loop still needed to keep an eye on everything, which led to “a sense of always juggling, even as the work felt productive.”
Again, the consequences were unclear at first. But over time, the habit of filling the workday with more tasks, even AI-assisted ones, led employees to put more pressure on themselves as they tried to keep all those balls in the air at once.
The Berkeley data gives empirical specificity to Han’s concern: self-exploitation through internalized achievement pressure. The workers were not being forced to work more. They were choosing to. The tools made more work possible, and the internal imperative converted that possibility into compulsion.
The Berkeley study measured behavior. Hours worked. Tasks completed. Boundaries crossed. Self-reported burnout. These are real measurements of real phenomena, and I am not dismissing them.
But the study did not clarify whether the additional work was better or worse than the work it replaced. It did not distinguish between work that was trivial, more of the same mechanically expanded to fill the available hours, and work that was genuinely new: higher-level problems, more ambitious projects, the kind of cognitive challenge that AI had unlocked by removing the implementation bottleneck. Both show up as "more work" in a study that measures hours. Only one of them is pathological.
The study did not answer whether workers found their AI-augmented work more or less satisfying than what came before. A person can be exhausted by work they find deeply satisfying. Any new parent knows this. Any emergency room doctor. Any artist in the final weeks before a deadline. Exhaustion is a signal worth heeding. It is not a verdict on the value of what produced it, and this broader question was likely outside the scope of the Berkeley study altogether.
The study couldn’t measure what disappeared, either. The cognitive activities that AI replaced included both drudgery and depth, and from the outside they are indistinguishable.
Consider one of my engineers in Trivandrum. Before Claude, she spent roughly four hours a day on what she called "plumbing": dependency management, configuration files, the mechanical connective tissue between the components she actually cared about. That plumbing was tedious. She did not miss it. But mixed into those four hours were also the moments when something unexpected happened in the configuration, something that forced her to understand a connection between systems she had not previously grasped.
Those moments were rare. Maybe ten minutes in a four-hour block. But they were the moments that built her architectural intuition, the sense of how systems fit together that no documentation could teach.
When Claude took over the plumbing, she lost both the tedium and the ten minutes. The tedium she was glad to lose. The ten minutes she did not know she had lost until months later, when she realized she was making architectural decisions with less confidence than she used to and could not explain why.
The study could not distinguish between drudgery-removal and depth-removal, because from the outside, a person doing less grunt work and a person losing access to formative struggle look exactly the same: someone spending less time on tasks they used to do.
The Berkeley researchers themselves proposed a dam. They called it "AI Practice": structured pauses built into the workday, sequenced rather than parallel work, protected time for human connection that cannot be optimized away, behavioral training alongside technical training.
I have been trying to build that dam with my team. After Trivandrum, the engineers were faster, bolder, reaching into domains that used to belong to other teams. The reclaimed time did not stay reclaimed, though. Sometimes it filled instantly with strategic work that mattered: a new product capability we could not have attempted before, or a rethinking of our Station audio architecture that had been on the backlog. Those were the good days. But more often, the time filled with whatever tasks happened to be available. Another feature request. Another optimization pass. “Just one more prompt,” followed by “just one more prompt,” with no one noticing that the foot was still on the gas, no matter how much AI was helping.
The difference between strategic thinking and task-filling was not always visible to the people doing the work. Both feel the same when the tool makes everything frictionless.
The pull of the tool is real, and organizational culture rewards visible productivity more naturally than it rewards the invisible work of reflection.
When electricity arrived in factories in the early twentieth century, the immediate effect looked remarkably like what the Berkeley researchers found a century later. Workers worked faster. They took on more. Electric lights made night shifts easier. The electric motor made continuous production feasible. The combination meant that people who had previously gone home at sundown now worked until their bodies gave out. The burnout was endemic. The human cost was staggering. Children in the mills. Sixteen-hour days. Factory towns where the concept of leisure did not exist because there was no time that was not the factory’s time.
The labor movement’s response was to build dams: the eight-hour day, the weekend, child labor laws. These dams did not stop electrification. They redirected it. They insisted that the power flowing through the new system had to leave room for the humans inside it.
Electricity was, and remains, an expansion of capability and possibility that reshaped the standard of living for hundreds of millions of people. But the transition was painful. A society reorganizing itself around a new source of power underwent tremendous turbulence. And the dams – the labor laws, the cultural norms, the institutional structures that protected human time – were what turned turbulence into expansion rather than catastrophe.
Email and messaging in the 1990s followed the same arc. Workers were suddenly reachable at all hours. The boundary between office and home dissolved. Studies followed, and warnings of burnout accumulated with the regularity of quarterly reports. Before email, leaving the office meant leaving work. There were barriers between you and your boss and your panicked client and, typically, a mutual understanding that a problem at 9 p.m. could wait until 9 a.m.
When email arrived, the boundary did not collapse overnight. It eroded. First the executives checked from home. Then the managers. Then everyone. The erosion was invisible because each individual check felt voluntary. Nobody was forced to read email at midnight. The internalized imperative did the forcing.
The data on AI shows intensification. It does not show whether that intensification is the early symptoms of a chronic disease or the temporary fever of a body learning to accommodate something powerful and new. That distinction is what separates Han's diagnosis from the full truth. The data alone cannot resolve it. But it can tell us what to watch for: not whether people are working more, because they will, but whether the additional work is making them more capable or merely more exhausted. Whether the freed-up hours are flowing to judgment or filling with menial tasks.
Only time, and the quality of the dams we build in the interim, will answer that question.