In the late 1990s, Roy Baumeister brought subjects into a room smelling of fresh-baked cookies. One group was invited to eat the cookies; the other had to leave them untouched and eat only radishes. Afterward, both groups received an unsolvable puzzle. The cookie-eaters persisted for roughly nineteen minutes; the radish-eaters gave up after about eight. The radish-eaters had spent their willpower resisting the cookies, and the depletion carried over to an unrelated task. The theory — that self-regulation draws from a finite pool — has been debated and refined in subsequent decades, but its core observation has proven durable: saying no costs something, cognitively, and the cost is drawn from a resource that does not instantly replenish. In Nippert-Eng's framework applied to AI, ego depletion is the structural mechanism that converts continuous willpower-based boundary maintenance into predictable collapse.
The phenomenology of ego depletion maps precisely onto the evening of an AI-augmented knowledge worker. Every notification resisted, every prompt postponed, every 'just one more thing' avoided draws from the same reservoir. The engineer who ends her day at 5:30 p.m. and maintains the boundary until bedtime does not perform one act of resistance. She performs eight, ten, fifteen — each one small, each one drawing down the finite resource, until the reservoir is low enough that the next pull succeeds. The failure does not register as dramatic. It registers as 'I'll just check one thing,' and the check becomes an hour, and the hour becomes the evening.
Material supports reduce the depletion cost. A dedicated workspace means the person does not decide, each evening, whether to enter the work domain — the architecture makes the decision. A temporal routine means the decision is made once, at a predictable moment, rather than continuously. A social agreement means the decision is made by the household's norm rather than by the individual's will. Each layer of material support removes some of the depletion burden from the individual reservoir, which means the reservoir lasts longer and the boundary holds.
Without these supports, the individual is alone with the pull. And the pull from AI is qualitatively different from the pull from previous technologies. Email and social media pulled through distraction — the notification, the scroll, the anxious check. The pull was avoidance-based and culturally delegitimized; resisting it had moral support. Claude Code pulls through creation. The pull is generative, approach-based, culturally legitimized. Resisting it feels not like discipline but like voluntary self-diminishment. The depletion is faster and the moral support is absent.
The cumulative effect across weeks and months is the slow, grinding erosion Nippert-Eng's framework predicts. The laptop opens at 10 p.m. instead of midnight. Then at 9. Then during dinner, just for a minute. Then the minute becomes ten. The trajectory is not a failure of character. It is a failure of architecture — the predictable outcome of asking a finite resource to do infinite work.
See Roy Baumeister's self-regulation research, particularly Baumeister et al., 'Ego Depletion: Is the Active Self a Limited Resource?' (Journal of Personality and Social Psychology, 1998). The strong form of the theory has been challenged in replication studies; the weaker but still robust finding is that sustained self-regulation has measurable cognitive costs.
Self-regulation draws from a finite reservoir. Every act of resistance consumes resources that do not instantly replenish.
The cost is cognitive, not metaphorical. It registers as measurable impairment on unrelated subsequent tasks.
Material supports reduce depletion. Architecture externalizes the cost of boundary maintenance into the environment.
AI-pull is approach-based, not avoidance-based. This makes resistance both more depleting and less morally supported.
The failure mode is predictable erosion, not dramatic collapse. The reservoir lowers, the threshold lowers, the boundary moves.