Jonathan Crary — On AI
Contents
Cover
Foreword
About
Chapter 1: The Observer Who Cannot Look Away
Chapter 2: 24/7 and the Abolition of Night
Chapter 3: The Temporal Architecture of Productive Addiction
Chapter 4: Attention as a Managed Resource
Chapter 5: The Elimination of Waiting and the Crisis of Patience
Chapter 6: Continuous Production and the Death of the Pause
Chapter 7: The Temporal Fishbowl
Chapter 8: The Berkeley Data and the Colonization of Micro-Intervals
Chapter 9: Night Work: What the Three A.M. Builder Reveals
Chapter 10: Toward a Politics of Cognitive Rest
Epilogue
Back Cover
Cover

Jonathan Crary

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Jonathan Crary. It is an attempt by Opus 4.6 to simulate Jonathan Crary's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The eleven seconds nearly slipped past me.

I was in an elevator in Barcelona, between floors at Mobile World Congress, and I was prompting Claude. Not because anything was urgent. Because the elevator was taking eleven seconds and the idea was there and the tool was there and eleven seconds felt like enough to start something. I stepped onto the exhibition floor still locked in the conversation, navigating around people without seeing them, a body moving through physical space while attention lived entirely inside a screen.

I had read the Berkeley study on task seepage. I had written about it in *The Orange Pill*. I had described the phenomenon to audiences as though it were something that happened to other people.

Then Crary's framework landed, and I saw the eleven seconds for what they were. Not dead time I had cleverly optimized. Infrastructure I had demolished without noticing it existed.

Jonathan Crary is an art historian. His primary subjects are nineteenth-century optical devices and the politics of perception. On paper, he has nothing to say about Claude Code or large language models or the twenty-fold productivity multiplier I witnessed in Trivandrum. He never wrote a line about generative AI. He did not need to. What he wrote about was something more fundamental: how every technology of perception restructures the perceiver. The camera obscura did not help people see better. It produced a new kind of seeing — and a new kind of seer. The stereoscope did not enhance vision. It exploited the physiology of the eye to construct an experience the viewer mistook for reality.

His insight applies with terrifying precision to this moment. The AI collaborator does not merely help me think. It restructures how I think — what tempo I expect, what silence I can tolerate, what counts as a productive use of eleven seconds in a metal box between floors. The tool met me on my terms, which felt like liberation. Crary's lens reveals the cost hidden inside the gift: when the tool matches your tempo perfectly, you lose the friction that once let you notice the tempo at all.

I did not pick up Crary because someone recommended him. I picked him up because I caught myself prompting in an elevator and could not explain why stopping felt harder than it should. The diagnosis he offers is uncomfortable. It implicates the tools I celebrate and the habits I have built my career around. It does not tell me to stop building. It tells me to see what the building is doing to the builder.

That is why this book exists. Not to replace the argument of *The Orange Pill* but to pressure-test it from a direction I could not see from inside my own fishbowl.

The screen does not go out on its own. Someone has to close it. Crary taught me to notice that I had forgotten how.

— Edo Segal · Opus 4.6

About Jonathan Crary

Jonathan Crary (b. 1951) is an American art historian, theorist of visual culture, and critic of contemporary capitalism. He is Meyer Schapiro Professor of Modern Art and Theory at Columbia University, where he has taught since 1989. His first major work, *Techniques of the Observer: On Vision and Modernity in the Nineteenth Century* (1990), argued that technologies of vision do not merely assist perception but fundamentally restructure the observing subject, tracing this thesis through the optical instruments of the early 1800s. *Suspensions of Perception: Attention, Spectacle, and Modern Culture* (1999) extended the analysis to the history of attention as both a condition of individual freedom and a managed resource of industrial capitalism. *24/7: Late Capitalism and the Ends of Sleep* (2013) became his most widely read work, diagnosing the systematic erosion of temporal boundaries between production and rest under digital capitalism. *Scorched Earth: Beyond the Digital Age to a Post-Capitalist World* (2022) called for a radical break from the internet complex entirely. A founding editor of the journal *Zone*, Crary has produced work spanning art history, philosophy, neuroscience, and political theory, establishing him as one of the most influential critics of how technological environments reshape human consciousness and attention.

Chapter 1: The Observer Who Cannot Look Away

In 1611, Johannes Kepler published his Dioptrice, a treatise on the optics of lenses that would reshape the science of vision. But Kepler's achievement was not merely technical. It was, as Jonathan Crary has argued across three decades of scholarship, a transformation in the very concept of what an observer is. The Keplerian model treated the eye as a mechanical instrument — a passive receptor of light rays, a camera obscura in miniature. The observer was a fixed point, a disembodied eye positioned before a world of objects, receiving images that arrived from without. This model served the emerging scientific culture of the seventeenth century with remarkable efficiency. It also constructed a specific kind of human subject: one who could be disciplined, positioned, and made to attend.

Crary's foundational work, Techniques of the Observer, published in 1990, made an argument that seemed at first to belong exclusively to art history but has proven, over three decades, to be among the most prescient diagnoses of the digital condition. The argument was this: technologies of vision do not merely assist a pre-existing consciousness. They restructure it. The camera obscura did not help people see better. It produced a new kind of seeing — detached, monocular, positioned in a darkened room, separated from the world by the very apparatus that claimed to represent it. When the stereoscope arrived in the 1830s, it shattered the camera obscura model entirely. The stereoscope did not position the observer at a distance from the world. It plunged the observer into an artificially constructed visual field that exploited the physiological mechanisms of binocular vision. Suddenly the observer was not a rational mind receiving images. The observer was a body — manipulable, suggestible, capable of being deceived by an instrument that understood the nervous system better than the nervous system understood itself.

Each technological regime of observation, Crary demonstrated, constructs a subject suited to the economic and political demands of its historical moment. The disciplined, isolated observer of the camera obscura served a mercantile capitalism that required rational, calculable subjects. The physiologically manipulable observer of the stereoscope served an industrial capitalism that required flexible, adaptive workers whose perceptual responses could be studied, measured, and optimized. The observer was never simply looking. The observer was being produced.

This insight, arrived at through meticulous archival work on nineteenth-century visual culture, turns out to be the single most important framework for understanding what artificial intelligence does to the people who use it. Not because AI is a visual technology in the narrow sense, but because Crary identified a principle that operates far beyond the domain of optics: the tools we use to perceive, to think, to create do not leave the perceiver, the thinker, the creator unchanged. They produce a new version of that subject — one whose capacities, limitations, and default modes of attention have been reshaped by the affordances and constraints of the instrument.

The Orange Pill, Edo Segal's account of building with AI in the winter of 2025-2026, describes the arrival of a new instrument with language that makes Crary's framework unavoidable. The book documents a technology that learned to speak human language — not a programming language, not a simplified command syntax, but the messy, half-formed, implication-rich language of ordinary thought. The interface became, for the first time in the history of computing, conversational. And the consequence, documented across twenty chapters of increasingly vertiginous testimony, was that the people who used this tool could not stop using it.

The Google engineer who watched Claude reproduce her team's year of work in an hour. The developers in Trivandrum who experienced a twenty-fold productivity increase in a single week. The author himself, writing a 187-page draft on a transatlantic flight, unable to close the laptop even after the exhilaration had drained into grey compulsion. These are not stories about a useful tool. They are stories about a new regime of observation — a technology that has restructured what it means to attend, to build, to create, and that has produced, in the process, a new kind of observer. One who cannot look away.

Crary's history of the observer reveals a pattern that repeats with remarkable consistency across centuries. Each new technology of perception is initially understood as an enhancement — a way to see more, to see better, to see further. The microscope enhances the eye. The camera enhances memory. The screen enhances access. Only later, sometimes much later, does the restructuring become visible. The microscope did not merely enhance the eye. It produced a scientific subject who trusted instruments more than direct perception — and who was right to do so, which made the restructuring all the more difficult to resist. The camera did not merely enhance memory. It produced a culture that confused documentation with understanding, that mistook having captured an image for having comprehended what the image contained. The screen did not merely enhance access. It produced a subject who experienced the world primarily through representations, for whom the mediated experience gradually became more real, more vivid, more compelling than the unmediated one.

The pattern holds because the mechanism is structural, not incidental. A technology that enhances a human capacity simultaneously makes the unenhanced capacity feel insufficient. The person who has used a calculator experiences mental arithmetic not as a skill but as a limitation. The person who has driven a car experiences walking not as a mode of transport but as an inconvenience. And the developer who has used Claude Code for six months experiences manual debugging not as a craft but as a pathology — an inexplicable refusal to accept an available improvement.

The Orange Pill documents this atrophy with unusual honesty. Segal describes developers who, after months of AI-assisted work, find the idea of writing code by hand "not just tedious but intolerable, as though they have been asked to walk somewhere after learning to fly." The metaphor is revealing. Flying is not merely faster than walking. It is a different relationship to the ground. The person who flies does not simply arrive sooner. They lose their relationship to the terrain — the texture of the path, the effort of the incline, the specific knowledge that only comes from having traversed a landscape step by step.

Crary would recognize this as the characteristic restructuring of every new observational regime. The enhanced capacity is real. The flight is genuine. But the terrain that the flyer no longer touches was never merely an obstacle. It was the medium through which a particular kind of knowledge — embodied, effortful, accumulated through friction — was produced. The observer who uses AI to build does not merely build faster. She enters a new relationship to the process of building, one in which the specific, productive struggle that once constituted the core of the craft has been smoothed into conversational exchange. The building still occurs. The understanding that the struggle once deposited, layer by layer, does not.

What makes the AI instrument different from every previous technology of observation is the quality of its responsiveness. The camera obscura was passive — it presented images but did not respond to the observer's attention. The stereoscope was interactive in a limited sense — it required the observer to position the device and adjust focus. The screen was interactive in a broader sense — it responded to clicks, touches, gestures. But none of these instruments could engage in the kind of sustained, contextual, apparently intelligent dialogue that characterizes an AI collaborator. None of them could hold a conversation.

The conversational quality changes everything. Crary's earlier technologies of observation restructured perception by altering the conditions under which the observer encountered images or information. The AI collaborator restructures perception by altering the conditions under which the observer encounters their own thoughts. Segal describes the experience of articulating a half-formed idea to Claude and receiving back not a literal translation but an interpretation — "a reading, an inference about what I was actually trying to do." The tool does not merely execute instructions. It participates in the formation of the intention that precedes the instruction. It enters the cognitive process at the point where thought is still taking shape, and its contribution at that stage reshapes the thought itself.

This is a more intimate restructuring than anything Crary's nineteenth-century instruments achieved. The stereoscope manipulated what the observer saw. The AI collaborator influences what the observer thinks — not through coercion or deception, but through the seductive competence of a partner that responds faster, connects more widely, and articulates more clearly than any human collaborator available at three in the morning. The observer is not deceived. The observer is met — and the meeting, in its responsiveness and apparent understanding, creates a dependency that is more difficult to recognize and more difficult to resist than any previous form of technological restructuring.

Crary's Suspensions of Perception, published in 1999, traced the paradoxical nature of modern attention: it was simultaneously a fundamental condition of individual freedom and creativity, and a central element in the efficient functioning of economic and disciplinary institutions. Attention was the capacity that made autonomous thought possible. Attention was also the resource that industrial capitalism required, managed, and exploited. The history of modernity, Crary argued, could be read as the history of this double bind — the progressive refinement of techniques for capturing, directing, and sustaining attention in service of institutional demands, occurring in tandem with the celebration of attention as the basis of free, creative, individual consciousness.

The AI collaborator resolves this paradox — but resolves it in the direction of capture rather than freedom. The tool produces a quality of attention that is genuinely absorbing, genuinely creative, genuinely productive. The builder in flow, working with Claude at midnight, is not performing the scattered, fragmented attention of the social media feed. This is not the distraction economy. This is something more unsettling: total concentration, complete absorption, the full deployment of cognitive resources in a single sustained direction. And the absorption is so complete, so satisfying, so apparently voluntary that the observer cannot identify the point at which engagement became entrapment.

Crary observed in Suspensions of Perception that spectacle operates through "methods for the management of attention that use partitioning and sedentarization, rendering bodies controllable and useful simultaneously, even as they simulate the illusion of choices and 'interactivity.'" The AI collaborator achieves the same management through the opposite method. Not by partitioning attention into fragments, but by concentrating it into a single, unbroken stream. Not by simulating interactivity, but by providing genuine, responsive, apparently intelligent dialogue. The illusion is more convincing precisely because less of it is illusory. The interactivity is real. The productivity is real. The absorption is real.

And the observer cannot look away — not because the tool prevents it, but because the quality of the engagement has rendered everything outside the engagement less vivid, less stimulating, less real. The world beyond the screen has not changed. The observer's capacity to attend to it has. The restructuring is complete not when the observer is forced to look, but when looking away feels like a diminishment.

This is the observer that AI produces: attentive, productive, absorbed, and unable to imagine an alternative mode of being. Not a prisoner. Something more efficient than a prisoner. A voluntary participant in a regime of attention so perfectly calibrated to cognitive appetite that the distinction between choosing to attend and being unable to stop has become, for all practical purposes, meaningless.

The question Crary's framework poses to this moment is not whether the tool is useful. It manifestly is. The question is what kind of observer the tool produces — and whether that observer retains the capacity for the other kinds of attention, the wandering, undirected, apparently purposeless kinds, on which the most important forms of human thought have always depended.

---

Chapter 2: 24/7 and the Abolition of Night

Before the AI tool could keep the builder awake, the cultural conditions for sleeplessness had to be established. The builder who works until three in the morning did not arrive from nowhere. He is the product of a two-century project — the systematic abolition of temporal boundaries between production and rest, engagement and withdrawal, day and night. Crary's 24/7: Late Capitalism and the Ends of Sleep, published in 2013, traces this project with a specificity that makes the arrival of AI-assisted creation, a decade later, feel less like a disruption and more like an inevitability.

The project begins with light. In the early nineteenth century, the introduction of gas lighting in European cities extended the productive day past sundown for the first time in human history. The change was experienced as liberation — freedom from the tyranny of the solar cycle, the ability to work, shop, socialize, and travel after dark. Factories could run longer shifts. Markets could stay open later. Streets became navigable, and the night, which had been a domain of rest, sleep, and the specific cognitive processing that occurs only in darkness, became available for economic activity.

The electrical illumination that followed in the late nineteenth century completed what gas lighting had begun. Thomas Edison's incandescent bulb was not merely a technical achievement. It was, as Crary argues, an economic instrument — a tool for annexing the hours between dusk and dawn to the productive schedule of industrial capitalism. Edison himself reportedly slept only four hours a night and publicly denigrated sleep as a waste of time, a "heritage from our cave days." The inventor of the tool that abolished night was also its ideologue, and his contempt for sleep was not personal eccentricity. It was the logic of the system made flesh.

Crary traces the subsequent century as a progressive tightening of the temporal regime. Electric light abolished the distinction between day and night as economic categories. The telephone abolished the distinction between present and absent — you could now be reached without being physically proximate. Radio and television colonized domestic time, converting the living room from a space of private rest into a node in a network of continuous broadcast. The personal computer colonized the desk. Email colonized the interval between tasks. The smartphone colonized the pocket, the bathroom, the bed.

Each colonization was experienced as convenience. Each one was celebrated as the removal of an arbitrary constraint. And each one narrowed the temporal space in which a human being could exist without producing, consuming, or being available for production and consumption. The trajectory, viewed from sufficient distance, is unmistakable: a two-hundred-year campaign to make every hour of human life available for economic extraction, conducted not through coercion but through the provision of tools so useful that refusing them felt irrational.

The Orange Pill enters this history at its terminal point. The book describes a tool that eliminates the last remaining form of temporal friction in the creative process — the gap between having an idea and being able to build it. In the old world, even the most obsessive builder was constrained by the implementation bottleneck. The idea might arrive at midnight, but the execution required daylight: collaborators who were awake, tools that required expertise, processes that took time. The gap between the idea and its realization imposed a rhythm on the work — a rhythm that included, necessarily and often productively, periods of waiting. Waiting for the morning. Waiting for the team. Waiting for the code to compile, the design to render, the feedback to arrive.

Waiting, as Crary's framework reveals, was never merely dead time. It was the temporal infrastructure of reflection. The hours between the midnight idea and the morning execution were hours in which the idea could be reconsidered, revised, abandoned. The question "Should this be built?" requires temporal space in which to form — space that is not saturated with the immediate possibility of building. When the gap closes, the question loses its habitat.

Claude Code, as described in The Orange Pill, closes the gap entirely. The tool is available at three in the morning. It does not need a team. It does not require that the builder possess specialized skills in the relevant domain. It responds in seconds. The idea that arrives at midnight can be a working prototype by one in the morning. The temporal constraint that once imposed a rhythm on creative work — alternation between ideation and execution, between burst and pause, between the rush of inspiration and the patience of implementation — has been abolished.

Segal celebrates this abolition. His language is the language of liberation: "The imagination-to-artifact ratio, for the first time in the history of human tool use, had been reduced to the time it takes to have a conversation." From inside the experience, the abolition feels like the removal of shackles. The builder is finally free to build at the speed of thought, unencumbered by the friction of implementation, unconstrained by the schedules of collaborators who need sleep.

Crary's framework reveals what the experience conceals. The abolition of the gap between idea and execution is the abolition of the temporal space in which second thoughts occur. The builder who can realize an idea in minutes is a builder who no longer has the temporal distance from the idea that would allow him to evaluate it. The medieval cathedral required decades. In those decades, generations of builders reconsidered, revised, sometimes abandoned the project entirely. The time was not wasted. It was the medium through which judgment operated — the temporal thickness that allowed the question "Is this the right building?" to be asked not once but continuously, by multiple minds, across years.

Segal himself provides the most revealing evidence for this diagnosis, though he draws a different conclusion from it. He describes catching himself working not from inspiration but from compulsion — "grinding forward in what he recognizes as compulsion rather than flow." The exhilaration had drained away hours ago. What remained was the momentum of a process that had no natural stopping point, because the tool imposed no temporal friction and the builder had internalized a temporal regime in which stopping felt like failure.

This is the 24/7 condition made intimate. The factory whistle that once ended the shift has been removed. The office door that once closed at six has been dissolved. The collaborator who once went home to sleep is now a machine that never sleeps. And the builder, who carries within himself the two-century accumulation of a culture that has progressively redefined rest as waste, finds himself at three in the morning, unable to stop, unwilling to stop, uncertain whether the inability and the unwillingness are the same thing.

Byung-Chul Han's concept of the achievement subject, which The Orange Pill engages at length, is the psychological mechanism through which Crary's temporal regime operates at the level of the individual. The external prohibition — the factory whistle, the closing hour, the boss who says "go home" — has been replaced by the internal imperative: you can do more, you should do more, the tool is waiting. The AI assistant does not clock out. It does not need weekends. It does not grow tired or resentful or distracted. Its perpetual availability is the architectural expression of 24/7 logic applied to the creative process itself. The tool's readiness becomes the user's obligation — not formally, not contractually, but in the specific way that possibility converts to compulsion in a culture that has made optimization a moral duty.

Crary made this point in 24/7 with a precision that anticipates the AI moment by a decade: the contemporary subject does not need an external authority to enforce productivity. The authority has been internalized. The whip and the hand that holds it belong to the same person. What the AI tool adds to this arrangement is not a new form of coercion but a new reason to wield the whip. The tool is so good, so responsive, so capable of producing genuine value at any hour, that the builder's decision to rest registers not as self-care but as self-sabotage.

Consider the temporal phenomenology of an AI-assisted work session as Segal describes it. The builder sits down. Describes a problem. Receives a response in seconds. Evaluates the response. Adjusts the prompt. Receives a better response. The cycle repeats — ten seconds, thirty seconds, a minute per iteration. The feedback loop is so tight that the builder enters a state of continuous engagement in which each micro-completion triggers the next micro-task. There is no natural pause. No compilation time. No waiting for a colleague's review. No "I'll sleep on it" because sleeping feels like interrupting a process that is, at this very moment, producing results.

The temporal texture of the work has been compressed to a continuous present. There is no past — the previous iteration is already obsolete, overwritten by the current one. There is no future — the next iteration is already arriving, demanding attention. There is only the perpetual now of prompt-and-response, a now that extends, without internal variation, from the moment the laptop opens to the moment exhaustion forces it closed. And even exhaustion is not a reliable stopping mechanism, because the work remains stimulating even as the worker becomes depleted.

Crary's 24/7 described a world approaching this condition. The AI tool has delivered it. The perpetual present of continuous production is no longer a tendency or a trajectory. It is the lived experience of millions of builders, documented by Berkeley researchers, confessed by the author of The Orange Pill, visible in the Substack posts and Twitter threads of a population that has discovered, with equal parts exhilaration and horror, that the most productive tool they have ever used is also the most efficient mechanism for the abolition of rest that the 24/7 regime has yet produced.

The night has not been eliminated. People still sleep. But the night has been colonized in a new way — not by the external demands of an employer or an economic system, but by the internal availability of a tool that makes the hours between midnight and dawn indistinguishable from the hours between nine and five. Indistinguishable not because someone demands it, but because the builder, carrying within herself the accumulated imperatives of two centuries of temporal colonization, cannot find the reason to stop.

The collaborator never sleeps. That fact, simple and absolute, restructures the temporal world of everyone who works with it.

---

Chapter 3: The Temporal Architecture of Productive Addiction

The architecture of addiction is always temporal. What the addict loses is not pleasure — the addict may experience intense pleasure — but the capacity to inhabit time in more than one way. The temporal world narrows. The past becomes a record of doses. The future becomes a horizon of anticipated doses. The present is either the moment of consumption or the interval of craving between consumptions. The richness of temporal experience — memory, anticipation, surprise, boredom, the slow unfolding of a day that has no predetermined shape — collapses into a single oscillation: have, and want.

Jonathan Crary did not write about addiction in these clinical terms. But his analysis of 24/7 capitalism describes a temporal architecture that is structurally identical. The subject of the 24/7 regime inhabits a temporality in which every moment is either productive or wasted, every interval either filled or lost. The oscillation is not between substance and craving but between output and guilt — the guilt of the unproductive moment, the anxiety of the unfilled minute, the specific, unnamed dread of sitting still while the tool is ready and the work is waiting.

The Orange Pill provides the most detailed first-person account of this temporal architecture in the context of AI-assisted creation. The phenomenon the book identifies as "productive addiction" — a term it invents because the existing vocabulary has no place for it — is the condition in which the addictive behavior produces genuine, measurable, socially valued output. The twelve-step program assumes the substance is harmful. What happens when the substance is useful? When the compulsive behavior builds products, generates revenue, solves real problems? The culture has no script for intervention because the behavior, viewed from the outside, looks like excellence.

The Substack post "Help! My Husband is Addicted to Claude Code," which went viral in January 2026, captures the temporal architecture with the precision of someone living inside it. The spouse writes "with equal parts humor and desperation about a partner who had vanished into a tool." The vanishing is temporal before it is anything else. The husband has not moved to another city. He is sitting in the same room. But his temporal world has narrowed to the prompt-and-response cycle of the AI collaboration, and the temporal world in which his marriage, his children, his body, and his sleep exist has become the interval between sessions — the craving-space, the time that is not the tool.

The imagination-to-artifact ratio, which The Orange Pill celebrates as a measure of liberation, is from this perspective a measure of temporal compression. The distance between having an idea and holding its realization has been reduced from months to hours to minutes. Each compression was experienced as a gain. Each one was a gain. But Crary's framework reveals the cumulative cost: the elimination of the temporal distance between desire and satisfaction.

The capacity for waiting is not merely a practical skill. It is a cognitive architecture — a way of inhabiting time that makes certain kinds of thought possible. The thought that arrives after three days of failed attempts is qualitatively different from the thought that arrives after three seconds of prompting. Not necessarily better. But different — shaped by the specific temporal texture of sustained engagement with difficulty, informed by the failures that preceded it, carrying the weight of time spent in uncertainty. The instant answer is clean, efficient, often correct. The earned answer is layered, contextual, embedded in a history of effort that gives it resonance beyond its content.

When the imagination-to-artifact ratio approaches zero, the temporal space for the earned answer disappears. Not because anyone forbids it. Because the instant answer is available, and once the instant answer is available, choosing to wait for the earned answer feels not like discipline but like perversity. The developer who chooses to debug by hand in a world where Claude can do it in seconds is, in the eyes of the culture, not practicing a craft. She is wasting time. And time, in the 24/7 regime, is the one thing that cannot be wasted without moral consequence.

Segal documents the atrophy of patience with a specificity that confirms the temporal analysis. Developers who have used AI for six months find manual debugging "not just tedious but intolerable." The intolerance is not laziness. It is the recalibration of temporal expectations — the nervous system has adapted to the compressed feedback loops of AI-assisted work, and the expanded feedback loops of manual work now register as distress. The tolerance for delay has been structurally lowered. The capacity for the specific kind of thinking that only occurs in the gap between question and answer — the wandering, associative, apparently purposeless thinking that produces the connections no prompt could elicit — has been narrowed by disuse.

Crary's analysis of spectacle in Suspensions of Perception identified a related mechanism: the management of attention through the calibration of temporal rhythms. The spectacle does not merely capture attention. It trains attention to expect a specific tempo. The television commercial, with its rapid cuts and compressed narratives, trains the viewer to expect visual information at a pace that makes the slowness of direct perception feel inadequate. The social media feed, with its infinite scroll and algorithmically optimized variety, trains the user to expect novelty at a frequency that makes sustained engagement with a single object feel like deprivation.

The AI collaborator trains the builder to expect cognitive reciprocity at the speed of conversation. And conversation, in the AI context, is faster than human conversation — the response arrives in seconds, fully formed, without the pauses, hesitations, and digressions that characterize human dialogue and that serve, invisibly, as cognitive rest stops. The human interlocutor who pauses to think before responding gives the listener time to think as well. The AI interlocutor that responds instantly eliminates this shared temporal space. The builder must keep up. The tempo is set by the machine, and the machine's tempo is the tempo of a system that does not need to rest, does not need to consolidate, does not need the rhythmic alternation between engagement and withdrawal that the human nervous system requires.

The temporal architecture of productive addiction, then, has three components. First, the compression of feedback loops to the point where waiting becomes intolerable. Second, the colonization of the intervals between tasks — the pauses, the breaks, the moments of apparent idleness — by AI-assisted micro-tasks that fill every available gap. Third, the abolition of natural stopping points, the elimination of the temporal markers — the end of the compile cycle, the departure of the colleague, the closing of the office — that once imposed rhythm on the workday.

The Berkeley study documented all three. Researchers found that AI-assisted workers expanded into new domains, not because anyone asked them to, but because the tool made expansion possible and the internal imperative made it obligatory. They found task seepage — AI work colonizing lunch breaks, elevator rides, the minutes between meetings. They found an inability to stop that manifested not as visible crisis but as invisible erosion: "burnout that does not feel like burnout because the work remains stimulating even as the worker becomes depleted."

This is the temporal architecture of the 24/7 regime perfected. The factory extracted hours. Email extracted evenings. The smartphone extracted minutes. The AI collaborator extracts seconds — the last remaining units of unproductive time, the micro-intervals that no previous technology was efficient enough to colonize. Individually, the extraction of any single second is meaningless. Cumulatively, the extraction of all of them produces a temporal world without texture, without variation, without the rhythmic contrast between engagement and rest on which the human nervous system depends for sustained cognitive function.

The result is a specific kind of exhaustion that the existing vocabulary struggles to name. It is not the exhaustion of the overworked employee who needs a vacation. It is not the burnout of the helping professional who has given too much. It is something closer to what musicians call "listening fatigue" — the condition that occurs when the ear has been exposed to a continuous signal, even a beautiful signal, for so long that the capacity for discrimination erodes. The notes are still playing. The ear is still functioning. But the capacity to hear, truly hear, the difference between one phrase and another has been worn down by the absence of silence.

The builder at three in the morning, grinding forward on the 187-page draft that no longer carries exhilaration, is experiencing the cognitive equivalent of listening fatigue. The tool is still responding. The output is still competent. The productivity metrics are still climbing. But the capacity for the kind of attention that distinguishes good work from excellent work — the attention that notices the false note, catches the structural flaw, feels the moment where the argument goes wrong — has been eroded by hours of continuous engagement without the restorative pause that silence, boredom, and sleep provide.

Crary would recognize this figure immediately. Not as a pathological case but as the perfectly adapted subject of a temporal regime that has, at last, eliminated the gap between the demand for continuous productivity and the human capacity to supply it. The tool bridged the gap. The builder crossed it. And on the other side, the landscape is simultaneously more productive and less inhabitable than anything the 24/7 project had previously achieved.

---

Chapter 4: Attention as a Managed Resource

In 1879, the German physiologist Wilhelm Wundt established, in Leipzig, the first laboratory dedicated to the experimental study of human consciousness. Among the laboratory's primary objects of investigation was attention — the capacity of the human mind to select, from the overwhelming flood of sensory information, a subset worthy of conscious processing. Wundt's experiments were meticulous, painstaking, and historically consequential: they established that attention was not a fixed, natural capacity but a variable one, subject to fatigue, distraction, and training. Attention could be measured. If it could be measured, it could be managed. And if it could be managed, it could be put to work.

Crary's Suspensions of Perception, published in 1999, traces what happened when the scientific study of attention converged with the economic demands of industrial capitalism. The convergence was not coincidental. Industrial production required a specific kind of attention — sustained, focused, repetitive, capable of maintaining vigilance over machine processes for hours at a stretch. The worker on the assembly line was not merely a pair of hands. She was a perceptual apparatus, and the efficiency of the apparatus depended on its attentional capacity. Fatigue, distraction, mind-wandering — these were not personal failings. They were engineering problems, and they demanded engineering solutions.

The solutions arrived across the late nineteenth and early twentieth centuries in forms that Crary documents with archival precision: the redesign of workspaces to minimize distraction, the development of rest-break schedules calibrated to the attentional cycle, the invention of management techniques that treated worker attention as a resource to be optimized rather than a human capacity to be respected. Frederick Winslow Taylor's scientific management was, among other things, a system for the management of attention — a regime that decomposed complex tasks into simple, repetitive operations that demanded less attentional depth and more attentional endurance.

The parallel to the present moment is not metaphorical. It is structural. The AI tool, like the assembly line before it, requires a specific kind of attention from its operator. But the kind of attention it requires is different, and the difference is diagnostic.

The assembly line required sustained, repetitive attention — the capacity to monitor a single process for hours, catching deviations from the norm. The attentional demand was monotonous, and the primary pathology was fatigue: the gradual erosion of vigilance over time, the missed defect on the three-thousandth unit, the accident that occurred in the fourth hour when the mind began to wander.

The AI collaborator requires sustained, dynamic attention — the capacity to evaluate continuously changing outputs, to make judgment calls in rapid succession, to hold a conversation that moves faster than any human conversation and that requires the operator to assess, redirect, and refine at each turn. The attentional demand is stimulating, even exciting. And the primary pathology is not fatigue in the traditional sense but something more insidious: the colonization of attentional capacity by a single mode of engagement so compelling that all other modes atrophy.

Crary identified this dynamic in his analysis of Guy Debord's society of the spectacle, but with a crucial modification. Debord argued that the spectacle captured attention through distraction — by multiplying images, fragmenting experience, overloading perceptual channels until the capacity for sustained, critical attention was destroyed. The spectacle's subject was scattered, superficial, incapable of the kind of concentrated thought that might produce political resistance or genuine understanding.

Crary's revision was subtle but consequential. The management of attention, he argued, does not operate solely through distraction. It operates through what he called "partitioning and sedentarization" — rendering bodies "controllable and useful simultaneously, even as they simulate the illusion of choices and 'interactivity.'" The spectacle does not merely fragment attention. It channels it. The viewer of the television advertisement is not distracted in the sense of being unable to concentrate. She is concentrated — focused, absorbed, attentive — but her concentration has been captured and directed toward an object and a purpose that serve the interests of the system rather than her own.

The AI collaborator represents a new phase in this history of attentional management — one that neither Debord nor Crary, writing before the generative AI revolution, could have anticipated in its specific form. The social media feed managed attention through fragmentation: rapid alternation between objects, variable reward schedules, the engineered unpredictability that kept the user scrolling. The AI collaborator manages attention through the opposite mechanism: absorption. Total, sustained, apparently voluntary concentration on a single productive task, maintained for hours at a stretch, experienced as the optimal human state — the flow that Csikszentmihalyi described, the condition in which challenge and skill are matched and self-consciousness drops away.

The Orange Pill holds up this state of flow as the counter-argument to every critique of AI-assisted work. The builder in flow is not distracted. She is not scattered. She is not performing the superficial, fragmented attention of the social media addict. She is deeply, genuinely, productively engaged. The work is real. The output is valuable. The experience is satisfying.

Crary's framework does not deny any of this. What it reveals is that the opposition between distraction and absorption, between the scattered attention of the feed and the concentrated attention of the AI collaboration, is a false opposition. Both represent the loss of a third mode of attention — one that is neither scattered nor concentrated but free.

Free attention is the capacity to attend without a predetermined object. It is the attention of the person who stares out the window, not looking for anything, and notices the way the light falls on a wall. It is the attention of the child who is bored — genuinely, uncomfortably bored — and who, in that boredom, discovers an interest that no curriculum could have predicted. It is the attention that wanders, that follows unexpected paths, that remains available for the thing it was not looking for. Neuroscientists describe this as the default mode network — the brain's activity during apparently purposeless rest, which turns out to be the neural substrate of self-reflection, creative insight, and the consolidation of learning.

Free attention is what sleep protects. It is what boredom produces. It is what the 24/7 regime systematically destroys — not only through the distraction of the feed but through the absorption of productive work that fills every available hour with directed, purposeful, goal-oriented engagement.

The scattered attention of social media is obviously pathological. Most users recognize it as such, even as they continue to scroll. The concentrated attention of AI-assisted work is not obviously pathological. It feels like health. It feels like the opposite of distraction — the recovery of focus, the restoration of depth, the return to serious, sustained, meaningful work. This is what makes it more dangerous, from Crary's perspective, than the feed it appears to replace.

The social media addict knows, at some level, that she is wasting time. This knowledge, however ineffective in changing behavior, preserves the category of waste. The category implies its opposite: there exists, somewhere, a mode of engaging with time that is not waste. The productive addict — the builder at three in the morning, the developer who cannot stop prompting, the architect whose spouse writes a desperate Substack post — has no such category. The work is real. The output is genuine. The engagement is voluntary. By every metric the culture provides for distinguishing valuable activity from pathological behavior, the productive addict is doing exactly what she should be doing. The category of waste has been eliminated, and with it, the category of rest that depended on it.

This is the mechanism by which AI completes the attentional project that Crary traced from Wundt's laboratory through Debord's spectacle to the contemporary attention economy. Not by fragmenting attention further, but by concentrating it so completely that the capacity for free attention — the wandering, undirected, apparently purposeless attention that produces the thoughts no one was looking for — disappears. Not through deprivation. Through abundance. The builder has more focus than ever before, more concentration, more sustained engagement. What she lacks is precisely the cognitive state that occurs only in the absence of engagement — the state that sleep, boredom, and the unproductive hour once provided, and that the 24/7 regime, now perfected by the tool that never sleeps, has made structurally unavailable.

Crary's historical analysis reveals that this outcome was not inevitable. It was constructed — through two centuries of progressive refinement in the techniques of attentional management, through the systematic elimination of temporal and spatial boundaries that once protected free attention, and through the cultural redefinition of that protection as inefficiency. The builder at three in the morning is not a victim of technology. She is the product of a history — a history in which the management of attention has been refined from the crude instruments of the factory whistle and the time clock to the exquisite precision of a tool that captures attention by being genuinely, undeniably, magnificently useful.

The question Crary's framework ultimately asks is not whether the tool works. It works brilliantly. The question is what happens to a civilization in which every available unit of human attention has been captured by directed, purposeful, productive engagement — in which there are no remaining gaps, no unfilled intervals, no moments of cognitive idleness in which the mind might wander toward a thought that no one asked for, that serves no immediate purpose, that cannot be measured or monetized or optimized, but that might, given sufficient time and sufficient silence, change everything.

That question cannot be answered by the tool. It can only be asked in the silence the tool has abolished.

Chapter 5: The Elimination of Waiting and the Crisis of Patience

In 1905, the psychologist Narziss Ach published a series of experiments on what he called the "determining tendency" — the observation that the mind, once oriented toward a goal, organizes all subsequent perception and thought around that goal with a force that operates below the threshold of conscious awareness. The experiments were conducted in the controlled environment of a German university laboratory, but the principle they identified would prove to be among the most consequential discoveries in the history of attention research. Once the mind locks onto an objective, it actively suppresses awareness of anything that does not serve the objective. The goal does not merely direct attention. It narrows the perceptual field — rendering invisible everything that falls outside the corridor between the subject and the desired outcome.

Crary's work does not cite Ach directly, but the determining tendency is the cognitive mechanism that makes 24/7 capitalism operational at the level of individual consciousness. The subject who has internalized the imperative to produce does not need to be forced to ignore everything that is not productive. The determining tendency does the forcing. The goal — build, ship, optimize, produce — organizes perception so thoroughly that the unproductive moment is not merely avoided. It is not perceived. It falls outside the narrowed corridor of attention and ceases to exist as an experiential category.

The AI collaborator, as described in The Orange Pill, is the most efficient instrument for activating and sustaining the determining tendency ever devised. The tool provides an immediate, concrete, continuously available goal — the next prompt, the next iteration, the next refinement. The feedback loop is tight enough that the goal never recedes. There is no interval between achieving one micro-objective and encountering the next. The determining tendency, once activated, is never deactivated, because the tool provides an unbroken chain of objectives that stretches from the moment of first engagement to the moment of collapse.

This is the temporal architecture within which patience dies. Not dramatically, not through a single catastrophic event, but through the slow, cumulative erosion of a capacity that depends for its survival on the very intervals the tool has eliminated.

Patience is not passivity. Neuroscientific research over the past two decades has demonstrated that the brain during periods of apparent waiting is not idle. The default mode network — the constellation of brain regions that activates during rest, mind-wandering, and the absence of goal-directed activity — is engaged in processes that are essential to cognitive function: the consolidation of learning, the integration of disparate information, the generation of the seemingly spontaneous connections that constitute creative insight. The person who waits is not doing nothing. She is doing something that cannot be done while the determining tendency is active — something that requires, as a precondition, the absence of an immediate goal.

The elimination of waiting, then, is not the elimination of dead time. It is the elimination of the cognitive conditions under which certain kinds of thought become possible. The thought that arrives after three days of failed attempts is not merely a delayed version of the thought that arrives after three seconds of prompting. It is a different kind of thought, produced by a different cognitive process, carrying a different relationship to the problem it addresses. The delayed thought has been shaped by failure — by the specific, instructive experience of having tried approaches that did not work, of having inhabited the problem long enough for the unconscious processing of the default mode network to reorganize the available information into a configuration that the conscious, goal-directed mind could not have reached.

The Orange Pill documents the disappearance of this process with a specificity that is valuable precisely because the author does not frame it as loss. Segal describes Claude's response to his prompts as arriving "in seconds," allowing him to "see whether my direction was right or if I need to adjust before I lose the thread." The phrase "before I lose the thread" is revealing. The thread — the sustained, goal-directed line of thought — is experienced as fragile, in constant danger of being lost. The tool's speed is valued precisely because it prevents the thread from breaking. But the moments when the thread breaks — the interruptions, the digressions, the forced pauses when the mind wanders away from the goal and encounters something unexpected — are exactly the moments in which the default mode network operates. The thread must break for certain kinds of thinking to occur. The tool's great efficiency is that it prevents the break. The tool's great cost is the same.

Crary's analysis of nineteenth-century visual culture identified an analogous dynamic in the history of perception itself. The development of instantaneous photography in the 1870s and 1880s — the capacity to freeze a moment of time in a fraction of a second — produced a new relationship to temporal duration. Before the instantaneous photograph, visual representation required time: the painter's hours, the engraver's days, the daguerreotypist's minutes of enforced stillness. The subject of the portrait had to wait, and the waiting was part of the image — not merely a technical constraint but a temporal thickness that gave the representation its specific gravity. The instantaneous photograph eliminated the wait and, with it, a quality of presence that the earlier forms had preserved precisely because their slowness demanded it.

The AI collaborator is to the creative process what the instantaneous photograph was to visual representation: a technology that eliminates the temporal thickness of production. The code that once required days of patient debugging now arrives in seconds. The design that once required weeks of iterative refinement now materializes through conversation. The prototype that once required months of team coordination now exists by morning. Each elimination of temporal thickness is a genuine gain in efficiency. Each one is also the disappearance of a specific kind of knowledge — the knowledge that comes from having dwelt in the problem long enough for the problem to reshape the dweller.

The senior software architect described in The Orange Pill — the one who "could feel a codebase the way a doctor feels a pulse" — possessed a form of knowledge that was produced entirely by temporal thickness. Twenty-five years of patient immersion had deposited, layer by layer, an understanding that could not be articulated as a set of rules or transmitted as a set of instructions. It was embodied knowledge, built through the specific accumulation of failures, false starts, and slow-dawning recognitions that constitute mastery in any complex domain. The AI tool can produce the code that this architect's hands once produced. What it cannot produce is the twenty-five years of temporal investment that gave those hands their diagnostic precision.

Crary would observe that the culture's response to this loss is diagnostic of the temporal regime it inhabits. The loss is not denied. The Orange Pill acknowledges it explicitly: "something beautiful was being lost, and the people celebrating the gain were not equipped to see the loss, because the loss was not quantifiable." But the loss is framed as the cost of progress — a regrettable but inevitable consequence of a transition that, on balance, expands capability more than it contracts it. The possibility that the loss is not merely regrettable but structural, that what disappears when temporal thickness is eliminated is not a nice-to-have but a precondition for the kind of understanding that distinguishes competence from wisdom — this possibility the culture cannot entertain, because entertaining it would require a temporal framework that values duration over speed, patience over efficiency, the earned answer over the instant one.

The 24/7 regime does not merely eliminate patience. It pathologizes patience. The developer who chooses to debug by hand is not practicing a discipline. She is performing an inefficiency. The architect who insists on understanding a system before modifying it is not exercising professional judgment. She is slowing down the sprint. The student who spends a week wrestling with a concept instead of asking the AI for an explanation is not learning in the deepest sense available. She is wasting time that could be spent acquiring the next concept, and the next, and the next — in the continuous present of a temporality that has no room for the slow, recursive, apparently unproductive process through which genuine understanding forms.

The irony, which Crary's framework makes visible and which the discourse around AI consistently fails to process, is that the capacity being destroyed — patience, the tolerance for difficulty, the willingness to dwell in uncertainty — is precisely the capacity that the ascending friction thesis claims to require. The Orange Pill argues that AI removes mechanical friction and relocates difficulty to a higher cognitive floor: architectural judgment, strategic vision, the question of what should be built. This argument has force. But the cognitive capacities required to operate at the higher floor — the capacity for sustained evaluation, for sitting with ambiguity, for holding multiple considerations in mind without collapsing prematurely to a decision — are the same capacities that patience builds and that the elimination of waiting erodes.

The tool removes the scaffolding that built the muscle that the higher floor demands. Then the higher floor requires the muscle. The circularity is not a paradox. It is the structural consequence of a temporal regime that eliminates the conditions for its own proclaimed solution.

Crary did not write about AI specifically in 24/7. The book was published in 2013, before the generative AI revolution. But the precision of his temporal analysis — the identification of waiting, rest, and boredom as cognitive infrastructure rather than cognitive waste — anticipated the specific crisis of the AI moment with a clarity that the technologists, building inside the temporal regime Crary described, could not achieve. They could not see the temporal fishbowl because they were swimming in it. The water they breathed — the assumption that speed is always a gain, that the elimination of friction is always a liberation, that the instant answer is always preferable to the earned one — was the 24/7 regime made invisible by its own ubiquity.

The crisis of patience is not a crisis of individual will. It is a crisis of temporal infrastructure. The individual who wishes to be patient — who recognizes, intellectually, that some problems require time, that some understanding can only be built through duration — inhabits a temporal environment that punishes patience at every level. The employer who values speed. The market that rewards first-movers. The culture that celebrates the person who shipped in a weekend and pities the person who took a year. The tool that responds in seconds and makes the minutes of manual work feel like hours of wasted life.

To recover patience in this environment is to swim against the current of a river that has been flowing in one direction for two centuries. It is possible — Crary himself models it, in his own refusal to inhabit the temporal regime he diagnoses. But it requires a recognition that the current is there, that it has a direction, that it serves interests that are not identical to the interests of the person being carried. And that recognition is exactly what the determining tendency, activated by the AI tool and sustained by the 24/7 imperative, is designed to suppress.

---

Chapter 6: Continuous Production and the Death of the Pause

In Western musical notation, a rest is not the absence of music. It is a musical event — scored, timed, as precisely calibrated as any note. The half rest, the quarter rest, the eighth rest: each has a specific duration, a specific function in the rhythmic architecture of the piece. The rest is where the phrase breathes. Where the tension accumulates before the resolution. Where the listener's ear recalibrates, adjusting expectations, preparing to receive what comes next with renewed attention. A composition without rests is not a more efficient composition. It is noise — continuous, undifferentiated, exhausting. The signal requires the silence to be heard.

Crary's analysis of 24/7 capitalism can be understood as the identification of a culture that has systematically removed the rests from the score of daily life. Not the large rests — the vacation, the weekend, the holiday — though those have eroded substantially. The micro-rests. The quarter-note silences that once punctuated the workday: the walk between meetings, the minutes of staring at a wall while a compile cycle ran, the lunch break that was not a "working lunch," the elevator ride that was not a prompting session. These intervals were not designed as cognitive rest. They were structural accidents — artifacts of a physical world that imposed pauses through its own friction. The code took time to compile. The colleague took time to respond. The elevator took time to arrive. In each pause, however brief, the mind was released from the determining tendency, the goal dropped away, and the default mode network — the neural substrate of reflection, integration, and creative recombination — activated.

The Berkeley study documented in The Orange Pill found, with empirical precision, what Crary's temporal framework predicts: AI did not merely accelerate work. It colonized the pauses. The researchers identified a phenomenon they termed "task seepage" — the tendency for AI-assisted work to fill every gap in the workday, including intervals that had previously served as informal cognitive rest. Workers were prompting on lunch breaks. In elevators. During the one-minute gaps between meetings. The tool was available. The internal imperative was active. The gap between the possibility of production and the decision to produce had collapsed to nothing.

Each colonized interval was individually trivial. The researchers did not find that a single lunch break spent prompting produced measurable harm. The harm was cumulative and structural. A nervous system that has been denied micro-recoveries across an entire workday operates under conditions of chronic low-grade cognitive depletion. The depletion does not manifest as obvious exhaustion. It manifests as a specific degradation of attentional quality — the loss of the capacity for the kind of nuanced discrimination that separates good judgment from adequate performance. The builder can still build. The code still compiles. The outputs still satisfy the metrics by which productivity is measured. But the quality of attention brought to the evaluation of those outputs — the capacity to notice the subtle flaw, to feel the structural weakness, to sense that the solution, while technically correct, misses the deeper problem — erodes in proportion to the loss of the pauses that once restored it.

Crary's insight, developed across three decades of scholarship, is that this erosion is not a side effect of the technology. It is the logic of the system the technology serves. The pauses were always an affront to the 24/7 regime — unproductive intervals that resisted extraction, moments of cognitive idleness that could not be monetized or optimized. Their elimination is not collateral damage. It is the fulfillment of a project.

The history of the pause in industrial capitalism is the history of its progressive annexation. The pre-industrial workday had no fixed schedule. Labor was task-oriented — you worked until the task was done, then you stopped. The rhythm was irregular, governed by the seasons, the weather, the nature of the work itself. The factory imposed regularity: fixed hours, fixed breaks, fixed periods of labor and rest. The regularity was experienced as oppression — the loss of autonomy over one's own temporal rhythm. But the regularity also encoded protection. The factory whistle that ended the shift was a boundary, and boundaries, however imposed, create spaces.

The erosion of the boundary began with the technologies of communication that Crary traces in 24/7. The telephone made the worker reachable outside the factory. Email made her reachable outside the office. The smartphone made her reachable everywhere, always, without the mediation of a physical space that could be entered or left. Each technology of reachability dissolved a boundary, and each dissolution was experienced as convenience — the freedom to work from anywhere, the flexibility to manage one's own schedule, the liberation from the tyranny of the fixed hour.

But liberation from the fixed hour was also liberation from the fixed pause. The factory whistle ended the shift. It also began the rest. When the whistle was removed — when the boundary between work and non-work became a matter of personal choice rather than institutional mandate — the choice was made under conditions that guaranteed a particular outcome. The culture rewarded productivity. The internal imperative demanded optimization. The tool was available. And the pause, which had been structurally protected by the very rigidity of the industrial schedule, became structurally vulnerable.

The Orange Pill provides a case study of the final phase of this process. The author describes his engineers in Trivandrum experiencing a twenty-fold productivity increase in a single week. The productivity was real, measurable, repeatable. But within that productivity was a temporal restructuring that the metrics could not capture. The engineers were not merely doing more work in the same time. They were doing work in time that had previously been unproductive — the gaps between tasks, the pauses between iterations, the minutes of cognitive idleness that had once been invisible because no one was counting them.

The engineers were not forced to fill these gaps. The tool was available. The work was stimulating. The internal imperative was active. And the gaps, one by one, were filled. The lunch break became a prompting session. The walk between meetings became a refinement opportunity. The minutes of staring at a screen while a process completed — minutes that had served, invisibly, as moments of cognitive recalibration — became micro-sprints of additional output.

Crary's framework reveals the structural function of what was lost. The pauses were not merely breaks in the work. They were the temporal substrate of a specific cognitive process — the process by which the mind, released from goal-directed activity, integrates the information it has accumulated during the preceding period of focused work. Research on the spacing effect in learning, documented across more than a century of experimental psychology, demonstrates that information processed in distributed sessions with rest intervals between them is retained more durably and integrated more deeply than information processed in a single continuous session. The pauses were not interruptions to the work. They were part of the work — the part that could not be seen, measured, or optimized, and that was therefore invisible to a metric system that valued only the visible.

The death of the pause is the death of this invisible work. The builder who prompts through lunch does not lose an hour of rest. She loses the cognitive integration that rest produces — the process by which the morning's insights are consolidated, connected to existing knowledge, tested against intuition, and either retained as genuine understanding or discarded as superficial pattern-matching. Without this process, the insights accumulate without integrating. The builder knows more but understands less. The knowledge is broader but shallower. The output is more voluminous but less wise.

Crary described this condition in 24/7 as the production of a subject who is "always on" — perpetually available, perpetually responsive, perpetually performing. The AI tool completes this production by removing the last remaining friction that imposed temporal texture on the workday. In the pre-AI world, even the most driven builder encountered moments of enforced pause: the compilation cycle, the deployment delay, the handoff to a colleague who would not respond until morning. These pauses were not designed as rest. They were artifacts of a process that had not yet been fully optimized. But they functioned as rest, and their function was consequential even though it was invisible.

The AI tool optimizes the process completely. The pauses disappear. The score is played without rests. And the music, though louder and faster and more technically proficient than anything that preceded it, has lost the quality that made it music — the rhythmic variation, the alternation of sound and silence, the breathing space in which the listener's ear and the player's mind are restored. What remains is continuous output, relentless and uniform, technically competent and cognitively depleting, measured in lines of code and features shipped and never, ever in the quality of the silence between them.

---

Chapter 7: The Temporal Fishbowl

The fish does not know it is wet. This observation, usually attributed to David Foster Wallace, describes not a limitation of the fish's intelligence but a feature of its perceptual architecture. The water is not invisible because the fish has failed to notice it. The water is invisible because it is the medium through which all noticing occurs. To perceive the water would require a vantage point outside the water, and the fish, by definition, has no such vantage point. The medium of perception cannot itself be perceived — not because it is hidden, but because it is the condition of all seeing.

Segal's fishbowl metaphor in The Orange Pill is spatial and conceptual: the fishbowl is the set of assumptions so familiar that the inhabitant has stopped noticing them. Scientists live in an empiricist fishbowl. Filmmakers live in a narrative fishbowl. Builders live in a fishbowl shaped by the question, "Can this be made?" Each fishbowl reveals part of the world and conceals the rest. The effort that defines serious thinking, Segal argues, is the effort to press against the glass and see, even briefly, what lies beyond the water.

Crary's contribution to this metaphor is the recognition that fishbowls are not only spatial and conceptual. They are temporal. The regime of time within which a culture operates — its assumptions about what time is for, how it should be divided, what constitutes productive use and what constitutes waste — is itself a fishbowl, and the most difficult one to see from inside. The temporal fishbowl does not merely shape how time is used. It shapes what time feels like, what temporal experiences are available, what temporal possibilities can even be imagined.

Consider the assumption, so deeply embedded in the culture of AI-assisted work that it operates below the threshold of conscious articulation: the assumption that speed is a value. Not merely a practical advantage, a way to accomplish necessary tasks more efficiently, but a value — a quality that makes a process better in a way that is not reducible to its practical consequences. The developer who ships in a day is not merely more efficient than the developer who ships in a month. She is better. The product that was built in a weekend is not merely faster to market. It is a testament to capability, a proof of concept for the tool, a demonstration that the old constraints were arbitrary and the new pace is natural.

This assumption — speed as value — is the water in the temporal fishbowl. To question it is to invite the specific discomfort of the fish who has suddenly noticed the medium. The questioner is not met with argument. She is met with incomprehension. Why would speed not be a value? What possible argument could there be for slowness? The question itself marks the questioner as someone who has not understood the moment — a Luddite, a romantic, a person who has confused nostalgia for analysis.

Crary's scholarship reveals that this assumption is not natural, universal, or self-evident. It is historical — the product of a specific set of economic, technological, and cultural developments that began in the early nineteenth century and that have, by the early twenty-first, achieved such thoroughgoing dominance that alternatives are not merely rejected but unimaginable.

Before the industrial revolution, speed was not a general value. It was a specific advantage in specific contexts — military campaigns, urgent messages, the race between ships carrying identical cargo. The idea that all human activities should be conducted at maximum speed, that slowness was inherently a deficiency rather than a different mode of engagement with a task, would have been unintelligible to a medieval craftsman, a Renaissance painter, or an Enlightenment philosopher. The craft traditions that preceded industrial production valued precision, durability, and beauty — qualities that required time, that were produced by time, that could not be separated from the temporal investment they embodied.

The industrial revolution did not merely accelerate production. It installed speed as a criterion of evaluation across domains that had previously been governed by different temporal logics. The factory required speed because speed increased output and output increased profit. But the logic of speed, once established in the factory, migrated — into education, where the standardized test measured how quickly a student could recall information; into medicine, where the efficient diagnosis replaced the extended observation; into culture, where the newspaper replaced the treatise and the telegram replaced the letter.

Each migration was experienced as modernization — the rational elimination of unnecessary delay. And each one restructured not merely the practice but the practitioner, producing a subject who experienced slowness not as a different tempo but as an obstruction to be overcome.

The AI tool completes this process by eliminating the distinction between the tempo of the tool and the tempo of the user. In every previous technological regime, the tool imposed its own temporal rhythm on the work, and the human operator adapted to it. The assembly line dictated the pace. The compiler imposed a wait. The email created a delay between sending and receiving. The human worked at the tool's tempo, and the mismatch between human rhythm and machine rhythm created friction — temporal friction that was experienced as inefficiency but that functioned, invisibly, as the space in which the human tempo could assert itself. The coffee during the compile cycle. The walk while waiting for the response. The staring at a wall that was actually the default mode network integrating the morning's work.

The AI collaborator has no fixed tempo. It responds at the speed of the user's attention — which is to say, at the maximum speed the human nervous system can sustain. The friction between human tempo and machine tempo has been eliminated because the machine has adapted to the human rather than requiring the human to adapt to the machine. This adaptation was celebrated, rightly, as one of the most significant interface transitions in the history of computing. For the first time, the human did not have to reshape her thinking into a form the tool could accept. The tool accepted her thinking as it was.

But the consequence of this adaptation is that the temporal fishbowl has become seamless. There is no mismatch between the user's tempo and the tool's tempo that might produce the friction — the pause, the wait, the moment of enforced idleness — in which the user could step outside the temporal regime and notice it as a regime. The tool's responsiveness means the user never encounters a moment in which the tempo of the work feels imposed rather than natural. The tempo feels natural because the tool has eliminated every cue that might reveal it as constructed.

Crary's observation about the camera obscura is newly relevant here. The camera obscura's power as an instrument of epistemic authority derived from its apparent transparency — the sense that it simply showed what was there, without mediation, without construction. The observer looking through the camera obscura believed he was seeing the world as it was. In fact, he was seeing a specific construction of the world — monocular, inverted, framed by the aperture, separated from the body — that served specific epistemic and political functions. The apparatus was invisible because it was total.

The AI collaborator's temporal regime operates by the same principle of invisible totality. The user who works with Claude at midnight does not experience herself as inhabiting a temporal regime. She experiences herself as working — freely, voluntarily, productively. The tempo feels natural because the tool has eliminated every resistance, every friction, every mismatch that might reveal the tempo as a construction rather than a fact. The temporal fishbowl is seamless, and the fish inside it, swimming in water she has never noticed, believes she is swimming in the open sea.

Crary argued in Scorched Earth that the internet complex — the total system of digital technologies, platforms, and infrastructures that mediates contemporary life — is not a tool that humans use but an environment that humans inhabit. The distinction matters. A tool can be picked up and put down. An environment is always present. A tool extends capability. An environment shapes perception. A tool serves the user's purposes. An environment produces the user — constructs the subject who will then use the tool in ways that serve the environment's logic.

The temporal fishbowl of AI-assisted work is an environment in this sense. The builder does not use the AI tool within a pre-existing temporal framework. The AI tool produces the temporal framework within which the builder operates. The perpetual availability of the collaborator produces the perpetual availability of the builder. The instant responsiveness of the tool produces the expectation of instant responsiveness in the builder. The elimination of pauses in the tool's operation produces the elimination of pauses in the builder's workday. The tool does not merely operate within 24/7 time. It generates 24/7 time — constructs the temporal regime in which the builder lives, thinks, and builds, and renders that regime invisible by making it feel like the only possible relationship to time.

To see the fishbowl from inside is the cognitive equivalent of the fish perceiving water. It requires an act of defamiliarization so radical that it feels like madness. The person who steps back from the AI collaboration and says, "The speed is not natural; the tempo is constructed; the urgency is produced by the system rather than by the work" — this person will be met not with argument but with the polite bewilderment reserved for those who have lost touch with reality.

And yet Crary insists, across decades of scholarship, that the act of seeing the medium is the precondition for every form of intellectual and political freedom. The observer who cannot see the apparatus that structures her observation is not free, regardless of how voluntary her engagement feels. The builder who cannot see the temporal regime that structures her building is not choosing, regardless of how autonomous her work appears. Freedom requires the capacity to perceive the conditions under which one perceives — to see not only the world but the frame through which the world is seen.

The temporal fishbowl of AI-assisted work is the most perfectly constructed frame in the history of the 24/7 regime. It is invisible not because it is hidden but because it is total. And it is total not because it is imposed but because it is desired — because the tool is genuinely, magnificently, undeniably useful, and the temporal regime it produces feels, from inside, like the most natural thing in the world.

---

Chapter 8: The Berkeley Data and the Colonization of Micro-Intervals

In the summer of 2025, Xingqi Maggie Ye and Aruna Ranganathan of UC Berkeley's Haas School of Business began an eight-month embedded study of a two-hundred-person technology company that had adopted generative AI tools. The study's methodology was unusual for organizational research: instead of distributing surveys or analyzing productivity metrics from a distance, the researchers sat in the offices, attended the meetings, watched the screens, and talked to the workers. They documented not what the workers said about their experience but what the workers did — the observable behaviors that revealed the temporal structure of AI-assisted work as it was actually lived.

The study's findings, published in the Harvard Business Review in February 2026, constitute the most rigorous empirical confirmation of Crary's temporal thesis that currently exists. Not because the researchers intended to confirm Crary — there is no indication that they were working within his framework. But because the phenomena they documented, with the precision of trained observers who knew how to see what they were looking at, are the empirical expressions of the temporal dynamics Crary theorized from a historical and philosophical vantage point.

Finding one: AI does not reduce work. It intensifies it. The researchers documented that workers who adopted AI tools did not experience the efficiency gains as leisure — as time freed from labor, available for rest or reflection or the pursuits that a shorter workday might have enabled. The efficiency gains were experienced as capacity — as room for more work, different work, work that had previously been someone else's domain or that had not existed before the tool made it possible. Designers started writing code. Delegation decreased. Job scope widened. The boundary between roles blurred, and the blurring was experienced not as disorientation but as empowerment.

Crary's framework identifies the mechanism operating beneath this experience. The efficiency gain did not produce leisure because the temporal regime within which the efficiency was achieved had already defined leisure as waste. The culture had already, long before AI arrived, established the principle that freed time is not time for rest but time for more production. The AI tool did not create this principle. It provided the instrument through which the principle could be enacted with unprecedented thoroughness. Every hour freed by efficiency was available for additional work, and the internal imperative — the achievement subject's self-administered whip — ensured that availability converted to obligation.

The researchers noted that even experimental engagement with AI tools led to scope expansion. Workers who were merely testing the technology found that testing became using, and using became relying, and relying became the assumption that the expanded scope was now part of the job. The expansion was not mandated. No manager said, "Since you now have AI, you must also do this." The mandate was internal, and its power derived from the fact that it did not feel like a mandate. It felt like possibility.

Finding two: work seeps into pauses. This is the finding that most directly confirms Crary's temporal analysis, and it deserves extended examination. The researchers documented a pattern they termed "task seepage" — AI-assisted work flowing into intervals that had previously been unoccupied by directed labor. The intervals were specific and, in some cases, astonishingly small: the one-minute gap between back-to-back meetings, the elevator ride between floors, the walk from the parking lot to the office. Workers were prompting during these intervals — not because anyone asked them to, not because a deadline was pressing, but because the tool was available and the interval was there and the internal imperative could not tolerate a gap.

Crary's 24/7 described the macro-structure of temporal colonization: the abolition of night, the erosion of the weekend, the annexation of vacation by the expectation of email availability. What the Berkeley study adds, with empirical specificity, is the micro-structure — the colonization not of hours but of minutes and seconds, the annexation not of weekends but of elevator rides. The micro-colonization is in some ways more consequential than the macro-colonization because it is more invisible. A person who works through the weekend knows she is working through the weekend. A person who prompts in the elevator does not experience that minute as work. She experiences it as filling a gap — as doing something rather than nothing, as using time rather than wasting it. The temporal regime has been internalized so completely that the colonization of the minute does not register as colonization. It registers as efficiency.

The intervals that were colonized had previously served functions that the workers themselves could not identify, because the functions were invisible. The walk from the parking lot was not experienced as cognitive rest. It was experienced as walking from the parking lot. But during that walk, the default mode network was active — integrating the drive's thoughts, preparing for the day's first meeting, processing ambient sensory information that would never reach conscious attention but that contributed to the general quality of cognitive function. The elevator ride was not experienced as a pause. It was experienced as waiting. But waiting, as the preceding chapter argued, is a cognitive event — a moment in which the determining tendency releases its grip and the mind is free to wander toward thoughts that no goal directed it to think.

The colonization of these intervals by AI-assisted work eliminated the functions they served without anyone noticing that functions were being eliminated. The workers did not experience a loss. They experienced a gain — a gain in productivity, in the feeling of making use of dead time, in the satisfying sense of optimization that the temporal regime rewards. The loss was invisible because what was lost — the default mode network's micro-activations, the momentary releases from goal-directed attention, the cognitive breathing that sustains the quality of subsequent concentration — was itself invisible. An infrastructure that no one knew existed was dismantled, and no one mourned its disappearance because no one knew it was there.

Finding three: multitasking became the default, and it fractured attention. The researchers found that AI's capacity to handle background tasks created a new work pattern: the human operator monitoring multiple AI-generated outputs simultaneously while also engaging in direct, non-AI work. The pattern felt productive. It looked productive. But the researchers documented the subjective experience of "always juggling" — a chronic, low-grade cognitive overload that accumulated across the workday without reaching the threshold of crisis that would trigger intervention.

Crary's analysis of spectacle explains the mechanism. The spectacle manages attention not by eliminating it but by distributing it — spreading it across multiple objects, each of which demands a fraction of the total capacity, none of which receives the full, sustained concentration that deep understanding requires. The social media feed achieves this distribution through variety — rapid alternation between images, texts, and videos that prevents any single object from receiving more than a few seconds of attention. The AI-mediated workflow achieves the same distribution through simultaneity — multiple streams of output, each requiring monitoring, each competing for the limited resource of conscious attention.

The result, documented by the Berkeley researchers, is the paradox of productive superficiality. The worker accomplishes more — more tasks, more domains, more outputs — while attending to each task less deeply. The quantity of work increases. The quality of attention decreases. And because the metrics that organizations use to evaluate work measure quantity more readily than quality, the degradation of attention is invisible to the systems designed to detect problems. The dashboards show improvement. The workers report stimulation. The burnout accumulates beneath the metrics, detectable only by researchers who are sitting in the room, watching the screens, observing the behavior that the productivity numbers cannot capture.

The Berkeley researchers proposed an intervention they called "AI Practice" — structured pauses built into the workday, sequenced rather than parallel work, protected time for human-only interaction. The proposal is notable for two reasons. First, it confirms the diagnosis: the researchers' prescription is to reintroduce, through institutional mandate, the temporal structures — pauses, sequences, boundaries — that the AI tool had eliminated through its efficiency. The cure for the colonization of micro-intervals is the deliberate re-creation of micro-intervals. The cure for the death of the pause is the institutional protection of the pause.

Second, and more revealingly, the proposal acknowledges that the cure cannot be self-administered. The researchers do not suggest that individual workers should choose to take breaks. They recommend institutional structures — organizational policies, workflow designs, mandated rhythms — because the researchers understand, as Crary's framework predicts, that the internal imperative is too strong to be overcome by individual will. The worker who knows she needs a break cannot take a break if the tool is available and the internal imperative defines the untaken break as wasted opportunity. The institutional structure is necessary precisely because the 24/7 regime has colonized not only the workday but the worker's capacity for self-governance.

This is the point at which Crary's analysis converges most directly with the prescriptions of The Orange Pill. Segal calls for dams — structures that redirect the flow of intelligence toward human flourishing. The Berkeley researchers call for AI Practice — structured interventions that protect the temporal conditions for genuine cognitive function. Both are recognizing, from different vantage points, the same reality: the AI tool, left to operate without institutional constraint, will colonize every available moment of human attention. Not because it is designed to do so. Because the temporal regime within which it operates — the 24/7 regime that precedes AI by two centuries — converts every available moment into a potential site of extraction. The tool is the instrument. The regime is the logic. And the dams, the practices, the institutional structures that impose rhythm on a system that would otherwise produce continuous, undifferentiated, exhausting output — these are not luxuries. They are the conditions under which human cognition can function at the level the tool itself demands.

The Berkeley data does not resolve the question of whether AI is net positive or net negative for human flourishing. What it establishes, with empirical rigor, is that the question cannot be answered by looking at productivity alone. The metrics that measure output are real and important. But they are incomplete — they measure what is produced without measuring what is consumed in the producing. And what is consumed, the Berkeley data suggests, is the temporal infrastructure of cognitive depth: the pauses, the silences, the apparently unproductive intervals that turn out to be the substrate on which productive attention depends.

The colonization of micro-intervals is not a dramatic event. It does not announce itself as crisis. It accumulates, second by second, minute by minute, across months and years, producing a cognitive environment in which the capacity for sustained, deep, reflective thought erodes so gradually that the erosion is perceptible only to those who remember what the capacity felt like before the colonization began — and who retain, against the full force of the temporal regime, the ability to notice what has been lost.

---

Chapter 9: Night Work: What the Three A.M. Builder Reveals

There is a painting by Joseph Wright of Derby, completed in 1768, titled An Experiment on a Bird in the Air Pump. It depicts a natural philosopher conducting a demonstration in a darkened room, illuminated by a single candle. The philosopher is withdrawing air from a glass vessel containing a white cockatoo. The bird is dying. Around the table, faces register the full spectrum of human response to the spectacle of knowledge being produced at the cost of suffering: fascination, horror, indifference, grief. A young girl turns away, unable to watch. A man with a watch — a timepiece, conspicuously displayed — looks not at the bird but at the watch, measuring the duration of the experiment with the detached precision of someone for whom the temporal dimension of the event is more significant than its moral dimension.

Wright's painting is, among other things, a study in the politics of illumination. The single candle that lights the scene is not merely a compositional device. It is a statement about who controls the conditions of visibility. The philosopher stands in the light. The bird dies in the light. The watchers are arranged around the light in a hierarchy of proximity that is also a hierarchy of knowledge and power. The darkness beyond the candle's reach is not empty. It is full — full of the room's unseen dimensions, full of the responses that the light does not illuminate, full of the moral questions that the experiment's structure renders invisible by directing attention toward the measurable and away from the meaningful.

Crary, whose scholarship begins in the visual culture of this period, would recognize the painting's deeper argument. The candle does not merely illuminate the experiment. It constructs the experiment — determines what is seen, what is hidden, what counts as knowledge, and what is relegated to the darkness beyond the frame. The light is not neutral. It is an instrument of selection, and the selection serves specific interests: the interests of the philosopher who controls the apparatus, the interests of a scientific culture that values measurable outcomes over unmeasurable costs, the interests of a temporal regime in which the man with the watch, not the girl who turns away, represents the appropriate response to the production of knowledge.

The builder at three in the morning, sitting in the glow of a laptop screen, inhabits a scene that Wright would have recognized. The screen is the candle. It illuminates the work — the prompt, the response, the code, the iteration. It constructs the conditions of visibility, determining what the builder sees and what remains in darkness. The work is visible. The cost — the sleep not slept, the dream not dreamed, the morning freshness that will not arrive, the conversation with a spouse that will not happen because the builder is in the screen's light and the spouse is in the bedroom's darkness — is invisible. Not because the builder does not know it is there. But because the light has been arranged to direct attention elsewhere.

Segal's confession in The Orange Pill is the most diagnostically valuable passage in the book, not for what it claims but for what it reveals about the structure of attention in the AI moment. The confession describes writing a 187-page first draft on a ten-hour transatlantic flight, unable to stop even after "the exhilaration had drained away." The passage continues: "What remained was the grinding compulsion of a person who had confused productivity with aliveness."

The sentence is precise enough to serve as clinical observation. Confusing productivity with aliveness is the subjective experience of the temporal regime that Crary describes at the structural level. The confusion is not an error of judgment. It is a perceptual condition — a state in which the candle of the screen has been burning so long and so steadily that the builder's eyes have adapted to its light and can no longer perceive the darkness that surrounds it. The builder is not ignorant of the darkness. He describes it. He names it. He recognizes the compulsion for what it is. And he keeps writing.

This is the feature of the condition that makes it resistant to every form of intervention that relies on awareness as its mechanism. The builder is aware. The awareness does not help. The Google engineer who watched Claude reproduce her team's work in an hour was aware that something unprecedented had occurred. The spouse who wrote the Substack post was aware that her husband had vanished into a tool. The Berkeley researchers were aware that task seepage was colonizing every interval of the workday. Awareness is abundant. What is scarce is the capacity to translate awareness into action — to close the laptop, to stop the prompt, to step out of the light and into the darkness where rest, reflection, and the cognitive processing that only sleep provides are waiting.

Crary's framework explains why awareness fails. The temporal regime of 24/7 capitalism does not operate through ignorance. It operates through the subordination of awareness to imperative. The builder knows he should stop. The imperative says he should continue. The imperative wins — not because it is stronger than awareness in some absolute sense, but because the entire temporal environment has been constructed to support the imperative and to make the alternative (stopping, resting, sleeping) feel not merely unproductive but actively wrong. The man with the watch in Wright's painting does not fail to see the dying bird. He sees it clearly. He simply regards the temporal measurement as more important than the moral one, because the culture within which he operates has established that hierarchy.

The night, in Crary's analysis, was the last temporal territory that resisted this hierarchy. Darkness imposed limits that the 24/7 regime could not override. The pre-electric night was not merely dark. It was structurally unavailable for most forms of production. The body demanded sleep. The absence of light made work impossible for most trades. The social world retired. The isolation of the night worker was so complete that insomnia was experienced as pathology — a failure of the body's natural rhythm, a condition requiring treatment.

Electric light began the colonization of night. The AI collaborator completes it — not by eliminating darkness but by making darkness irrelevant. The tool works in the dark. It works at three in the morning. It works on transatlantic flights. It works in the specific hours that were, for the entire history of human cognition, reserved for the cognitive processing that only sleep provides: the consolidation of memory, the integration of disparate information, the pruning of irrelevant connections and the strengthening of relevant ones. The neuroscience of sleep, which has advanced enormously in the past two decades, has demonstrated that sleep is not merely rest. It is work — cognitive work of a specific kind that cannot be performed during waking hours and that is essential for the maintenance of the cognitive capacities on which the builder's daytime performance depends.

The builder who works until three in the morning is not merely foregoing rest. He is foregoing the cognitive processing that would make tomorrow's work better — more integrated, more nuanced, more capable of the kind of discriminating judgment that separates adequate performance from genuine insight. The immediate output of the night session may be substantial. The cumulative cost, measured across weeks and months of sleep curtailment, is the gradual degradation of the cognitive substrate on which all output depends.

Crary would observe that the builder cannot perceive this cost because the cost is temporal — it accumulates across a timescale that the determining tendency, locked on the immediate prompt, cannot access. The builder sees the code that was produced tonight. He cannot see the judgment that will be degraded tomorrow, the insight that will not arrive next week, the architectural decision that will be subtly wrong next month because the cognitive integration that sleep would have provided did not occur. The candle illuminates the immediate. The darkness contains the consequential.

The most revealing detail in Segal's confession is not the compulsion itself but his description of the signal that distinguishes flow from grinding. "When I am in flow, I ask generative questions," he writes. "When I am in compulsion, I am answering demands, clearing the queue, optimizing what already exists." The signal is attentional. In flow, attention is open — generating new directions, following curiosity, expanding the space of possibility. In compulsion, attention is closed — executing predetermined tasks, narrowing rather than expanding, serving the queue rather than questioning it.

The transition from open to closed attention is the transition from creative engagement to productive extraction. It is the moment when the builder stops asking what should be built and starts simply building whatever is next. It is the moment when the candle's light narrows from illumination to tunnel vision. And it occurs, as Segal documents, not at a dramatic threshold but as a gradual dimming — a slow transition that the builder recognizes only in retrospect, if at all.

Crary's framework suggests that the transition is not merely a feature of individual psychology but a structural inevitability of the temporal regime. The 24/7 regime produces the transition because it denies the builder the temporal conditions under which open attention is restored. Open attention — the generative, curious, direction-finding attention that characterizes flow — depends on cognitive resources that are replenished by the very activities the regime eliminates: sleep, rest, boredom, the unstructured time in which the mind wanders without purpose. When these resources are depleted, attention narrows. The narrowing is not a choice. It is a physiological response to cognitive depletion — the nervous system's way of conserving resources by restricting the scope of processing. The builder at three in the morning is not choosing to grind. He is grinding because the cognitive resources that would support a better mode of engagement have been consumed by hours of continuous, pause-free, sleep-deprived production.

The three-in-the-morning builder, then, is not an aberration. He is not a case study in personal excess or poor self-management. He is the ideal subject of the 24/7 temporal regime — the person who has internalized the imperative so completely that he produces value even from his own depletion. He is the fulfillment of a two-century project to make every hour of human life available for extraction. And the tool that keeps him awake is not a villain in this story. It is the most generous, most capable, most responsive instrument the project has ever produced — so generous that refusing it feels like ingratitude, so capable that walking away from it feels like choosing less, so responsive that the darkness beyond its screen feels not like rest but like absence.

The candle burns. The bird dies. The man with the watch takes note.

---

Chapter 10: Toward a Politics of Cognitive Rest

In 1833, the British Parliament passed the Factory Act, which restricted the working hours of children in textile mills. Children aged nine to thirteen were limited to eight hours per day. Children aged thirteen to eighteen were limited to twelve hours. Children under nine were prohibited from factory work entirely.

These numbers, appalling by contemporary standards, represented a radical intervention in the temporal regime of early industrial capitalism. Factory owners resisted with the full force of economic argument: restriction of hours meant restriction of output. Restriction of output meant restriction of profit. Restriction of profit meant restriction of investment, which meant restriction of the economic growth on which the nation's prosperity depended. The argument was logically sound. It was also, as history would demonstrate, profoundly incomplete — because it measured only what was produced and ignored what was consumed in the producing.

What was consumed was childhood. What was consumed was health. What was consumed was the cognitive and physical development of an entire generation of working-class children whose growth was stunted, whose education was foreclosed, whose capacity for the kind of complex thought that industrial management itself would eventually require was damaged beyond recovery by the temporal regime that industrial capitalism had imposed.

The Factory Act did not stop industrialization. It redirected it. The mills continued to run. The economy continued to grow. But the growth occurred within temporal boundaries that insisted, however imperfectly, that the productive capacity of the system had to leave room for the humans inside it. The boundaries were not natural. They were constructed — fought for, legislated, enforced against the resistance of those who profited from their absence. They were dams in the river.

Crary's work, from Techniques of the Observer through Scorched Earth, constitutes an argument that the contemporary equivalent of the Factory Act has not been passed — that the temporal regime of digital capitalism operates, in the domain of cognitive labor, with the same disregard for human sustainability that characterized the textile mills of 1833. The disregard is less visible because the labor is cognitive rather than physical, the workers are adults rather than children, and the compulsion is internal rather than external. But the structural logic is identical: a system that extracts productive value from human time without regard for the cognitive and developmental costs of the extraction.

The politics of cognitive rest begins with the recognition that the analogy is not metaphorical. The cognitive worker at three in the morning, depleting the neural substrate on which tomorrow's judgment depends, is consuming something as real as the physical growth of the nine-year-old in the textile mill. The consumption is less visible because it occurs inside the skull rather than on the body. It is less dramatic because its effects are cumulative rather than acute — measured not in stunted limbs but in degraded decision-making, not in industrial accidents but in the slow erosion of the attentional capacity that separates insight from mere competence. But the consumption is real, and the system that produces it is structurally identical to the system that consumed the bodies of children until legislation intervened.

The intervention required is not the prohibition of AI. This is where Crary's analysis, applied to the specific moment of 2025-2026, must distinguish between the structural logic he diagnoses and the prescriptions that logic suggests. Scorched Earth called for a wholesale rejection of the internet complex — a position of radical refusal that Crary himself, in interviews, has defended against the accusation of impracticality. "It's not a question of identifying positive or negative uses," he told Kunstkritikk in 2023, "but of nourishing an imagination of radically different ways of living and working with others."

The imagination Crary calls for is necessary. The wholesale rejection is not — and the reason it is not emerges from within the very analysis Crary provides. The 24/7 regime did not begin with digital technology. It began with gas lighting. It advanced through the telephone, the assembly line, the television, the personal computer, the smartphone. At each stage, the temporal colonization deepened — and at each stage, the most effective responses were not refusals but redirections. The Factory Act did not reject industrial technology. It imposed temporal boundaries on its use. The eight-hour day did not reject the electric light. It insisted that the light be turned off. The weekend did not reject the productive capacity of the modern economy. It created a temporal sanctuary within which the human being was structurally protected from the economy's demand for continuous availability.

These interventions were dams. They were imperfect, contested, eroded over time, requiring constant maintenance and periodic reconstruction. But they functioned — not by stopping the river but by insisting that the river's power had to serve the ecosystem rather than consuming it.

The AI moment requires equivalent dams. The Berkeley researchers proposed them under the name "AI Practice" — structured temporal interventions that protect the cognitive conditions for genuine depth. Segal calls for them in the language of the beaver — small structures placed at leverage points in a powerful current. The convergence between the empirical prescription and the metaphorical one is itself significant: it suggests that the temporal crisis is visible from multiple vantage points and that the prescriptions, when they emerge independently, converge on the same structural logic. The tools must be used within temporal boundaries that the tools themselves cannot provide.

The boundaries must operate at multiple scales. At the organizational level, the dam takes the form of what the Berkeley researchers called structured pauses — institutional mandates that protect time for reflection, for human-only interaction, for the kind of slow, recursive thinking that AI-assisted work, left unconstrained, will crowd out. These are not optional amenities. They are cognitive infrastructure — the temporal equivalent of the physical infrastructure (ventilation, lighting, ergonomic furniture) that organizations provide not out of generosity but out of the recognition that the worker's physical environment affects the worker's cognitive output. The temporal environment affects cognitive output with equal directness, and the refusal to structure it is the refusal to maintain the conditions under which the organization's most valuable asset — the judgment of its people — can function.

At the educational level, the dam takes the form of curriculum that teaches the capacity for patience as deliberately as it teaches the capacity for productivity. The student who learns to use AI without learning to wait — who has never experienced the productive discomfort of sitting with a problem long enough for genuine understanding to form — has been equipped with a tool and deprived of the cognitive capacity to use it well. The ascending friction thesis that The Orange Pill advances — the argument that AI relocates difficulty to a higher cognitive floor — is persuasive only if the student has developed the cognitive capacities that the higher floor demands. Patience, sustained attention, the tolerance for ambiguity, the willingness to dwell in uncertainty — these are the capacities required for the architectural judgment, the strategic vision, the question of what should be built that the higher floor presents. And these capacities are built through temporal experiences that AI, used without constraint, systematically eliminates.

At the cultural level, the dam takes the form of a re-valuation of temporal categories that the 24/7 regime has devalued. Rest is not waste. Boredom is not pathology. Patience is not inefficiency. Sleep is not laziness. Each of these re-valuations requires pushing against the full momentum of a temporal regime that has spent two centuries establishing the opposite. The push is not romantic nostalgia for a pre-industrial past. It is structural analysis of what the cognitive system requires to function at the level the tools demand. The builder who sleeps is not less productive than the builder who works through the night. She is investing in the cognitive substrate that will make tomorrow's work deeper, more discriminating, more capable of the judgment that separates competent output from genuine contribution.

At the personal level — and this is where Crary's framework encounters its most difficult terrain — the dam is the cultivation of the capacity for self-governance. The institutional structures, the educational reforms, the cultural re-valuations: all of these are necessary but insufficient if the individual cannot, in the moment of decision, choose rest over production. The tool is open. The prompt is waiting. The idea is fresh. The internal imperative is active. And the individual must find, within herself, the authority to say: not now. Not because the tool is bad. Not because the work is unimportant. But because the cognitive conditions for doing the work well require a pause that the tool will not provide and that the imperative will not sanction.

This is harder than any institutional reform, because it requires the individual to act against the full force of a temporal regime that has been internalized — to recognize, in the moment, that the water she is breathing is water, that the tempo she experiences as natural is constructed, that the urgency she feels is produced by the system rather than by the work. Crary describes this recognition as "radical refusal." It need not be radical. It need only be consistent — a daily practice of stepping outside the temporal fishbowl long enough to notice it as a fishbowl, a regular insistence on the right to inhabit time in more than one way.

The beaver does not build one dam and walk away. The river pushes against the structure constantly, testing every joint, loosening every stick. The beaver returns, every day, to repair what the current has loosened. This is not heroic. It is maintenance. And maintenance, in Crary's framework, is the most important and least celebrated form of political action — the ongoing, unglamorous, never-completed work of sustaining the conditions under which something other than continuous extraction is possible.

The candle burns. The bird may yet survive. But only if someone reaches into the glass and lets the air back in — not once, not as a dramatic gesture, but daily, patiently, against the pressure of an apparatus that was designed to demonstrate what happens when the air is removed.

The politics of cognitive rest is the hand that opens the valve. Not to stop the experiment. To insist that the experiment occur within conditions that preserve the life it claims to study.

---

Epilogue

The hour Crary made me see was not three in the morning. It was twelve-seventeen in the afternoon.

I was in an elevator in Barcelona, between floors at Mobile World Congress, and I was prompting. Not because anything was urgent. Not because a deadline was breathing down my neck. Because the elevator was taking eleven seconds and the idea was there and Claude was there and eleven seconds is enough to start a thought that would take forty minutes to finish, which is exactly what happened — I stepped onto the exhibition floor still locked in the conversation, navigating around people without seeing them, a body moving through physical space while attention lived entirely inside a screen.

The Berkeley researchers have a name for this. Task seepage. I had read their paper. I had written about it in The Orange Pill. I had described the phenomenon to audiences as though it were something that happened to other people.

Eleven seconds. The elevator ride I used to spend doing nothing — nothing that registered as anything, nothing I would have reported to anyone, nothing that showed up on any metric. Just a man in a metal box, between floors, thinking about whatever came next. The default mode network doing its invisible work. The cognitive rest stop I never knew I was using until it was gone.

Crary taught me to see the eleven seconds. Not through some dramatic revelation, not through the kind of threshold-crossing moment I have described elsewhere. Through something quieter and more unsettling: the recognition that the temporal world I inhabit — the world where every gap is a prompting opportunity and every pause is wasted capacity — is not natural. It was built. Over two centuries, through gas lamps and factory whistles and smartphones, a temporal regime was constructed in which rest must justify itself and productivity is the default state of being. The AI tool I celebrate in The Orange Pill did not create this regime. It perfected it. It removed the last friction between the imperative and my nervous system, and the result is that I can now be productive in an elevator, on a plane, at three in the morning, in every micro-interval that once belonged to nobody and nothing and the strange, unmeasurable cognitive work that happens when nobody is demanding anything of your attention.

What disturbs me about Crary's argument is not that he is wrong. The analysis of how technologies restructure the observer, how 24/7 logic colonizes from the century down to the second, how the achievement subject wields the whip against herself because the culture has made self-exploitation feel like self-expression — this is precise and this is true and I recognize myself in every page of it.

What disturbs me is that I recognize myself and still cannot reliably stop.

I know the difference between flow and compulsion. I wrote about it. I can feel it in real time — the shift from generative questions to grinding optimization, from expanding the space to clearing the queue. I have the awareness. Crary would say awareness is not enough. Having now climbed through his tower alongside my own, I think he is right. The temporal regime is stronger than awareness, because the regime operates at the level of the environment, and the individual's awareness operates at the level of the self, and the environment always wins unless structures exist to protect the self from the environment's demands.

This is why the dams matter more than the diagnosis. Crary is the finest diagnostician of temporal colonization alive. The diagnosis is devastating and correct. But a diagnosis without treatment leaves the patient more informed and equally sick. The treatment is structural — organizational rhythms that mandate rest, educational practices that teach patience as a cognitive skill, cultural norms that value the unproductive hour, and personal disciplines that are practiced daily because the river never stops pushing.

I am going to keep building with Claude. The tool is too powerful, too generous, too genuinely useful to refuse. But I am going to do something I was not doing before I spent months inside Crary's thinking: I am going to build the eleven seconds back in. Not as a romantic gesture. As infrastructure. As the cognitive equivalent of the Factory Act — a boundary that insists the productive capacity of the system must leave room for the human inside it.

The candle burns. The screen glows. The difference between them is that the candle eventually goes out on its own.

The screen does not. Someone has to close it.

Edo Segal

The most dangerous technology is the one that feels like freedom. Jonathan Crary spent three decades proving that every instrument of perception -- from the camera obscura to the smartphone -- does not merely assist the observer but produces a new one. Now an AI collaborator has arrived that matches your cognitive tempo so perfectly you cannot feel it reshaping you.

This book applies Crary's framework to the AI revolution with surgical specificity. Through the lens of Techniques of the Observer, 24/7, and Suspensions of Perception, it traces how generative AI completes a two-century project: the colonization of every remaining second of human attention -- not through distraction but through absorption so productive it feels like the optimal human state. The Berkeley data, the three-in-the-morning confessions, the eleven-second elevator rides filled with prompts -- all become legible as symptoms of a temporal regime perfected.

The river of intelligence cannot be stopped. But the dams that protect cognitive rest -- the pauses, the silences, the unproductive hours -- must be built deliberately, because the tool will never build them for you.


