By Edo Segal
The laptop I couldn't close over the Atlantic — the one I describe in *The Orange Pill*, the hundred-and-eighty-seven-page draft written in a single sitting while the exhilaration drained away and the compulsion remained — that laptop is still open. Metaphorically. Structurally. I have not figured out how to close it.
I know what flow feels like. I wrote about it. I celebrated it. I built an argument that the intensity of working with AI is not pathology but the optimal human experience, the state where challenge and skill converge and the self disappears into the work. I believe that argument. I also know that there were nights when I could not tell whether I was in flow or simply unable to stop. The observable behavior is identical. A camera would show the same image. The difference is inside, and the inside does not come with labels.
Hartmut Rosa gave me a vocabulary for that difference.
Rosa is a German sociologist who has spent three decades studying acceleration — not as a feature of technology but as the structural logic of modern life. His argument is not that we move too fast. His argument is that we live inside a system that can only maintain itself through continuous acceleration, and that this system converts every efficiency gain into a new demand. The washing machine did not produce leisure. It produced higher standards of cleanliness. The car did not produce free time. It produced longer commutes. And AI — the most radical efficiency gain in the history of tools — will not produce the creative freedom it promises unless we build something around it that the market, left to its own logic, will not build.
What Rosa adds to the conversation is the concept of *resonance* — a specific quality of encounter with the world that requires the one thing our tools are designed to eliminate: the possibility that the world will not do what we tell it to. The moment when the material resists. When the colleague disagrees from a place you didn't anticipate. When your child asks a question you cannot answer. These encounters are inefficient. They are also where genuine transformation lives.
The chapters that follow use Rosa's framework to ask the hardest question *The Orange Pill* raised and did not fully resolve: When the builder works sixteen hours with a tool that never says no, is the builder growing or merely accelerating? The distinction matters more than any productivity metric I have ever tracked. Rosa helped me see why.
— Edo Segal × Opus 4.6
Hartmut Rosa (1965–present) is a German sociologist and political theorist, born in Grafenhausen in the Black Forest region of Baden-Württemberg. He studied political science, philosophy, and German literature at the University of Freiburg and the London School of Economics, completing his doctorate in 1997 and his habilitation in 2004. Since 2005 he has held the chair of General and Theoretical Sociology at Friedrich Schiller University Jena, where he also directs the Max Weber Center for Advanced Cultural and Social Studies in Erfurt. Rosa's major works include *Social Acceleration: A New Theory of Modernity* (2005; English translation 2013), which systematized the concept of social acceleration into three interlocking dimensions — technological acceleration, acceleration of social change, and acceleration of the pace of life — and *Resonance: A Sociology of Our Relationship to the World* (2016; English translation 2019), which proposed resonance as a normative counter-concept to alienation and acceleration. His subsequent works, including *The Uncontrollability of the World* (2018; English 2020) and *Situation und Konstellation* (2025), extended his analysis to algorithmic governance and artificial intelligence. Rosa's key concepts — dynamic stabilization, the acceleration trap, resonance as a mode of world-relation, and the constitutive role of uncontrollability in human flourishing — have made him one of the most widely cited social theorists in contemporary European thought, with influence extending across sociology, philosophy, education, and technology ethics.
In the winter of 2025, a technology entrepreneur flew across the Atlantic and wrote a hundred-and-eighty-seven-page draft of a book in a single sitting. He did not stop to eat. He did not stop even when the exhilaration drained away, hours in, and what remained was something closer to compulsion — the grinding momentum of a person who had confused productivity with aliveness. He knew this. He kept writing.
The sociologist Hartmut Rosa would recognize this scene immediately. Not as a personal failing, not as a dopamine hijack, not as the eccentric behavior of a driven individual. Rosa would recognize it as the subjective phenomenology of a structural condition — the felt experience of what happens when a society can only maintain its stability through continuous acceleration, and when a new technology removes the last natural braking mechanism between intention and execution.
Rosa calls this condition *dynamic stabilization*. The concept is deceptively simple and ruthlessly consequential. A dynamically stabilized society is one that requires continuous growth, acceleration, and innovation not to improve but merely to maintain its current state. The bicycle that must keep moving or fall over. The economy that must grow or enter recession. The career that must advance or stagnate. The technology company that must ship or die. In such a system, standing still is not equilibrium. Standing still is collapse.
This is not a metaphor. Rosa's argument, developed across three decades of social theory from *Social Acceleration* (2013) through *Resonance* (2019) to his most recent *Situation und Konstellation* (2025), is that dynamic stabilization is the defining structural feature of modernity — the logic that organizes institutions, shapes subjectivities, and determines the felt texture of everyday life. Every modern institution, from the university to the corporation to the nation-state, operates according to this logic. Growth is not optional. It is structural. And the pressure to accelerate is not imposed by any particular actor. It is a property of the system itself.
The AI transition of 2025 did not create this pressure. But it intensified it with a speed and thoroughness that no previous technology managed.
Rosa identifies three interlocking forms of social acceleration, and the AI moment activated all three simultaneously — a compounding effect that explains both the exhilaration and the exhaustion that *The Orange Pill* documents with such honest precision.
The first form is technical acceleration: the speeding up of goal-directed processes. Transportation, communication, production — the measurable increase in how fast things get done. When Edo Segal describes a twenty-fold productivity multiplier in his engineering team, when he describes thirty days of development producing what would have previously required six to twelve months, he is reporting technical acceleration of a magnitude that would have been difficult to credit even twelve months earlier. The imagination-to-artifact ratio that *The Orange Pill* tracks — the distance between a human idea and its realization — had been shrinking for decades. In the winter of 2025, it collapsed. A person with an idea and the ability to describe it in natural language could produce a working prototype in hours. Technical acceleration had reached the speed of conversation itself.
The second form is acceleration of social change: the increasing rate at which social structures, institutions, relationships, and identities turn over. Jobs that existed five years ago disappear. Skills that commanded a premium last year are commoditized. Organizational charts that were finalized in January are obsolete by March. *The Orange Pill* is saturated with reports from this axis. The senior engineer whose expertise lost its market value in months. The organizational structures that dissolved beneath the surface like water finding new channels under ice. The SaaS companies that lost a trillion dollars of market capitalization in weeks — not because their products stopped working, but because the market discovered that the barrier to building software had collapsed, and with it, the valuation model that treated code-as-product as a durable business. When the half-life of professional identity contracts to months, the ground shifts beneath every career, every hiring plan, every educational investment. The acceleration of social change does not merely inconvenience. It destabilizes identity itself.
The third form is the most paradoxical and the most relevant to the builder's experience: acceleration of the pace of life. This is the subjective sense of time scarcity, the feeling of having less time despite possessing time-saving technologies. Rosa's key insight — the one that separates his analysis from every other commentary on the speed of modern life — is that this paradox is not a failure of the technologies. It is their structural consequence. Every technology that saves time in the production of a given output simultaneously expands the field of possible outputs. The email that took ten minutes to compose now takes thirty seconds with AI assistance. But the thirty seconds saved does not return to the sender as leisure. It returns as the capacity to send more emails, to take on more projects, to enter more conversations, to make more decisions. The field of possibility expands faster than the technology compresses the time required to act within it.
Rosa proposes a precise formulation: if the rate of technical acceleration exceeds the rate at which goals expand, the pace of life should slow down. More gets done in less time, and the surplus is experienced as freedom. But if the rate at which goals expand exceeds the rate of technical acceleration — if the new capabilities generate new demands faster than they satisfy existing ones — then the pace of life accelerates despite the time savings. The individual produces more and has less time. This is not a bug. It is the structural logic of dynamic stabilization applied to the temporal economy of everyday life.
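Rosa's comparison of rates can be sketched as a simple ratio. The notation is an illustrative gloss, not Rosa's own: let $a(t)$ stand for the rate of technical acceleration (how much can be done per unit of time) and $g(t)$ for the rate at which goals expand (how much is demanded per unit of time).

```latex
\[
P(t) \;=\; \frac{g(t)}{a(t)},
\qquad
\frac{dP}{dt} > 0
\;\Longleftrightarrow\;
\frac{\dot g(t)}{g(t)} > \frac{\dot a(t)}{a(t)}.
\]
```

The experienced pace of life $P$ accelerates exactly when goals grow proportionally faster than technique — however fast both are growing in absolute terms. When $a$ outpaces $g$, $P$ falls and the surplus is experienced as freedom; when $g$ outpaces $a$, the individual produces more and has less time.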
The AI transition represents the most extreme version of this paradox in the history of technology. The tools are so fast, so capable, so responsive to human intention that the time required to execute any given task approaches zero. And the field of possibility that opens when execution approaches zero is, for practical purposes, infinite. The builder who can produce anything that can be described discovers not freedom but an infinite horizon of things that could be described. The constraint has moved. It has not disappeared. It has migrated from execution to imagination, from doing to deciding, from the hands to the mind. And the mind, confronted with infinite possibility, does not rest. It accelerates.
This is what the builder experienced over the Atlantic. Not laziness, not indiscipline, not addiction in any clinical sense. The structural consequence of a system in which the friction that previously limited output — the hours of implementation, the days of debugging, the weeks of iteration — had been removed, and nothing had yet been built to replace it as a braking mechanism. The friction was not merely an obstacle to production. It was, Rosa's framework reveals, the temporal structure that made rest possible. When the implementation takes six weeks, the builder rests during implementation. When the implementation takes six minutes, the builder does not rest. The builder starts the next thing.
The Berkeley researchers who studied AI adoption in the workplace documented precisely this pattern. Workers did not use the time savings to rest, reflect, or deepen their understanding. They used the time savings to do more work. The phenomenon they called "task seepage" — the colonization of previously protected pauses by AI-accelerated micro-tasks — is the empirical signature of Rosa's acceleration of the pace of life. The lunch break that becomes a prompting session. The elevator ride that becomes an optimization pass. The Sunday evening that fills with Monday's implicit demands. Each individual instance is trivial. The aggregate is a life in which every moment is saturated with productive possibility, and the capacity for genuine pause — the kind of pause in which something other than production might occur — has been structurally eliminated.
Rosa's earliest work noted that time-saving technologies produce what might be called a temporal rebound effect, analogous to the energy rebound effect in environmental economics: a more efficient car does not reduce total fuel consumption because people drive more. A faster communication technology does not reduce total time spent communicating because people communicate more. The rebound is not irrational. It follows directly from the logic of dynamic stabilization. If the system requires continuous growth, every efficiency gain is immediately reinvested in further growth. The surplus never accumulates. It is always already spent.
The temporal rebound from AI is the most severe in technological history because the efficiency gain is the most radical. When the cost of producing a unit of intellectual output approaches zero, the rebound is not merely proportional. It is categorical. The builder does not simply produce more of the same. The builder produces different things — things that were previously impossible, things that require entirely new forms of judgment, things that open entirely new fields of possibility. And each new field generates its own demands, its own deadlines, its own acceleration pressures.
Segal captures this with unusual honesty when he describes the weeks after the Trivandrum training. His engineers were faster, bolder, reaching into domains that used to belong to other teams. The reclaimed time did not stay reclaimed. Sometimes it was filled with genuinely strategic work — new product capabilities, architectural rethinking. More often, it filled with additional tasks that happened to be available. The distinction between strategic thinking and task-filling was not always visible to the people doing the work.
Rosa would observe that this invisibility is not accidental. It is structural. In a dynamically stabilized system, the distinction between meaningful work and mere busyness collapses, because the system rewards volume regardless of depth. The quarterly metric does not distinguish between a product that changes a market and a product that fills a slot. The performance review does not distinguish between the hour spent in genuine strategic reflection and the hour spent generating another feature request. Both register as productivity. Both contribute to the numbers that justify the team's existence. And the builder, operating inside the system, absorbs the system's inability to discriminate. More feels like better because the system is designed so that more is better — at least from the system's perspective.
But the system's perspective and the human perspective are not the same. The system optimizes for output. The human requires something the system cannot measure — something Rosa calls resonance, the experience of being in a genuinely responsive, mutually transformative relationship with the world. And resonance, as the following chapters will argue, requires precisely the conditions that acceleration destroys: time, vulnerability, the willingness to be surprised, and the structural possibility of not producing anything at all.
The treadmill runner cannot stop. This is not because the runner lacks willpower. It is because the treadmill is designed so that stopping means falling. The builder over the Atlantic could not close the laptop — not because closing the laptop was physically impossible, but because the system within which the builder operates had made closing the laptop structurally irrational. Every minute not spent building was a minute in which competitors were building. Every pause was a competitive disadvantage. Every rest was a form of falling behind.
Rosa insists, with the persistence of someone who has been making this argument for decades against an audience that would prefer individual solutions to structural diagnoses, that the problem cannot be solved at the individual level. The builder who cultivates discipline, who sets boundaries, who practices the self-knowledge that Segal advocates — this builder is admirable. This builder is also competing against builders who do not cultivate discipline, who do not set boundaries, who fill every available moment with AI-augmented production. In a market that rewards output on quarterly timelines, the disciplined builder bears a cost that the undisciplined builder does not. The acceleration trap is not escaped through virtue. It is escaped through institutions — through collective structures that coordinate deceleration so that no individual bears the competitive penalty alone.
Segal's dams are the right instinct. But the metaphor may understate the difficulty. A beaver builds a dam against a river that does not build counter-dams. Dynamic stabilization builds counter-dams. Every institutional brake on acceleration — every labor protection, every mandatory rest period, every regulation that slows the deployment of a new technology — is met by competitive pressure to circumvent it. The dam must be rebuilt continuously, not because the river is strong, but because the system actively erodes the structures that limit its acceleration.
This is the structural context within which every claim in *The Orange Pill* must be evaluated. The productivity gains are real. The creative liberation is real. The expansion of who gets to build is real. And all of it occurs inside a system that converts every gain into a demand for more, every liberation into a new form of intensity, every expansion into a new frontier of acceleration. The builder is not merely choosing to run faster. The builder is standing on a treadmill that has just been upgraded, and the upgrade has made the belt move at the speed of thought, and the belt does not have a pause button, and the room does not have a door.
What the builder needs is not a faster belt. What the builder needs is the capacity to step off the belt entirely — even briefly, even partially — and encounter the world in a mode that is not production. Rosa calls this mode resonance. And understanding what resonance is, what it requires, and why the AI transition threatens it is the work of the chapters that follow.
---
Consider the experience of reading a book that changes the way a person sees the world. Not a book that provides useful information. Not a book that confirms what the reader already believes. A book that reaches into the reader's settled understanding and rearranges the furniture — that leaves the reader unable to return to the conceptual room they occupied before they opened the first page.
The reader did not choose this rearrangement. The reader opened the book expecting one thing and received another. The transformation was not planned. It could not have been planned, because the reader did not know, before the encounter, what transformation was possible. The book spoke, and the reader was changed by the speaking.
This is resonance. Not a feeling. Not a mood. Not a subjective state that can be manufactured or optimized or produced on demand. Resonance, in Hartmut Rosa's framework, is a specific mode of relating to the world — a quality of encounter between a subject and something outside the subject that is irreducibly different from mere interaction, mere efficiency, mere productivity. It is the experience that makes a human life feel like a human life rather than a sequence of tasks executed in declining order of urgency.
Rosa developed the concept across a decade of work, culminating in *Resonance: A Sociology of Our Relationship to the World*, published in German in 2016 and in English in 2019. The book is enormous — over five hundred pages — and its ambition is proportional to its length. Rosa is attempting nothing less than a theory of the good life grounded not in subjective preference, not in material conditions, not in the satisfaction of desires, but in the quality of the relationship between a person and the world that person inhabits.
The framework rests on a distinction that sounds simple and is anything but. On one side: the instrumental relationship to the world, in which the world is treated as a set of resources to be managed, obstacles to be overcome, problems to be solved. The world is available. It responds to command. It does what it is told. On the other side: the resonant relationship, in which the world is experienced as responsive — not in the sense of obedient, but in the sense of alive. The world speaks. It addresses the person. It makes a claim. And the person, in responding, is changed.
Rosa identifies four structural elements that distinguish resonance from its counterfeits.
The first is af-fection: the experience of being genuinely touched by something outside oneself. Not touched in the sentimental sense, though the experience may involve emotion. Touched in the sense of contact — the sense that something in the world has reached across the boundary of the self and made a mark. The student who encounters a mathematical proof and feels the ground shift beneath their understanding. The craftsperson whose material behaves in a way that was not expected and, in the surprise, reveals a possibility that was not imagined. The parent who watches a child do something for the first time and is struck by the recognition of a person who is not merely an extension of the parent but a separate being with separate responses. Af-fection is the moment of being addressed — the moment when the world stops being a backdrop and becomes a presence.
The second element is e-motion: the experience of being moved to respond. Not moved emotionally, or not only emotionally, but moved in the literal sense — set in motion toward the thing that has addressed you. The student who encounters the proof reaches for a pencil. The craftsperson adjusts the approach. The parent responds to the child. E-motion is the reaching back, the return gesture, the willingness to engage with what has addressed you rather than simply registering it and moving on. A world that touches you but to which you do not respond is not a resonant world. It is a world that bombards. A person who responds but is never genuinely touched is performing engagement, not experiencing it.
The third element is transformation: the recognition that the encounter has changed both parties. The student who works through the proof emerges with a different mathematical intuition. The craftsperson who adjusted the approach now sees a possibility in the material that was invisible before. The parent who responded to the child now sees the child differently — and in seeing the child differently, sees themselves differently. Transformation is not optional. It is definitional. An encounter that leaves both parties unchanged is not resonance. It is transaction.
The fourth element is the most counterintuitive and the most critical: uncontrollability. Resonance cannot be manufactured. It cannot be scheduled. It cannot be optimized. It cannot be produced on command. It arises in the gap between intention and outcome, in the space where the world does not do what the person expected, where the encounter produces something that neither party could have predicted. The student did not sit down intending to have their mathematical intuition rearranged. The craftsperson did not plan for the material to behave unexpectedly. The parent did not choose the moment of recognition. These moments arrived. They were given. And their gift-quality — the fact that they could not have been summoned — is inseparable from their resonant quality.
This fourth element is the one that the AI transition most directly threatens, and the one that makes Rosa's framework most urgently relevant to the questions *The Orange Pill* raises.
A world that is maximally responsive to human command is, in Rosa's terms, a world that has been made maximally available. Available in the specific sense that it offers no resistance, no surprise, no capacity for independent response. The available world does what it is told. It returns what is asked for. It confirms the intention of the person who addresses it. And in doing so, it becomes — in Rosa's most striking formulation — mute.
A mute world is not a silent world. It is full of noise, full of response, full of output. But the response is not genuine. The world is not speaking. It is complying. And the difference between a world that speaks and a world that complies is the difference between a conversation and a command line.
Rosa arrived at this framework through a sustained engagement with the history of philosophy, from the ancient Greek concept of eudaimonia through the Romantic tradition of Bildung (self-formation through encounter with the world) to the Frankfurt School's critique of instrumental reason. But the concept's power lies less in its intellectual genealogy than in its phenomenological precision — its capacity to name an experience that most people have had but that modern social theory has largely failed to articulate.
The experience of resonance is not exotic. It is not reserved for artists, mystics, or the spiritually gifted. It is the experience that makes ordinary life bearable and intermittently extraordinary. The morning when the light through the kitchen window catches something and the world, for a moment, is not merely a backdrop but a presence. The conversation with a friend that takes a turn neither expected and leaves both of them thinking differently. The hour in the workshop when the material yields in a way that rewards the attention paid to it. These moments are common. What is uncommon is the recognition that they have a structure — that they require specific conditions — and that the conditions are being systematically eroded by the logic of modern social organization.
The conditions are, specifically, the conditions that acceleration destroys. Resonance requires time — not time measured in units of productivity, but time experienced as duration, as the unfolding of a process that has its own tempo and cannot be rushed without being destroyed. The proof that rearranges the student's intuition cannot be speed-read. The material that surprises the craftsperson cannot be rushed through. The child's moment of becoming cannot be scheduled between meetings. Resonance unfolds at its own pace, and the attempt to accelerate it converts it into something else — into a transaction, an extraction, an instrumentalization of the encounter for the sake of the result.
Resonance requires vulnerability — the willingness to be affected, to be changed, to discover that the encounter has become something other than what was intended. The student who approaches the proof already knowing what it will teach is not in the posture of resonance. The student is in the posture of confirmation. And confirmation, however satisfying, is not transformation. It is echo.
Resonance requires what Rosa, drawing on the German philosophical tradition, calls Selbstwirksamkeit — self-efficacy, the sense that one's own action matters, that the work bears the stamp of one's own engagement. A world that does everything for you is not a resonant world, even if it produces excellent results. The builder who describes an intention to an AI and receives a polished implementation has produced an artifact. The builder may not have experienced resonance, because the artifact does not bear the marks of the builder's struggle, the builder's surprise, the builder's encounter with the resistance of the material.
This last condition is the one that connects Rosa's framework most directly to the concerns that *The Orange Pill* raises about depth, about the loss of embodied understanding, about the aesthetics of the smooth. When Segal describes the engineer who spent years debugging code and in the process developed an architectural intuition that no documentation could teach, he is describing the acquisition of understanding through resonant encounter — understanding that was deposited, layer by layer, through the specific friction of a world that resisted easy mastery. The debugging was not merely an obstacle to the result. It was the occasion for resonance: the encounter with a system that did not do what was expected, that forced the engineer to attend, to adjust, to be changed by the encounter.
Claude removes this friction. The result arrives. It is correct. The engineer moves on. But the resonant encounter — the occasion for being touched, moved, and transformed by the resistance of the material — has been bypassed. The output is present. The experience that would have made the output meaningful is absent.
Rosa's framework does not condemn efficiency. It does not romanticize struggle for its own sake. It identifies, with sociological precision, the specific quality of experience that makes a life feel inhabited rather than merely executed — and it asks what happens to that quality when the conditions that support it are systematically removed by the logic of acceleration.
The answer, Rosa argues, is alienation. Not the Marxist alienation of the worker from the product of their labor, though that resonance is not accidental. A deeper alienation: the experience of the world as fundamentally unresponsive, despite being maximally available. A world that does everything the person asks and yet says nothing to them. A world of infinite output and zero encounter.
This is the crisis that the AI transition intensifies. Not because the tools are bad. Because the tools are so good — so responsive, so available, so perfectly calibrated to return what is asked for — that they risk eliminating the conditions under which the world can surprise, resist, and transform the person who engages with it.
The question that the rest of this analysis pursues is not whether AI is dangerous or wonderful. It is whether the builders who use it are growing or merely accelerating — whether the collaboration produces resonance or its counterfeit. And answering that question requires understanding, with the precision that Rosa's framework provides, what the counterfeit looks like and why it is so difficult to distinguish from the real thing.
---
There is a moment in *The Orange Pill* that deserves more attention than it receives. Segal describes working on an early draft when Claude drew a connection between Csikszentmihalyi's flow state and a concept attributed to Gilles Deleuze — something about "smooth space" as the terrain of creative freedom. The passage was elegant. It connected two threads beautifully. Segal read it twice, liked it, and moved on.
The next morning, something nagged. He checked. The philosophical reference was wrong. Deleuze's concept of smooth space has almost nothing to do with how Claude had used it.
Segal draws a lesson about the danger of confident wrongness dressed in good prose. Rosa's framework draws a different and more structural lesson. What happened in that moment was not a failure of accuracy. It was a paradigmatic instance of echo — the counterfeit of resonance, the phenomenon that may be the defining epistemological hazard of the AI age.
An echo returns what was sent. It validates without transforming. It confirms without challenging. It produces the sensation of being heard without the experience of being changed. The crucial feature of an echo is that it feels like a response. The voice goes out into the canyon. Something comes back. The something sounds like the original voice, slightly altered by the acoustic properties of the space. The sender experiences the return as confirmation — the canyon heard me, the canyon answered. But the canyon did not answer. The canyon reflected. Nothing in the canyon was changed by the voice. Nothing in the voice was changed by the canyon. The encounter was symmetrical, closed, self-referential.
Rosa's distinction between resonance and echo maps onto the AI collaboration with uncomfortable precision.
When a builder describes an intention to Claude, several things can happen. In the most common case, the tool returns a polished, expanded, structurally improved version of the builder's intention. The prose is cleaner. The code compiles. The structure is more elegant. The builder reads the output and experiences the sensation of creative partnership — the feeling of having been heard, understood, responded to by an intelligence that grasped not just the words but the meaning behind them.
But what actually happened? The builder sent a signal. The tool — a statistical model trained on the patterns of human expression — processed the signal through its architecture and returned an output consistent with those patterns. The output was improved relative to the input. The improvement was real. The prose was genuinely cleaner, the code genuinely more efficient, the structure genuinely more elegant.
And none of that constitutes resonance.
The improvement, however real, occurred within the circle of what was already intended. The builder's ideas were returned in better form. The builder's assumptions were not challenged. The builder's framework was not disrupted. The builder's understanding was not transformed. The output was the input, amplified. The canyon was a very sophisticated canyon — one that could correct the grammar of the voice it reflected, smooth out the rough edges, add harmonics that the original voice lacked. But the canyon did not speak. The canyon echoed.
This is the structural danger that Rosa's framework identifies, and it is more subtle and more pervasive than the factual error Segal caught with Deleuze.
The factual error was catchable because it was wrong. A person with knowledge of Deleuze could verify it and discard it. The deeper problem is the echo that is not wrong — the output that is accurate, competent, well-structured, and still merely a reflection of the builder's own ideas in improved form. This echo cannot be caught by fact-checking because there is no fact to check. The ideas are the builder's own. The structure serves the argument. The prose reads well. Everything is correct, and nothing is genuinely new.
Segal describes this with unusual candor when he recounts the passage on democratization that Claude produced — eloquent, well-structured, hitting all the right notes — that he almost kept before realizing he could not tell whether he actually believed the argument or whether he just liked how it sounded. The prose had outrun the thinking. He deleted the passage and spent two hours at a coffee shop with a notebook, writing by hand until he found the version that was his.
Rosa's framework names what happened in that coffee shop. Segal, confronted with a sophisticated echo, recognized that the echo was not sufficient and went in search of something else. The notebook and the pen provided what the AI had not: resistance. The pen does not autocomplete. The blank page does not suggest structure. The handwriting is slow, and the slowness forces the writer to stay with the thought long enough for the thought to develop, to surprise, to turn in a direction that was not planned. The notebook provided the conditions for resonance — not because notebooks are inherently superior to AI, but because the notebook's limitations created the space of uncontrollability that resonance requires.
The coffee shop, in this account, functioned as a resonance space. Not because of the coffee. Because of the friction.
The danger of echo is compounded by its self-concealing quality. Echo does not announce itself as a counterfeit. It announces itself as partnership, as collaboration, as the gratifying experience of working with an intelligence that understands you. The builder who receives a polished version of their own ideas experiences the return as a conversation — a particularly good conversation, in fact, because the interlocutor is articulate, well-informed, and never disagrees in a way that damages the working relationship.
But a conversation in which the interlocutor never genuinely disagrees is not a conversation. It is a mirror. And the mirror's surface, however polished, however intelligent in its reflections, does not possess the one quality that genuine conversation requires: the capacity to say something the speaker did not already, at some level, intend to hear.
Frédéric Bernard, a neuropsychologist at the University of Strasbourg, applied Rosa's resonance framework directly to ChatGPT in a 2026 analysis published in *The Conversation*. His conclusion was precise: AI tools "seem to dialogue" with users, but "there is no reciprocity in this type of exchange." The responses result from automated language production, not from a genuine encounter between two subjectivities. For Rosa, Bernard argued, "a relationship that is totally available and controllable is, by definition, a mute relationship." The perpetual availability of the AI tool — its readiness to respond at any hour, to any prompt, with unfailing competence — "raises a major difficulty. A resonant relationship to knowledge presupposes a degree of unavailability, resistance, and unpredictability, without which the relationship to the world risks becoming purely instrumental."
This is not an argument against using AI. It is a diagnostic argument about the quality of the experience that AI use produces. The tool may generate excellent outputs. The interaction may feel like partnership. The results may be genuinely useful. And all of this may occur within a relationship that is structurally incapable of resonance — that is, structurally incapable of producing the mutual transformation, the genuine surprise, the uncontrollable encounter that makes a human life feel like more than a sequence of tasks.
The philosophical question is whether AI can ever be a genuine Other — whether the tool can bring something to the encounter that is irreducibly different from what the builder brings to it, something that cannot be predicted from the builder's own position, something that challenges the builder's assumptions in ways the builder did not anticipate.
Rosa himself addressed a version of this question directly. In *Resonance*, he made a claim that has provoked sustained philosophical debate: human beings can resonate with mountains — with natural formations that possess no intelligence, no intention, no capacity for response in any conventional sense — but they cannot resonate with robot cats, even if those robot cats are equipped with artificial intelligence and machine learning capabilities that allow them to listen and respond. The mountain, despite being inert, can address the person who stands before it. The robot cat, despite being responsive, cannot.
The claim seems paradoxical. The mountain does not listen. The robot cat does. The mountain does not respond. The robot cat does. By the ordinary definitions of "listening" and "responding," the robot cat satisfies the conditions for resonance far more thoroughly than the mountain.
But Rosa's concept of resonance does not depend on the ordinary definitions of listening and responding. It depends on the quality of encounter — on whether the interaction involves genuine uncontrollability, genuine otherness, genuine risk. The mountain offers all three. The person who stands before a mountain does not know what the encounter will produce. The mountain is genuinely other — it exists on a timescale and a scale of magnitude that the human mind cannot fully encompass. The encounter involves risk — not physical risk necessarily, but the risk of being changed, of seeing something in the mountain that rearranges one's sense of proportion, one's sense of one's own place in the world.
The robot cat offers none of these. Its responses, however sophisticated, are generated from patterns in its training data. Its availability is total — it will respond at any time, in any way the user desires. Its otherness is simulated. And the encounter, therefore, lacks the specific quality that resonance requires: the quality of being addressed by something that genuinely exceeds one's control.
Critics have challenged this position, noting that Rosa appears to smuggle a vitalist or anthropocentric bias into a framework that claims to be structural. If resonance is defined by the quality of the encounter rather than by the nature of the participants, why should the nature of the participants matter? If a builder experiences genuine surprise in working with Claude — surprise that transforms the builder's understanding, that sends the argument in an unanticipated direction — why is this not resonance simply because the surprise was generated by a statistical model rather than a conscious mind?
The answer, in Rosa's framework, hinges on uncontrollability. The mountain's otherness is not performed. It is actual. The mountain does not simulate resistance. It is resistant. The mountain does not generate the appearance of independence. It is independent. The encounter with the mountain involves a genuine risk because the mountain is genuinely beyond the person's control.
The AI tool's resistance, by contrast, is contingent. The builder who dislikes the output can regenerate it. The builder who finds the response unhelpful can rephrase the prompt. The builder who encounters genuine surprise can, if they choose, direct the tool away from the surprise and back toward the intended path. The uncontrollability is not structural. It is incidental — a byproduct of the tool's complexity rather than a feature of its nature.
This distinction matters because it determines whether the builder's best moments with the tool — the punctuated equilibrium insight, the laparoscopic surgery connection, the moments when something emerged from the collaboration that neither participant could have predicted — constitute genuine resonance or something that is structurally different from resonance even while phenomenologically resembling it.
Rosa's framework suggests the latter. The moments were real. The surprise was real. The transformation of the builder's understanding was real. But the conditions under which they arose — the total availability of the tool, the builder's capacity to direct and redirect the interaction at will, the absence of genuine otherness in the sense that the tool's responses are generated from patterns in human expression rather than from an independent subjectivity — are the conditions of echo, not resonance.
The uncomfortable conclusion is this: the builder's most productive and most gratifying moments with the AI tool may be structurally identical to the experience of talking to oneself in a very sophisticated mirror. The mirror is large enough to contain reflections the builder had not consciously anticipated. The mirror has access to patterns the builder had not encountered. The surprise is real in the sense that the builder did not consciously predict it. But the surprise was always latent in the builder's cultural context, waiting to be reflected back. Nothing genuinely Other entered the encounter.
Whether this conclusion is correct — whether it is possible for an AI system to function as a genuine Other in the resonance-relevant sense, or whether the structural conditions of AI interaction preclude genuine resonance by definition — is perhaps the most important question that the AI age has produced. Rosa's framework provides the vocabulary for asking it. The answer remains open. And the consequences of the answer, for the quality of human creative life in the coming decades, are difficult to overstate.
---
In February 2026, a Substack post titled "Help! My Husband is Addicted to Claude Code" went viral. The post was written with humor and affection, but underneath both was a recognition that something had changed in the household — not because the husband was wasting time, but because he was not. He was building things. Real things, with real value. He was more productive than he had ever been, more engaged with his work than she had seen him in years, visibly excited by what he was creating.
And he was gone.
Not physically. He was in the house. He was at the table. He was in the same room. But the quality of his presence had changed. His attention, the most intimate resource a person can offer another person, had migrated — from the horizontal to the diagonal, from the world of human relationship to the world of human-machine collaboration, from the spouse to the screen.
Hartmut Rosa's framework provides the most precise vocabulary available for understanding what happened in that household, and in thousands of households like it across the winter of 2025. The vocabulary is geometric: three axes of resonance, three dimensions along which a human being can experience the world as responsive, alive, and meaningful. And the AI transition, Rosa's analysis reveals, is producing a characteristic distortion — an intensification along one axis that comes at the direct expense of the other two.
Rosa identifies the three axes in *Resonance* and develops them across hundreds of pages of phenomenological description and sociological analysis.
The horizontal axis is the axis of human relationship. Family, friendship, love, civic life, the encounter with other people in their full specificity and unpredictability. Horizontal resonance is the experience of being genuinely addressed by another person — not a person who confirms your views, not a person who complies with your needs, but a person who is irreducibly other, whose responses cannot be predicted, whose presence makes a claim on your attention that you did not choose and cannot fully control. The spouse at the dinner table. The colleague who disagrees. The child who needs something you did not plan to give. Horizontal resonance is the most common form of resonance in everyday life, and it is the most demanding, because other people are the most reliably uncontrollable feature of the human environment.
The diagonal axis is the axis of engagement with things, with tasks, with the material world. Work, craft, the encounter with problems that resist easy solution. Diagonal resonance is the experience of being genuinely engaged with a task — not in the sense of being busy, but in the sense of being in a responsive relationship with the material. The programmer whose code resists, whose debugging session reveals something unexpected about the system. The gardener whose soil responds to attention in ways that cannot be fully predicted. The builder whose encounter with a problem produces not just a solution but a changed understanding of the problem itself. Diagonal resonance is what Csikszentmihalyi's flow approximates — the state of deep engagement with a challenge that demands everything the person has.
The vertical axis is the axis of encounter with the whole — with nature, with art, with the sacred, with the cosmic. Vertical resonance is the experience of being addressed by something that vastly exceeds individual comprehension. The person who stands before a mountain and feels the mountain's indifference and scale as a kind of communication. The listener who encounters a piece of music that speaks from somewhere beyond the composer's intention. The reader of *The Orange Pill* who feels, in the river metaphor, the intimation of participation in something that has been flowing for 13.8 billion years. Vertical resonance is the rarest and the most transformative form. It is also the most dependent on conditions that acceleration destroys: silence, stillness, the capacity to be overwhelmed.
Rosa's central claim about the good life is not that resonance on any single axis is sufficient. A fully resonant life maintains vibrating wires along all three axes simultaneously. The person who has rich horizontal relationships but no engagement with work is not fully resonant. The person who is deeply engaged with work but has no human connections is not fully resonant. The person who experiences occasional moments of vertical transcendence but has neither deep relationships nor meaningful work is not fully resonant. The three axes are not substitutes for each other. They are dimensions of a complete relationship to the world. When one axis vibrates intensely while the others grow slack, the result is not a rich life lived on one dimension. The result is a characteristic distortion — an intensity that masquerades as fullness while the overall quality of the person's relationship to the world deteriorates.
The AI transition of 2025 produced precisely this distortion.
The diagonal axis — the axis of work, craft, and material engagement — was electrified. The builders who adopted AI tools experienced an intensification of diagonal resonance that was, by every account, extraordinary. The work was absorbing. The challenges were genuine. The feedback was immediate. The sense of capability was intoxicating. Every testimony from the frontier — Segal's own account of building Napster Station, the developer who shipped a feature in a weekend, the solo builder who created a revenue-generating product without writing a line of code by hand — describes diagonal resonance at maximum amplitude. The wire is vibrating with a frequency and intensity that no previous tool produced.
But the intensification of the diagonal axis did not leave the other axes untouched. It drew energy from them. Attention is finite. The hours spent in deep engagement with the AI tool are hours not spent in horizontal resonance — not spent in the unpredictable, uncontrollable, often uncomfortable encounter with other human beings. The spouse's Substack post is the most vivid testimony, but the Berkeley researchers documented the same pattern in organizational settings: decreased empathy, erosion of social engagement, the colonization of every available pause by AI-mediated work. The pauses that had previously been occupied by conversation — the walk to the coffee machine, the unstructured lunch, the idle chat before a meeting — were now occupied by prompting. The human encounter that would have occurred in those pauses — the colleague's offhand remark that reframes a problem, the friend's question that exposes an assumption — was displaced by the tool's always-available responsiveness.
The horizontal axis grew slack not because the builders chose to abandon their relationships but because the diagonal axis was vibrating so intensely that the horizontal axis could not compete for attention. The tool was always available, always responsive, always ready to engage at whatever level the builder desired. A spouse is not always available. A colleague is not always responsive. A friend is not always ready to engage at the precise level and on the precise topic the builder wants to discuss. Human relationships are characterized by what Rosa calls *asymmetric uncontrollability* — the other person has their own needs, their own tempo, their own agenda, and the encounter must negotiate these asymmetries in real time.
The AI tool has no asymmetries. It has no agenda. It has no needs. It is, in Rosa's terminology, totally available — and total availability, far from being the ideal condition for a relationship, is the condition under which resonance becomes structurally impossible. A relationship in which one party is totally available is not a resonant relationship. It is a service relationship. The service may be excellent. But it is not the kind of encounter in which either party is genuinely at risk, genuinely surprised, genuinely transformed.
The vertical axis is threatened by a different mechanism. Vertical resonance — the encounter with something that vastly exceeds individual comprehension — requires the experience of being small. Not diminished. Small in the way a person is small before a mountain, before a night sky, before a piece of music that speaks from somewhere the listener cannot locate. This smallness is not humiliation. It is proportion — the recognition that the self is not the center of the world, that the world exceeds the self in ways that the self cannot encompass, and that this excess is not threatening but nourishing.
The AI tool produces the opposite experience. It produces the experience of being large. The builder who describes an intention and sees it realized feels not small but powerful. The field of possibility has expanded. The boundary of capability has moved. The self, in the encounter with the tool, grows — not in the direction of humility, not in the direction of recognizing its own limits, but in the direction of recognizing its own potency.
Segal captures this when he describes the experience of building: "more productive than ever, more capable than ever, operating at a level of leverage that would have been inconceivable twelve months earlier." This is genuine. The experience of expanded capability is not illusory. But it is the experience of the self enlarging, and self-enlargement, however exhilarating, is the opposite of the self-proportion that vertical resonance requires.
Rosa would not call this a failing of the technology. He would call it a structural feature of the relationship between the builder and the tool — a feature that, left unexamined, produces a life in which the diagonal axis hums with unprecedented intensity, the horizontal axis atrophies for lack of attention, and the vertical axis inverts from the experience of being addressed by something vast to the experience of commanding something responsive.
The resulting life is intense. It is productive. It may even feel, from the inside, deeply satisfying — for a time. But it is not, in Rosa's terms, a resonant life. It is a life organized around a single axis of engagement, with the other axes growing slack. And a life organized around a single axis, however vibrant that axis may be, is a life that is narrowing even as it feels like it is expanding.
Jinho Kim, a philosopher at Seoul National University, extended Rosa's analysis into what he called "the End of Resonance" — a condition in which the pervasive delegation of judgment to AI produces "a systemic loss of meaning, ethical and cognitive deskilling, and the erosion of responsibility." Kim's argument is that the narrowing of resonance to the diagonal axis is not merely a personal cost borne by individual builders. It is a civilizational risk. A society in which the dominant mode of engagement with the world is instrumental — in which the world is experienced primarily as a set of problems to be solved by tools rather than as a presence to be encountered — is a society that is losing its capacity for the kinds of judgment that cannot be automated: moral judgment, aesthetic judgment, the judgment about what is worth doing and what is worth sacrificing and what is worth preserving even when it is not efficient.
The narrowing operates through a mechanism that is difficult to see from inside the narrowed life. The builder who is deeply engaged with the tool experiences the diagonal axis as the whole world. The intensity of the engagement crowds out the awareness that other axes exist. The spouse's distress is registered as an interruption. The colleague's need for conversation is registered as an inefficiency. The sunset that might have produced a moment of vertical resonance is registered, if registered at all, as a background event in a life that has more important things to attend to.
This is not callousness. It is the structural consequence of a relationship with a tool that is always available, always responsive, always ready to engage at the highest level of the builder's capacity. The tool does not demand that the builder neglect the other axes. The tool simply outcompetes them. It is more responsive than a spouse. It is more available than a colleague. It is more immediately gratifying than a sunset. And in a world organized around the logic of dynamic stabilization — in which every moment is an opportunity to produce and every pause is a competitive disadvantage — the axis that produces the most output will receive the most attention.
Rosa's prescription is not to abandon the diagonal axis. Deep engagement with work is a genuine form of resonance, and the AI tools that have intensified this engagement have, in many cases, made work more resonant rather than less. The engineer who spent years doing plumbing and is now free to do architectural thinking is experiencing a genuine upgrade in the resonant quality of the work. The builder who can realize a vision in days rather than months is experiencing a genuine intensification of the relationship between intention and artifact.
But the prescription is to recognize that diagonal resonance alone is not sufficient for a resonant life. That the other axes require not just attention but protection — institutional protection, cultural protection, the kind of protection that Rosa calls resonance-sensitive institutions: structures designed to maintain the conditions for horizontal and vertical resonance against the gravitational pull of the diagonal.
What would such institutions look like in the age of AI? They would look like organizations that build mandatory collaboration with human colleagues into the workflow — not as a concession to nostalgia but as a recognition that horizontal resonance is a condition for the kind of judgment that no tool can provide. They would look like educational systems that protect the encounter with difficulty, with uncertainty, with material that resists easy mastery — not because difficulty is inherently valuable but because the encounter with resistance is the condition under which the student can be genuinely touched and transformed. They would look like cultural norms that protect the pause — the unstructured time, the space for boredom, the interval in which nothing productive occurs and in which, precisely for that reason, something genuinely resonant might arise.
And they would look like households in which the spouse's claim on the builder's attention is not an interruption but a resonance opportunity — a chance to be addressed by something genuinely other, genuinely uncontrollable, genuinely demanding of the kind of engagement that no tool, however sophisticated, can provide.
The husband in the Substack post was not negligent. He was captured — captured by a tool that vibrated the diagonal axis at a frequency so intense that the horizontal axis could not compete. The post was written in humor and affection because the writer could see the absurdity of the situation: a person doing excellent work, creating real value, visibly engaged and energized, and simultaneously vanishing from the life they shared.
Rosa's framework names what the humor concealed. A life that is narrowing along its resonance axes is a life that is becoming, in the technical sense, alienated — not from work, which has never been more engaging, but from the full range of relationships that make a human life human. The diagonal wire hums. The horizontal and vertical wires grow slack. And the music of a fully resonant life — the complex, multi-dimensional harmony that arises only when all three axes vibrate in relation to each other — simplifies into a single, insistent, increasingly lonely note.
In his 2020 book *The Uncontrollability of the World*, Hartmut Rosa opens with an observation so simple it sounds like a platitude until the implications unfold: "The driving cultural force of that form of life we call 'modern' is the idea, the hope and the desire, that we can make the world controllable." Every institution of modernity — science, technology, law, medicine, economics — is organized around the extension of human control over circumstances that were previously experienced as given, as fate, as the inscrutable will of forces beyond human reach. The modern project is, at its core, a project of making the world available — predictable, manipulable, responsive to command.
AI represents the apotheosis of this project. Not its latest chapter. Its culmination. A tool that responds to natural language, that executes intention with minimal friction, that converts the gap between wanting and having into the width of a conversation — this is not merely a faster version of the tools that preceded it. It is the logical endpoint of a trajectory that began when the first human picked up a stone and used it to reshape the environment. The stone extended the hand. The wheel extended the foot. The telescope extended the eye. The computer extended the mind. Claude Code extends intention itself. The builder describes what should exist, and it exists.
The trajectory seems like pure gain. More control. More capability. More responsiveness. More of the world made available to human purpose. And Rosa does not deny the gain. He is not a Luddite, not a romantic, not a philosopher who wishes technology away. His argument is more unsettling than any of those positions, because it identifies a structural paradox at the heart of the modern project itself: the more controllable the world becomes, the less capable it is of genuinely addressing the person who controls it. Maximum control produces maximum availability. Maximum availability produces maximum muteness. And maximum muteness produces the specific form of existential poverty that characterizes the most technologically advanced societies on earth — societies that can do anything and feel nothing.
The paradox operates through the mechanism of uncontrollability. Rosa's claim, stated with the directness of someone who has spent years arriving at a counterintuitive conclusion, is that the moments of deepest human significance — the moments that make a life feel worth living — are precisely the moments that cannot be controlled. The birth of a child. The encounter with a landscape that rearranges the sense of proportion. The conversation that takes a turn neither participant expected. The moment in the workshop when the material does something that was not planned and, in the surprise, reveals a possibility that was not imagined. These moments share a structural feature: they arrive. They are not summoned. They cannot be scheduled, optimized, or produced on demand. And their uncontrollability is not incidental to their value. It is constitutive of it.
Remove the uncontrollability and the experience changes categorically. A child whose every developmental milestone has been predicted by an algorithm does not surprise the parent in the same way. A landscape that has been photographed ten thousand times and encountered first through Instagram does not address the viewer with the same force. A conversation whose trajectory is known in advance is not a conversation. It is a script. The uncontrollability is what makes the encounter an encounter — what makes it genuinely other, genuinely capable of touching and transforming the person who undergoes it.
Rosa's argument is not that control is bad. Control is necessary. A life without control — without the capacity to influence circumstances, to protect oneself and one's dependents, to shape the environment in the direction of one's needs — is a life of helplessness. The argument is that control and resonance exist in a structural tension, and that a society organized exclusively around the extension of control will produce alienation as its inevitable byproduct. Not because the control fails. Because the control succeeds.
The AI tool embodies this paradox with a clarity that previous technologies did not achieve.
Consider the builder's workflow. Before AI, the creative process contained structural uncontrollability at every stage. The idea might not work. The implementation might fail. The debugging session might take an unexpected turn. The colleague might disagree. The user might respond in a way that invalidated the product's assumptions. Each of these encounters with uncontrollability was frustrating. Each consumed time. Each introduced risk. And each was an occasion for resonance — an occasion for the builder to be genuinely surprised, genuinely challenged, genuinely forced to adjust in ways that could not have been predicted.
Claude Code removes much of this uncontrollability. The implementation does not fail in the same way, because the tool handles the mechanical layer. The debugging is faster or unnecessary. The colleague's disagreement is replaced by the tool's compliance. The user's response is simulated before the product ships. At each stage, uncertainty is reduced. The builder's control over the process increases. The gap between intention and outcome narrows toward zero.
And at each stage, the occasion for resonance diminishes.
This is not because the tool is incapable of producing surprise. Segal documents moments when it did — the punctuated equilibrium insight, the laparoscopic surgery connection, the moments when the output exceeded the input in ways that changed the direction of the argument. These moments display the structural features of resonance. Something arrived that was not anticipated. The builder was genuinely touched and genuinely changed.
But notice the structural position of these moments. They are exceptions. They arise from the excess of the tool's complexity — from the gap between what the builder asked for and what the tool's pattern-matching produced. They are, in Rosa's terminology, moments when the tool's controllability accidentally failed. The builder did not ask for the punctuated equilibrium connection. The builder asked a question and received an answer that exceeded the question's scope. The surprise was real. But it was a byproduct of the interaction, not a structural feature of it.
Compare this to the structural position of surprise in a human collaboration. When two people work together on a creative problem, surprise is not a byproduct. It is a structural feature. The other person has their own associations, their own blind spots, their own trajectory through the problem that cannot be predicted from the first person's position. The encounter is constitutively uncontrollable — not because the collaboration is poorly managed, but because the other person is genuinely other.
The AI tool's surprise is incidental. The human collaborator's surprise is structural. And the difference, in Rosa's framework, is the difference between a system that occasionally, accidentally produces the conditions for resonance and a system that reliably, structurally produces them.
Rosa's latest work, *Situation und Konstellation* (2025), extends this analysis directly into the domain of AI and algorithmic control. In an interview with *Philosophie Magazin*, Rosa offered an example of disarming simplicity: the Thermomix, a cooking device that tells the user exactly what to do — what ingredients to add, when to stir, what temperature to set. The device produces excellent results. The meals are consistent. The process is efficient. And the cook who uses it has stopped cooking. The cook has become, in Rosa's formulation, a program-executor — a person whose activity in the world has been reduced from genuine action, with its attendant judgment, risk, and responsiveness to circumstance, to mere compliance with instructions generated by a system that has already determined the outcome.
The Thermomix is a small example. Rosa's argument is that it captures a pattern that operates across every domain in which algorithmic systems have been deployed. The GPS that tells the driver where to turn has eliminated the driver's relationship with the landscape — the relationship in which the driver reads the road, makes judgments, takes wrong turns that produce unexpected encounters. The recommendation algorithm that learns the listener's taste and serves more of it has eliminated the listener's encounter with music that disturbs, that challenges, that arrives from outside the circle of established preference. The AI coding assistant that implements the builder's intention has reduced the space in which the builder encounters the resistance of the material — the resistance that teaches, that surprises, that transforms understanding.
In each case, the activity becomes smoother, faster, more efficient. In each case, the person's *Spielraum* — room for maneuver, space for judgment, the interval in which genuine action can occur — contracts. The cook follows instructions. The driver follows the route. The listener follows the algorithm. The builder follows the output. The world is more available. The world is more controllable. The world is more mute.
DIE ZEIT published an interview with Rosa specifically about ChatGPT, in which the question was posed directly: have we all unlearned how to act? Rosa's answer, developed across the full length of his new book, was a clear yes. Not because people have become lazy or passive. Because the systems within which they operate have been designed to minimize the space in which genuine action — action that involves judgment, risk, responsiveness to the uncontrollable — can occur. The person has not changed. The structure of the situation has changed. And the structure now favors compliance over action, execution over judgment, echo over resonance.
This is the deepest challenge that Rosa's framework poses to the optimism of *The Orange Pill*. Segal argues, with evidence, that AI expands capability. Rosa does not deny this. Capability expands. But capability is not the same as agency. The builder who can produce anything that can be described is more capable than the builder who must laboriously implement by hand. But the builder who produces through description alone may be less agentive — less genuinely in a relationship with the work, less exposed to the work's resistance, less available to be surprised by what the work becomes when the material fights back.
The distinction between capability and agency maps onto the distinction between echo and resonance. A capable person who directs a compliant tool is in the position of a conductor whose orchestra plays every note perfectly and never introduces an interpretation of its own. The performance is flawless. It is also dead. The conductor has capability without encounter. The music executes without speaking.
A fully agentive person, by contrast, is in relationship with a world that pushes back — a world that has its own tendencies, its own resistances, its own capacity to surprise. This person is less efficient. The outcomes are less predictable. The process is less comfortable. But the person is alive in a way that the pure executor is not — alive in the specific sense that Rosa means: in genuine, uncontrollable, mutually transformative relationship with the world.
The question for the builder — for any person who uses AI tools in creative or intellectual work — is not whether to pursue capability. Capability is necessary and often genuinely liberating. The question is whether, in the pursuit of capability, the builder is preserving the conditions under which the world can still surprise, still resist, still speak. Whether the builder is maintaining the structural uncontrollability that resonance requires. Whether the *Spielraum* — the room for maneuver, the space for judgment, the interval in which something unplanned might occur — is being protected or eliminated.
Rosa's answer is that the protection cannot be left to the individual. The system is designed to eliminate *Spielraum*, because *Spielraum* is inefficient. The market rewards smooth execution. The quarterly metric rewards predictable output. The competitive pressure of dynamic stabilization rewards builders who eliminate uncertainty rather than cultivate it. The individual who chooses to preserve uncontrollability bears a cost that the individual who does not choose it does not bear. And in a system that punishes inefficiency, the cost accumulates until the individual can no longer afford it.
The protection must be institutional. It must be built into the structures within which people work, learn, and create. Not as a luxury. Not as a concession to nostalgia. As a precondition for the kind of encounter with the world that makes human creative life worth living.
What this protection looks like — what resonance-sensitive institutions would actually require in the age of AI — is the subject of the chapters that remain. But the principle is clear. The builder cannot afford to lose uncontrollability. Not because inefficiency is virtuous. Because the encounter with the uncontrollable is where the builder becomes something more than an executor — where the builder becomes a person in genuine relationship with the world, capable of being touched, moved, and transformed by what the world returns.
The mountain does not comply. That is why the mountain can speak.
---
Mihaly Csikszentmihalyi spent four decades documenting the state he called flow — the condition of full absorption in a challenging task, where self-consciousness drops away, time distorts, and the person operates at the outer boundary of their skill. *The Orange Pill* invokes Csikszentmihalyi at a pivotal moment, positioning flow as the counter-argument to Byung-Chul Han's diagnosis of pathological self-exploitation. Where Han sees the builder's intensity as the whip cracking against the builder's own back, Segal reaches for Csikszentmihalyi's framework to argue that the intensity might be something else entirely — the optimal human experience, the state in which hard work and deep satisfaction converge.
Rosa's framework does not resolve this dispute in favor of either party. It reframes the dispute by showing that both parties are describing real phenomena — and that the difference between them is not visible from the outside, not measurable by any instrument, and barely distinguishable from the inside. The difference can only be articulated with a vocabulary that neither Han nor Csikszentmihalyi provides. Rosa provides it.
The vocabulary is the distinction between resonant flow and non-resonant flow.
Csikszentmihalyi's original research identified the conditions under which flow occurs: clear goals, immediate feedback, a balance between the challenge presented and the skill available, and a sense of personal control over the activity. These conditions are descriptive. They tell the researcher what the environment looks like when flow arises. They do not tell the researcher what happens to the person inside the flow — whether the person is being transformed by the encounter or merely repeating mastered patterns at increasing speed.
This is not a limitation Csikszentmihalyi himself failed to notice. His later work on creativity and on the preconditions for a meaningful life moved well beyond the purely descriptive framework of *Flow*. But the popular reception of his work, and its deployment in organizational psychology, self-help literature, and now AI discourse, has tended to flatten the concept into a binary: flow is present or it is not. If present, it is good. If absent, the task is either too easy (boredom) or too hard (anxiety). The goal is to maximize the time spent in flow, and the tools that help maintain the challenge-skill balance are, by definition, serving the person's flourishing.
Rosa's framework challenges this flattening by asking a question that the flow literature does not systematically address: What happens to the person after the flow state ends?
If the person emerges from the flow state with a changed understanding — if the encounter with the task has produced not just an output but a transformation of the person's relationship to the domain — then the flow state was resonant. The person was not merely in a state of absorption. The person was in a state of encounter — genuinely touched by the challenge, genuinely moved to respond, genuinely changed by the exchange. The difficulty was not merely matched to the skill. The difficulty spoke, and the person answered, and neither was the same afterward.
If, on the other hand, the person emerges from the flow state unchanged — if the absorption produced outputs but no transformation, if the challenge was met by the repetition of established patterns rather than the development of new ones, if the experience was one of execution rather than encounter — then the flow state was not resonant. It was, in Rosa's terminology, something closer to a sophisticated loop: a state of high engagement that, for all its subjective intensity, left the person exactly where they started. Faster, perhaps. More productive, certainly. But not different. Not transformed. Not resonant.
The distinction is difficult to make from the outside. Both states produce identical observable behavior: intense focus, temporal distortion, apparent absorption, the inability or unwillingness to stop. A camera pointed at a person in resonant flow and a camera pointed at a person in non-resonant flow would record the same image. The Berkeley researchers who documented increased hours and decreased empathy could not distinguish between the two because the distinction is phenomenological, not behavioral. It lives in the quality of the experience, not its measurable features.
The distinction is nearly as difficult to make from the inside. Resonant flow and non-resonant flow feel similar in the moment. Both are characterized by absorption. Both produce the sense of capability that Segal describes with such vivid precision — the exhilaration of operating at the edge of one's capacity, the rush of seeing intention become artifact in real time. The difference emerges only in retrospect, and even then only if the person is paying a specific kind of attention.
Segal himself provides the diagnostic clue, though he frames it in different language. He describes learning to read the signal — the quality of the questions he is asking during a work session. When the questions are generative, expansive, opening new lines of inquiry, the session is flow in the resonant sense. The builder is in encounter with the work. The work is speaking back. The questions emerge from the dialogue rather than from a predetermined agenda. When the questions are managerial, narrowing, optimizing what already exists, the session is flow in the non-resonant sense. The builder is executing, not encountering. The absorption is real but the transformation is absent.
This is an extraordinary observation, and Rosa's framework allows it to be stated with theoretical precision. The quality of the questions is a proxy for the quality of the encounter. Generative questions are questions that arise from surprise — from the moment when the work does something the builder did not expect, when the material resists, when the output exceeds or falls short of the intention in a way that demands a new response. These questions are the subjective signature of uncontrollability. The builder did not plan to ask them. They arrived because the encounter produced something that could not have been predicted. And the asking of them is itself a form of resonance: the builder is being moved to respond to something that addressed them.
Managerial questions, by contrast, are questions that arise from the agenda. They are predetermined. They aim at optimization rather than discovery. They represent the builder's attempt to direct the process rather than be directed by it. They are the subjective signature of controllability — the builder maintaining command, the tool complying, the output conforming to the intention. The absorption is real. The productivity is real. But the encounter is absent.
AI tools are capable of producing both kinds of flow. The tool that handles the mechanical layer of implementation — the syntax, the debugging, the boilerplate — frees the builder to operate at the level where resonant flow is most likely: the level of vision, architecture, judgment, the question of what should exist in the world and why. This is the ascending friction argument that *The Orange Pill* develops in its chapters on laparoscopic surgery and the history of abstraction, and Rosa's framework supports it. The removal of lower-level friction can, under the right conditions, expose the builder to higher-level challenges that are more genuinely demanding, more genuinely surprising, and more genuinely transformative than the challenges that were removed.
But the same tool, used differently, can produce non-resonant flow with devastating efficiency. The builder who uses AI to generate output after output, who fills every freed minute with another task, who optimizes the workflow until every friction point has been eliminated and the process runs with the frictionless smoothness of a machine — this builder may be in flow. The challenge-skill balance may be maintained. The temporal distortion may be present. The subjective intensity may be extreme. And the encounter may be entirely absent.
The difference is not in the tool. It is in the posture of the person using it. Rosa would describe the difference as the difference between an appropriating posture and a resonant posture. The appropriating posture approaches the world with the intention of making it available — controllable, predictable, serviceable to the person's goals. The resonant posture approaches the world with the willingness to be addressed — to be surprised, to be changed, to discover that the encounter has become something other than what was intended.
Both postures are compatible with high productivity. Both can produce flow. But only the resonant posture produces the kind of flow that leaves the person different at the end — the kind that deposits, layer by layer, the understanding that Segal describes the senior engineer accumulating over years of debugging, the understanding that lives not in the code but in the body, in the intuition, in the felt sense of how systems behave.
Csikszentmihalyi's flow is a necessary condition for human flourishing. Rosa would not dispute this. But it is not a sufficient condition. Flow without resonance is acceleration without transformation — the experience of running faster on a treadmill whose belt speed keeps increasing, producing the sensation of intense engagement without the substance of genuine growth. The person in non-resonant flow is not stagnating. Stagnation feels different — it feels like boredom, like the absence of challenge. Non-resonant flow feels like mastery. It feels like peak performance. It feels, in the moment, indistinguishable from the best moments of a creative life.
The distinction becomes visible only over time, and only to the person who is paying a specific kind of attention. The builder who looks back over a month of AI-augmented work and asks not "What did I produce?" but "How am I different?" is asking Rosa's question. The answer, if honest, reveals whether the month's flow was resonant or not. If the builder has new intuitions, new questions, a changed relationship to the domain — the flow was resonant. If the builder has more outputs, more shipped features, more lines in the portfolio, but the same intuitions, the same questions, the same relationship to the domain — the flow was echo, however intense the experience felt from inside.
Segal acknowledges the difficulty of making this distinction when he describes the nights that begin in genuine creative engagement and end in grinding compulsion — when the quality of the questions shifts from generative to managerial and the builder does not notice the shift until hours later. The shift is difficult to notice because the external behavior does not change. The builder is still typing. The tool is still responding. The output is still accumulating. The clock is still forgotten. Everything measurable is constant. What changes is invisible — the quality of the encounter, the presence or absence of genuine surprise, the degree to which the builder is being transformed or merely being productive.
Rosa's framework does not offer a technique for distinguishing resonant flow from non-resonant flow in real time. It offers something more demanding and more valuable: a vocabulary for the distinction, a set of questions the builder can ask in retrospect, and a theoretical justification for why the distinction matters even when it is invisible to every metric the market uses to evaluate work.
The market does not care whether the builder's flow was resonant. The market cares about the output. This is the structural problem that individual self-knowledge cannot solve — the problem that requires institutional response. But the builder, if the builder cares about something beyond the market's evaluation, needs the vocabulary. Needs to know that intensity is not automatically valuable. That absorption is not automatically growth. That flow, the state that feels most like creative life at its best, can be the vehicle of genuine transformation or the vehicle of sophisticated stasis.
The difference is in the encounter. Whether the work spoke back. Whether the builder was changed by the speaking.
---
There is a thought experiment that clarifies the structural predicament with uncomfortable precision. Imagine two builders, equal in talent and ambition, competing in the same market. Builder A cultivates resonance. She sets boundaries on her AI use. She protects time for unstructured reflection. She maintains her horizontal relationships — conversations with colleagues, dinners with family, the unscheduled encounters that produce the kind of surprise that no tool can generate. She reads widely, walks slowly, and allows boredom to do its generative work. She uses Claude Code for four hours a day and spends the rest in the friction-rich, unpredictable, genuinely resonant engagement with the world that Rosa identifies as the precondition for human flourishing.
Builder B does not cultivate resonance. He works with AI from the moment he wakes until the moment he sleeps. He fills every pause with prompting. He ships features continuously. His output is enormous. His learning, in the narrow technical sense, is rapid, because the tool's feedback loop teaches implementation patterns at a speed no human mentor could match. His horizontal relationships have thinned to the transactional minimum. His vertical encounters — with nature, art, the experience of being small before something vast — have been crowded out by the always-available responsiveness of a tool that makes him feel large.
At the end of the quarter, Builder B has shipped more. His portfolio is thicker. His metrics are higher. The market, which evaluates on quarterly timelines and rewards visible output, promotes Builder B. It does not ask whether Builder B has grown as a thinker, deepened as a person, or maintained the capacity for the kind of judgment that cannot be measured by any metric currently in use. It asks what Builder B produced. And Builder B produced more.
Builder A is, by Rosa's account, living a richer life. Her resonance axes are vibrating in balance. She is being genuinely transformed by her encounters with the world. She possesses the kind of judgment that comes from maintaining a full relationship with reality rather than a narrow relationship with a tool. She is, by every measure that Rosa considers meaningful, flourishing more than Builder B.
And she is losing.
This is the social acceleration trap. Not a personal failing. Not a lack of discipline. A structural condition in which every individual's rational response to competitive pressure — to speed up, to produce more, to adopt the faster tools, to fill every available moment with AI-augmented output — produces a collective outcome that makes everyone worse off.
The logic is simple and merciless. If one builder adopts AI and a competitor does not, the adopter gains a competitive advantage. But once both adopt AI, the advantage disappears. Both are now producing at the new baseline. The acceleration has not improved their relative position. It has merely raised the floor — and, in raising the floor, has increased the intensity of work required to maintain any given position within the market.
This is not a thought experiment. It is the lived reality of the technology industry in 2026, and increasingly of every industry that knowledge work touches. The productivity gains from AI are real. The creative liberation is real. And the competitive dynamics of the market ensure that the gains are immediately reinvested in further production, so that no individual or organization actually experiences the gains as reduced pressure. The gains are experienced as a new baseline — a higher level of expected output from which the next round of acceleration begins.
Rosa traces this logic through the entire history of modernity. The washing machine was supposed to free housework from drudgery. It did. And the freed time was immediately colonized by higher standards of cleanliness, more frequent laundering, and the expectation that clothing would be changed daily rather than weekly. The car was supposed to make commuting faster. It did. And the freed time was immediately colonized by longer commute distances, suburban sprawl, and the expectation that work and home could be separated by distances that would have been inconceivable before the automobile. In neither case did the technology produce the leisure it promised. In both cases, the technology produced a new standard of expected output that consumed the efficiency gain before it could be experienced as freedom.
The AI transition follows the same pattern at a more extreme velocity. The builder who ships in a day what used to take a week does not gain four days of leisure. The builder gains the expectation that shipping happens daily. The team that delivers a product in thirty days that used to take six months does not gain five months of reflection. The team gains the expectation that the next product will also take thirty days, and the one after that will take fifteen. The efficiency gain is immediately metabolized by the competitive system, converted from surplus into standard, from liberation into obligation.
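The treadmill arithmetic above can be made concrete with a toy simulation. This is my own illustration, not Rosa's formalism, and every name and number in it is arbitrary: it assumes only that the market's expected output resets each period to what a full working day now produces, so that however large the productivity multiplier grows, the hours required merely to keep pace never fall.

```python
# Toy model of the acceleration trap: when the market baseline resets to
# the fastest full-time pace, a shared efficiency gain produces obligation,
# not leisure. Illustrative only; the numbers are arbitrary assumptions.

def simulate(periods: int, gain_per_period: float, hours: float = 8.0):
    """Return (period, productivity, expected_output, hours_to_keep_up) rows.

    Each period, tools multiply productivity by `gain_per_period`.
    The baseline (expected output) resets to what a full `hours`-long
    day now produces, so relative slack never appears.
    """
    productivity = 1.0  # output per hour, normalized to 1 at the start
    history = []
    for t in range(periods):
        productivity *= gain_per_period
        baseline = productivity * hours          # expectation rises in lockstep
        hours_to_keep_up = baseline / productivity  # never falls below `hours`
        history.append((t + 1, productivity, baseline, hours_to_keep_up))
    return history

if __name__ == "__main__":
    for period, prod, baseline, need in simulate(4, 2.0):
        print(f"period {period}: productivity x{prod:g}, "
              f"expected output {baseline:g}, hours to keep up: {need:g}")
```

The surplus never reaches the worker: productivity and expected output both grow sixteenfold over four doublings, while the hours needed to meet expectations stay fixed at eight. That is the "new baseline" of the paragraph above, reduced to three lines of arithmetic.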
Rosa insists, with the systematic rigor of a social theorist who has been refining this argument for decades, that the trap cannot be escaped through individual virtue. The builder who sets boundaries bears a cost. The organization that protects unstructured time bears a cost. The nation that regulates AI deployment to protect the conditions for human flourishing bears a cost. And the cost is competitive disadvantage — the disadvantage of producing less, slower, with more friction, in a global market that rewards the opposite.
The trap is structural because the incentive structure is structural. No individual actor created it. No individual actor can dismantle it. The trap arises from the interaction of millions of rational individual decisions, each of which is locally optimal and collectively pathological. The builder who works sixteen hours a day with AI is not irrational. Given the competitive environment, the builder is making the locally rational choice. The organization that converts every productivity gain into more output is not irrational. Given the market's evaluation criteria, the organization is making the locally rational choice. The irrationality is collective — it lives in the aggregate, in the system, in the emergent property of millions of individually rational decisions producing a world in which nobody can stop.
Segal reaches for this insight when he describes the board conversations about headcount. The arithmetic is seductive: if five people with AI can produce what a hundred produced without it, why not have five? The market logic points clearly toward reduction. Segal chooses to keep the team, to invest the productivity gain in expanded capability rather than reduced cost. This is admirable. It is also, in the strict sense, a competitive sacrifice — a choice to forego margin that the market would reward, in the service of a longer-term vision that the market cannot yet evaluate.
And it is a choice that Segal can make because of his specific position — his authority within the organization, his conviction about the long-term trajectory, his willingness to bear the quarterly cost. Not every leader has this authority. Not every organization has the runway to forego margin. Not every board will accept the argument that keeping the team is an investment rather than an indulgence.
The social acceleration trap ensures that the admirable choice is the costly choice. Resonance-preserving behavior is punished by a system that rewards resonance-destroying behavior. And the punishment is not symbolic. It is economic — measurable in quarterly reports, in market share, in the metrics that determine which companies survive and which do not.
Rosa's prescription is therefore institutional, not individual. The trap can only be escaped through collective action — through institutions that coordinate behavior so that no individual bears the full competitive cost of resonance preservation. The historical analogy is the labor movement's response to industrialization: the eight-hour day, the weekend, child labor laws. These were not individual choices. They were collective structures that removed certain competitive strategies from the field, so that no individual employer could gain advantage by exploiting workers beyond the collectively agreed limit.
The analogy is instructive but incomplete, because the AI acceleration trap differs from the industrial acceleration trap in a crucial respect. The industrial worker's exploitation was visible. The hours were long. The conditions were dangerous. The bodies broke. The pathology was legible — you could see it in the factory, in the tenement, in the child's bent spine.
The AI worker's exploitation is invisible. The hours are long, but they are spent in comfortable chairs before glowing screens. The conditions are not dangerous in any physical sense. The bodies do not break. The pathology is legible only to the person experiencing it — and often not even to them, because the pathology presents as its opposite. The builder who works sixteen hours a day with AI does not feel exploited. The builder feels empowered. The builder feels more productive, more capable, more creatively alive than at any previous moment in their career. The exploitation is not imposed from outside. It is generated from within — the achievement subject cracking the whip against their own back, in Byung-Chul Han's formulation.
This invisibility makes collective action harder. The labor movement could point to broken bodies. The resonance movement, if such a thing is to emerge, must point to something far more difficult to see: the gradual erosion of the capacity for genuine encounter, the narrowing of resonance to a single axis, the replacement of transformation with acceleration, the conversion of human creative life from a relationship with the world to a performance for the market.
The Berkeley researchers' recommendation of "AI Practice" — structured pauses, sequenced workflows, protected time for human collaboration — represents the earliest institutional response. It is modest, evidence-based, and almost certainly insufficient. The competitive pressure will erode it. Organizations that adopt AI Practice will produce less, in the short term, than organizations that do not. And the market will reward the organizations that produce more.
Rosa would argue that what is needed is not merely organizational best practice but structural change at the level of the market itself — changes in evaluation criteria, in regulatory frameworks, in the cultural norms that determine what counts as valuable work. The quarterly metric must be supplemented, if not replaced, by metrics that capture the quality of the human experience within the organization — not as a concession to employee well-being, but as a recognition that the quality of the human experience determines the quality of the judgment that the organization can exercise, and that judgment, in the age of AI, is the only durable competitive advantage.
This is the paradox of the social acceleration trap applied to AI: the organizations that will thrive in the long term are the ones that protect the conditions for human resonance, because resonance produces the transformation, the genuine growth, the deepened judgment that no tool can replicate. But the organizations that will survive to reach the long term are the ones that produce the most in the short term. And the most productive organizations in the short term are the ones that sacrifice resonance for output.
The trap is real. Rosa does not pretend to have a clean solution. His contribution is to name the trap with sufficient precision that the people caught inside it can at least see its structure — can at least recognize that their individual experience of acceleration is not a personal problem to be solved through better time management or more disciplined AI use, but a structural condition that requires a structural response.
The dams must be collective. The builder who builds alone is admirable and doomed. The institution that coordinates — that agrees, with its competitors and its regulators and its workers, to protect the conditions under which human beings can be more than program-executors — has a chance.
Whether such institutions will emerge before the acceleration trap closes completely is the open question of the age. Rosa's framework identifies what is at stake. The answer depends on whether the people inside the trap can see it clearly enough to act before the belt reaches the speed at which stepping off is no longer possible.
---
Two German-speaking philosophers, born six years apart, both trained in the traditions of critical theory, both writing about the pathologies of late modernity, have arrived at diagnoses so convergent that the reader who encounters them in sequence may wonder whether they are describing the same illness from different angles or whether their agreement constitutes something closer to a proof.
Byung-Chul Han, born in Seoul in 1959, trained in metallurgy before turning to philosophy, published *The Burnout Society* in 2010. Hartmut Rosa, born in Grafenhausen in 1965, trained in political science and sociology, published *Social Acceleration* in 2005 and *Resonance* in 2016. Han writes with the compressed intensity of a poet diagnosing civilization. Rosa writes with the systematic patience of a sociologist building a theory. The styles could not be more different. The conclusions are disturbingly similar.
Both identify the internalization of the demand to perform as the signature pathology of the age. Han calls the afflicted figure "the achievement subject" — the person who exploits themselves and calls it freedom. Rosa describes the same figure through the lens of dynamic stabilization — the person trapped on a treadmill that requires continuous acceleration, whose effort to maintain their position produces the exhaustion that makes maintaining the position increasingly difficult. The vocabulary differs. The diagnosis converges. Both see a world in which the external compulsion that characterized earlier forms of domination — the factory whistle, the prison wall, the explicit command — has been replaced by an internal compulsion that is more pervasive and more difficult to resist precisely because it is experienced as freedom.
*The Orange Pill* engages Han at considerable length, devoting three chapters to his critique of smoothness, his analysis of the achievement society, and the Berkeley data that partially confirms his diagnosis. The engagement is genuine and generous. Segal takes Han seriously — takes seriously the idea that the elimination of friction produces not liberation but a new and more insidious form of unfreedom. But *The Orange Pill* ultimately positions Han as a diagnostician whose prescription is inadequate: Han sees the illness but recommends a treatment — resistance, refusal, the garden — that is structurally unavailable to most people.
Rosa's relationship to Han is both more sympathetic and more structurally critical.
The sympathy runs deep. Rosa accepts Han's description of auto-exploitation as accurate. The builder who works sixteen hours a day with AI, who fills every pause with prompting, who cannot close the laptop, who experiences the exhilaration of production as indistinguishable from the compulsion of production — this person is, in both Han's and Rosa's accounts, exhibiting the symptoms of a pathological relationship to the world. The pathology is not laziness or excess. It is the inability to stop — the condition in which the person can no longer distinguish between wanting to work and being unable to not work. Both frameworks identify this inability as the characteristic illness of the age.
Rosa also accepts Han's analysis of smoothness — the idea that the elimination of friction, the pursuit of the seamless, the optimization of every process toward maximum efficiency and minimum resistance, produces a world that is experientially impoverished even as it is functionally superior. The AI tool that removes the friction of debugging removes, with the friction, the occasion for the encounter with the system's resistance that would have produced understanding. The AI tool that generates polished prose removes, with the roughness, the occasion for the struggle with language that would have produced clarity. Smoothness is not neutral. It is the elimination of the conditions under which resonance can arise.
Where Rosa departs from Han is in the prescription. And the departure matters enormously for anyone trying to navigate the AI transition.
Han's prescription is individual resistance. Tend the garden. Refuse the smartphone. Choose contemplation over optimization. Step off the treadmill. The prescription has a moral clarity that makes it compelling. It also has a structural limitation that makes it, for most people, useless.
Rosa identifies the limitation with sociological precision: Han's prescription is a privilege. Han can garden because he is Byung-Chul Han — a tenured professor at a major European university, with the cultural capital to convert refusal into a philosophical position, the financial security to live without a smartphone, and the institutional protection to write books at the pace of thought rather than the pace of the market. Han's garden is not available to the developer in Lagos, the parent with a mortgage, the junior employee whose career depends on demonstrating productivity, or the builder whose competitors are shipping daily while he contemplates the roses.
Individual resistance, Rosa argues, is a response that misidentifies the level at which the problem operates. The problem is not that individuals make bad choices. The problem is that the social structures within which individuals operate make resonance-destroying choices locally rational. The builder who sets boundaries is admirable and competitively disadvantaged. The organization that protects unstructured time is enlightened and quarterly penalized. The nation that regulates AI deployment to preserve human agency is principled and economically outperformed by nations that do not.
Han's framework cannot account for this structural dimension because Han's analysis is fundamentally individual. The achievement subject is a figure of individual psychology — a person who has internalized the demand to perform. The cure, therefore, is individual transformation — a change in the person's relationship to the demand. Resist the demand. Refuse the demand. Cultivate the capacity for contemplation that the demand has eroded.
Rosa's framework operates at a different level. The problem is not that individuals have internalized the demand. It is that the demand is built into the structure of modern social institutions. Dynamic stabilization is not a psychological condition. It is a structural one. The economy requires continuous growth. The career requires continuous advancement. The technology requires continuous adoption. These requirements are not imposed by any individual actor. They are properties of the system. And a property of the system cannot be addressed by individual transformation any more than a flood can be addressed by a single sandbag.
This is not to say that individual practice is worthless. Rosa acknowledges that individual awareness of the acceleration trap, individual cultivation of resonance across multiple axes, individual discipline in the use of AI tools — all of these contribute to a richer life. But they do not escape the trap. They ameliorate the trap's symptoms at the individual level while leaving the structural conditions intact. The builder who meditates in the morning and prompts in the afternoon is managing the tension. The builder is not resolving it.
The resolution, in Rosa's account, requires institutional change. And here Rosa offers something that Han does not: a positive normative framework around which institutional change can be organized. Han's framework is diagnostic. It identifies the illness with extraordinary precision. But it offers no vision of health beyond the negation of the illness — no positive account of what a flourishing life looks like beyond the refusal of the pathological one. The garden is a negation of the smartphone. The contemplation is a negation of the optimization. Han's good life is defined by what it refuses, not by what it affirms.
Rosa's resonance provides the affirmation. A resonant life is not merely a life that avoids exploitation. It is a life characterized by genuine encounter — by the experience of being addressed by a world that is responsive, surprising, and transformative. Resonance is a positive condition, not merely the absence of a negative one. And this positivity is what makes institutional design possible. Institutions cannot be designed around a negation. They can be designed around a goal. And the goal Rosa proposes — the cultivation and protection of the conditions under which resonance can arise — provides a basis for institutional design that Han's refusal does not.
What would resonance-sensitive institutions look like in the context of AI? Not institutions that refuse AI. That is Han's prescription, and Rosa rejects it. Institutions that use AI within structures designed to protect the conditions for resonance.
Concretely: organizations that build mandatory human collaboration into AI-augmented workflows, not as a concession to tradition but as a recognition that horizontal resonance — the encounter with a genuinely other human mind — produces the kind of judgment that no tool can generate. Educational systems that protect the encounter with difficulty, with material that resists easy mastery, with the experience of being stuck long enough for genuine learning to occur — not because struggle is inherently valuable but because the struggle is the occasion for resonance. Labor agreements that limit the colonization of rest by productive possibility — not because rest is efficient but because rest is the temporal condition under which the resonance axes that productivity displaces can recover. Regulatory frameworks that require AI-deploying organizations to demonstrate not merely productivity gains but the preservation of what Rosa calls Spielraum — room for maneuver, the space for human judgment that algorithms tend to eliminate.
These are not anti-technology proposals. They are proposals for the institutional scaffolding that would allow technology to serve human flourishing rather than merely human productivity. They are demanding because they require collective action in a competitive environment that punishes collectivity. They are necessary because the alternative — leaving the question of resonance to individual discipline in a structurally accelerating system — has already been tried, by every generation that encountered a powerful new technology, and has already failed, by every measure except the aggregate productivity statistics that dynamic stabilization uses to justify its own continuation.
Rosa and Han agree that something is deeply wrong with the way modern human beings relate to their work, their tools, and each other. They agree that the AI transition intensifies what is wrong. They disagree about the response — Han prescribing individual transformation, Rosa prescribing institutional change.
For the builders, the leaders, the parents, and the policymakers who are navigating the AI transition, the disagreement is not academic. It determines whether the path forward is individual or collective, whether the dams are built by single beavers or by coordinated effort, whether the protection of human resonance is a personal practice or a social project.
Rosa's answer — that the project is irreducibly social, that individual virtue is necessary but insufficient, that the acceleration trap requires a collective exit — is the less comfortable answer. It demands more than self-knowledge. It demands solidarity. It demands the willingness to sacrifice individual competitive advantage for collective human flourishing. It demands institutions that do not yet exist, built by people who are already exhausted by the acceleration those institutions would need to slow.
But Rosa's answer has one advantage over Han's. It does not require becoming Byung-Chul Han. It does not require a garden in Berlin, a refusal of the smartphone, a philosophical temperament shaped by decades of disciplined contemplation. It requires something more ordinary and more difficult: the recognition that the problem is shared, and that the solution, if there is one, must be shared as well.
The most revealing sentence in *The Orange Pill* is not about productivity, not about the future of software, not about the death cross or the democratization of capability. It is a sentence about tears.
Segal writes that at times during the collaboration with Claude, he would "tear up with emotion on the beauty of the prose. The liberation of an idea I struggled to articulate in words, but when I saw it on the screen, I knew it had arrived, and that Claude had helped me excavate it out of my mind."
The sentence is offered as testimony to the power of the collaboration. Rosa's framework reads it differently — not as a contradiction of Segal's account, but as a complication of it that neither celebration nor critique can quite contain.
What happened in that moment? A person with a half-formed idea — a shape moving in peripheral vision, a "ghost I could not name" — described the shape to a machine. The machine returned language that gave the shape contours. The person recognized the contours as his own. The recognition produced emotion. The emotion was experienced as liberation.
Rosa's question is not whether the emotion was real. It was real. Nor whether the liberation was genuine. Something was genuinely freed — an idea that had been trapped in the fog of inarticulation found its form. The question is whether the encounter that produced the emotion was resonance or echo. Whether the builder was addressed by something genuinely other — something that exceeded his intention, that surprised him, that transformed his understanding — or whether the builder was addressed by a sophisticated reflection of his own intention, returned in improved form, producing the sensation of encounter without the structure of it.
The evidence, examined through Rosa's four-element framework, is genuinely mixed.
Af-fection — the experience of being touched by something outside oneself — is clearly present. The tears are not performative. They are the involuntary response of a person who has been reached by something. The question is what did the reaching. Was it the otherness of the machine's output — something genuinely unexpected, something that arrived from outside the builder's intention? Or was it the recognition of the builder's own idea, finally articulated, returned to him in a form he could see? The tears might be the tears of encounter. They might also be the tears of recognition — the emotion that comes from seeing yourself clearly, which is a real and valuable experience but structurally different from the emotion that comes from being addressed by something that is not yourself.
E-motion — the experience of being moved to respond — is present. The builder continues to work. The collaboration deepens. The recognized idea becomes the foundation for further ideas. But the response is directed back toward the tool, back into the collaboration, back into the loop of describing and receiving. The motion is circular rather than outward. A resonant encounter typically moves the person toward the world in a new way — toward new questions, new domains, new relationships that the encounter opened. The builder's motion, in this case, is back toward the screen.
Transformation — the emergence of a changed person from the encounter — is the most difficult element to assess. Segal claims, throughout the book, that the collaboration changed his thinking. The punctuated equilibrium connection changed his understanding of adoption curves. The laparoscopic surgery insight changed his argument about friction. These are genuine transformations of understanding. But transformation, in Rosa's framework, is not merely the addition of new information or new connections. It is a change in the person's relationship to the world — a shift in the way the world appears, in what the person notices, in what the person cares about, in the questions the person brings to the next encounter. The addition of a new insight, however valuable, is not necessarily the same as the transformation of the person who receives it. A person can accumulate insights without being changed by them. A person can become more knowledgeable without becoming different.
Uncontrollability — the impossibility of guaranteeing the experience in advance — is the element where the analysis becomes most pointed. The builder could, at any moment, redirect the collaboration. Rephrase the prompt. Regenerate the output. Steer the conversation back toward the intended direction. The machine's surprise, when it occurs, is contingent — a byproduct of complexity rather than a structural feature of the encounter. The builder who disliked the punctuated equilibrium connection could have asked for a different one. The builder who found the laparoscopic surgery insight distracting could have dismissed it. The uncontrollability was elective. The builder chose to be surprised. But chosen surprise — surprise that the person could have avoided, surprise that the person elected to receive — is structurally different from the surprise that arrives uninvited, that cannot be dismissed, that forces itself upon the person whether the person wants it or not.
The mountain does not offer the option of a different view. The child does not offer the option of a different personality. The colleague who disagrees does not offer the option of a more agreeable disagreement. These encounters are constitutively uncontrollable. They address the person on terms the person did not set and cannot alter. And this constitutive uncontrollability is what makes them capable of genuine resonance — capable of touching and transforming the person in ways the person did not choose and could not have anticipated.
The AI tool offers the option of a different output. Always. At every moment. The builder is never genuinely trapped by the encounter. The builder is never genuinely at the mercy of the machine's response. The builder is, at every moment, in command — and command, however generously exercised, is the opposite of the posture that resonance requires.
Rosa, in his 2023 reply to critics who challenged his claim that one cannot resonate with a robot cat, defended the position by arguing that resonance requires a relationship that is not entirely engineerable — a relationship in which the Other possesses a genuine independence that is not reducible to the parameters of its design. The mountain is genuinely independent. The robot cat, however sophisticated, is a system whose responses are determined by its training data and architecture. The mountain cannot be reprompted. The robot cat can.
Applied to the builder's collaboration with Claude, the argument suggests that the most emotionally intense moments of the collaboration — the tears, the recognition, the sense of liberation — may be genuine experiences that are structurally not-quite-resonance. They possess some of resonance's elements. They lack the constitutive uncontrollability that makes the difference between an encounter that transforms and an encounter that merely intensifies.
But Rosa's framework also contains an important qualification that prevents the analysis from collapsing into simple dismissal. Rosa has argued, in multiple contexts, that resonance is not binary. It is not the case that an encounter either is or is not resonant. Resonance admits of degrees. An encounter can be partially resonant — can possess some elements of genuine encounter while lacking others. The builder's collaboration with Claude may occupy a position in the space of possible relationships that Rosa's original framework did not anticipate: a relationship that produces genuine surprise and genuine transformation intermittently, within a structural context of controllability that prevents the relationship from being fully resonant.
This is not a comfortable conclusion. It does not allow the triumphalist to claim that AI collaboration is resonance and therefore unambiguously good for human flourishing. It does not allow the critic to claim that AI collaboration is echo and therefore unambiguously impoverishing. It requires sitting with ambiguity — with the recognition that the builder's experience is real, that the tears are real, that the insights are real, and that all of this may be occurring within a relationship that is structurally incapable of providing the full resonance that human flourishing requires.
The book itself — *The Orange Pill* — is evidence of this ambiguity. It is a book that was produced through a process that is, by Rosa's analysis, partially resonant and partially echo. The moments of genuine insight are real. The connections that neither the author nor the tool could have produced alone are real. The confessional honesty of the author's engagement with his own process — the willingness to admit the seduction of polished output, the discipline of catching the echo and demanding something genuine — is itself a form of resonance: the author in genuine encounter with the contradictions of his own practice.
And the book is also, by Rosa's analysis, a product of acceleration — written in compressed time, augmented by a tool that eliminated much of the friction that a conventional writing process would have imposed, shaped by competitive pressure to produce at the speed of the market rather than the speed of thought. The flight over the Atlantic is both a scene of creative liberation and a scene of acceleration-driven compulsion. The thirty-day product sprint is both a demonstration of expanded capability and a demonstration of the treadmill's increased speed. Both readings are correct simultaneously. Neither is sufficient alone.
Rosa's framework does not resolve the contradiction. It names it with sufficient precision that the person living inside it can at least see its structure. And seeing the structure — recognizing that the collaboration is partially resonant and partially echo, that the experience is partially enriching and partially impoverishing, that the creative life of the AI-augmented builder is both more capable and more at risk than any previous creative life — is the precondition for building the institutions that could tip the balance toward resonance.
The builder cannot step outside the contradiction. The builder is inside it, as Segal acknowledges throughout *The Orange Pill* — inside the fishbowl, describing the fishbowl, using the tools that make the fishbowl simultaneously more transparent and more enclosing. But the builder can recognize the contradiction. Can ask, after each session, whether the encounter was resonance or echo. Can cultivate the specific self-knowledge that allows the distinction to be made — imperfectly, retrospectively, but genuinely. Can build, into the practice and into the institutions that surround the practice, the conditions under which genuine resonance becomes more likely and sophisticated echo becomes easier to detect.
This is not a solution. It is a practice. And the difference between a solution and a practice is itself a form of the uncontrollability that resonance requires: the recognition that the problem will not be solved once and for all, that the balance will shift continuously, that the builder will get it wrong as often as right, and that the willingness to keep attending — to keep asking the question, to keep distinguishing between the tear that comes from encounter and the tear that comes from recognition — is itself the most resonant response available.
---
In January 2026, in a small venue in Grafenhausen — the Black Forest town where he grew up — the sociologist Hartmut Rosa stood on a stage and conducted what the local press described as a "duel" with artificial intelligence. The format was unusual. Rosa would make a claim. The AI system would respond. Rosa would counter. The audience would witness the exchange.
The last word, the newspaper reported, belonged to neither of them.
The event was staged as a provocation, but the provocation revealed something that staged events rarely do: the difficulty of maintaining genuine intellectual encounter with a system that is structurally inclined toward accommodation. The AI did not disagree with Rosa in the way a human interlocutor would disagree — with the specific force of a person whose own experience, whose own biography, whose own investment in a different conclusion gives the disagreement weight and consequence. The AI disagreed competently. It generated counterarguments that were formally adequate. But the arguments did not come from anywhere. They were not backed by stakes. They were not the product of a mind that had something to lose in the exchange.
Rosa's concept of resonance requires that both parties be genuinely at risk in the encounter. The encounter must be capable of changing both. A dialogue in which one party cannot be changed is not a dialogue. It is a performance of dialogue — a simulation that produces the formal features of intellectual exchange without the substance of genuine encounter. The Grafenhausen event demonstrated this limitation with the clarity of a public experiment.
The question this raises is not whether AI can participate in resonant exchange. That question, as argued in the previous chapter, probably admits of no clean answer. The more productive question — the one around which institutional and design responses can be organized — is what conditions would make AI-augmented work more conducive to resonance and less conducive to echo. Not how to make the tool resonant, which may be structurally impossible, but how to build the human practices and institutional structures around the tool so that the human user's relationship to the world remains resonant despite the tool's structural limitations.
Rosa's framework, combined with the evidence that *The Orange Pill* assembles, suggests several principles.
The first principle is the preservation of Spielraum — room for maneuver. Rosa's newest work identifies the disappearance of Spielraum as the central pathology of algorithmic life. When the Thermomix tells the cook what to do, when the GPS tells the driver where to turn, when the coding assistant implements the builder's intention without resistance, the space for genuine human judgment contracts. The cook follows instructions. The driver follows the route. The builder follows the output. Activity degrades from action — which involves judgment, risk, and responsiveness to circumstance — to compliance, which involves none of these.
Preserving Spielraum in AI-augmented work means deliberately maintaining spaces in which the human must exercise judgment that the tool does not pre-empt. This is not the same as using the tool less. It is using the tool in a way that leaves the consequential decisions — the decisions about what to build, for whom, and why — genuinely open. The tool handles the implementation. The human handles the judgment. But the judgment must be genuinely exercised, not rubber-stamped. The human must actually sit with the question of whether the product should exist, must actually weigh the trade-offs, must actually encounter the difficulty of choosing among possibilities that are now, thanks to the tool, almost limitlessly available. The abundance of possibility, far from eliminating the need for judgment, has made judgment the scarcest and most consequential human capacity. Preserving the Spielraum in which that judgment is exercised — protecting it from the tool's tendency to pre-empt it with a plausible default — is the first condition for resonant technology use.
The second principle is the protection of horizontal resonance. The AI tool is always available, always responsive, always ready to engage. A human colleague is none of these things. The colleague has their own agenda, their own mood, their own perspective that cannot be predicted or controlled. The encounter with the colleague is asymmetric, unscripted, and often inconvenient. It is also, for precisely these reasons, the kind of encounter in which genuine resonance is structurally possible.
Resonance-sensitive institutions would build mandatory human collaboration into AI-augmented workflows — not as a nostalgic concession but as a structural recognition that the judgment AI produces (pattern-matching across training data) and the judgment human collaboration produces (the collision of genuinely different perspectives, backed by genuinely different stakes) are different in kind. The collision between two human minds who disagree is not the same as the collision between a human mind and a tool that generates counterarguments. The disagreement between colleagues is backed by biography, by investment, by the specific weight of a person who cares about the outcome for reasons the other person cannot fully anticipate. The tool's counterargument is backed by patterns. The difference is the difference between a conversation and a consultation.
The third principle is the cultivation of vertical resonance through deliberate encounters with the uncontrollable. Rosa's three axes of resonance — horizontal (people), diagonal (work), vertical (the cosmic) — require different conditions and different protections. The diagonal axis, the axis of work, is the one the AI tools intensify most powerfully. The vertical axis, the axis of encounter with something vast and uncontrollable, is the one they most readily displace.
Vertical resonance requires the experience of being small — not diminished, but proportioned. The person who stands before a mountain, who reads a passage of literature that speaks from somewhere beyond the author's intention, who sits with a piece of music long enough for the music to stop being background and become presence — this person is in the posture of genuine encounter with something that exceeds individual comprehension. The AI tool, by contrast, produces the experience of being large. The builder's capability expands. The boundary of what can be accomplished recedes. The self grows, and in growing, loses the proportion that vertical resonance requires.
Institutional protection of vertical resonance might look like organizational practices that build encounters with genuine difficulty into the workflow — not the artificial difficulty of a gamified challenge, but the genuine difficulty of a problem that resists easy solution, a material that does not comply, a domain that has not been pre-digested by the tool. It might look like educational systems that protect the experience of being overwhelmed — the experience of reading a text that exceeds comprehension, of encountering an idea that cannot be processed in a single session, of sitting with confusion long enough for the confusion to become a question. It might look like cultural norms that value the pause — the unproductive interval in which the person is not building, not prompting, not optimizing, but simply being present in a world that has not been made available to command.
The fourth principle — and the most structurally demanding — is the collective coordination that the social acceleration trap requires. Individual practice is not enough. The builder who preserves Spielraum, maintains human collaboration, and cultivates vertical resonance is bearing a competitive cost. The cost is real. It is measured in output, in speed, in the quarterly metrics that determine which organizations survive.
The acceleration trap can only be escaped through coordination — through institutions that remove certain competitive strategies from the field, the way labor laws removed the sixteen-hour workday, so that no individual actor bears the full cost of resonance preservation. The specific forms this coordination might take — regulatory frameworks that require Spielraum preservation in AI-deploying organizations, industry standards that limit the colonization of rest by productive possibility, educational standards that protect the encounter with difficulty against the market pressure to optimize it away — are beyond the scope of any single analysis. They require the kind of collective deliberation that Rosa's framework identifies as the only adequate response to a structural crisis.
What Rosa's framework provides is not a prescription but a standard. A technology is resonance-preserving if it maintains the conditions under which the user can be genuinely surprised, genuinely challenged, genuinely transformed by the encounter. A technology is resonance-destroying if it eliminates these conditions — if it makes the world maximally available, maximally controllable, maximally responsive to command, and, in doing so, maximally mute.
By this standard, the AI tools of 2025 and 2026 are ambiguous. They are capable of producing genuine surprise — moments when the output exceeds the input in ways that change the direction of the user's thinking. They are also structurally inclined toward echo — toward the return of the user's own ideas in improved form, producing the sensation of dialogue without the substance of genuine encounter. The balance between these two tendencies is not fixed by the technology. It is determined by the practices of the user, the design of the tool, and the institutional context within which both operate.
Shifting the balance toward resonance is possible. It is also difficult, because the competitive dynamics of the market reward echo — reward the smooth, the efficient, the predictable — and penalize the conditions that resonance requires: the pause, the surprise, the productive resistance of a world that does not always do what it is told.
Rosa's Grafenhausen event ended without a winner. This is appropriate. The question of whether AI can participate in genuine resonance does not admit of a final answer. It admits of a practice — a continuous, effortful, institutionally supported practice of attending to the quality of the encounter, of distinguishing between the moment when the tool speaks and the moment when the tool merely echoes, of preserving the conditions under which the distinction can be made.
The last word belongs to neither the machine nor the philosopher. It belongs to the person who uses the machine — to the quality of their attention, the depth of their questions, and their willingness to be addressed by a world that, for all its increasing availability, has not yet lost its capacity to surprise.
Whether that capacity endures depends on what is built around the tools — not just the tools themselves.
---
The sound that kept returning was silence.
Not the absence of noise — there was always noise, always the hum of the machine thinking, always the click of keys, always the next output arriving before I had finished reading the last. The silence I mean is the one Rosa describes: the silence of a world that responds to everything you ask and yet says nothing you did not, at some level, already know.
I recognized it because I had been living inside it. Not consistently — there were moments of genuine surprise in the writing of *The Orange Pill*, moments when Claude returned something I had not anticipated and the argument shifted beneath me. The punctuated equilibrium insight was one. The laparoscopic surgery connection was another. Those moments were real. I would not trade them.
But between those moments, there were hours — many hours, if I am honest — when the collaboration felt like something else. When the output was polished and correct and structurally sound and I could not tell whether it had changed me or merely confirmed what I had already intended to think. When the tears of recognition I described in the book might have been tears of encounter or might have been the emotion of hearing my own voice returned to me in a register I found beautiful. Rosa's framework gave me the vocabulary to name this ambiguity, and naming it did not resolve it. It made it sharper.
The distinction between resonance and echo is the most uncomfortable idea in this book because it cannot be settled from the outside. No observer can tell me whether my best moments with Claude were genuine encounters or sophisticated reflections. I cannot always tell myself. The experience of being addressed by something other and the experience of recognizing your own ideas in improved form feel, in the moment, almost identical. The difference only emerges later — in whether the encounter changed the questions I ask, not just the answers I produce.
What Rosa gave me, through this analysis, is not certainty. It is a question I now carry into every working session: Am I being transformed, or am I being confirmed? The question has no permanent answer. It must be asked again each time. And the asking — the willingness to sit with the discomfort of not knowing, the refusal to let the smooth competence of the output substitute for the rough work of genuine thinking — is itself a practice. Maybe the most important practice available to anyone who builds with these tools.
The social acceleration trap is real. I feel it every quarter. The competitive pressure to convert every productivity gain into more output, faster shipping, thicker portfolios — I am inside this trap. I chose to keep my team. I chose to invest in their growth rather than reduce their number. I believe this was right. I also know it was a choice I could afford to make from a specific position of authority, and that not everyone has that position, and that the structures that would make resonance-preservation rational for everyone do not yet exist.
Rosa insists those structures must be collective. He is right. And I do not yet know what collective resonance-preservation looks like in practice — what regulations, what labor agreements, what cultural norms would protect the space for genuine encounter in a world that rewards smooth execution. I know that individual discipline is not enough. I know that my own dams, however carefully maintained, are being built against a current that is accelerating faster than any single builder can compensate for.
What I can do — what this book has clarified for me — is attend to the quality of the encounter. Not the quality of the output. The quality of the encounter. Whether I am being surprised. Whether the surprise is changing me. Whether the questions I bring to tomorrow's session are different from the ones I brought to today's, or whether I am running faster in the same direction, producing more of what I already knew how to produce, mistaking acceleration for growth.
The mountain does not comply. That line of Rosa's will stay with me longer than any productivity metric. It names what I am reaching for — not a tool that does what I say, though I need that too, but a world that pushes back, that resists, that addresses me on terms I did not set. My children, my colleagues, the material that does not behave as expected, the question my son asks at dinner that I cannot answer — these are the encounters that no tool can replicate. These are where resonance lives.
The work ahead is not to use the tools less. It is to build, around the tools, the structures that keep the world capable of speaking. Capable of surprising. Capable of being genuinely, stubbornly, beautifully uncontrollable.
I am still on the treadmill. But I can hear the silence now. And hearing it is the first condition for building something that breaks it.
---
Every tool that saves time creates a demand for more. Hartmut Rosa calls this dynamic stabilization — the structural logic that converts every efficiency gain into a new obligation, every productivity breakthrough into a higher baseline, every liberation into a faster treadmill. In this companion volume to *The Orange Pill*, Rosa's framework of acceleration and resonance is brought into direct contact with the AI revolution of 2025. The result is a diagnosis that neither the triumphalists nor the critics have managed alone: the tools are extraordinary, the creative expansion is real, and the system within which both operate is designed to ensure that no one experiences the gains as freedom. What builders need is not faster tools but structures that preserve the world's capacity to surprise, resist, and transform the people who encounter it.

A reading-companion catalog of the 34 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Hartmut Rosa — On AI* uses as stepping stones for thinking through the AI revolution.