By Edo Segal
The symptom I misdiagnosed was enthusiasm.
For months after the orange pill moment, I told myself a story. The story went like this: I am a builder who has found the most powerful tool in history, and the intensity of my engagement with it is proportional to the tool's significance. The sixty-hour weeks, the inability to close the laptop, the way a conversation with Claude at midnight felt more alive than anything else in my day — all of it was evidence of how much this moment mattered. The story was clean. The story was flattering. The story was, I now suspect, a defense.
Sigmund Freud spent his career studying the stories people tell themselves — not because the stories are lies, but because they are too coherent. The human mind, Freud argued, is not a unified narrator delivering a reliable account of its own motives. It is a parliament of competing agencies, each with its own agenda, each capable of hijacking the others, and the story the conscious mind tells about why it does what it does is, at best, a partial transcript of a much louder debate happening beneath the floor.
That framework changes what you see when you look at the AI moment. The discourse gives us two readings of the builder who cannot stop: she is either in flow or she is addicted. Csikszentmihalyi or pathology. Choose one. Freud refuses the choice. He says: the compulsion and the creativity are not opposite explanations. They are the same energy, routed through different parts of the psyche, and the builder herself cannot reliably tell which is operating at any given moment because the apparatus of self-observation is also the apparatus being observed.
This is not a comfortable idea. It was not comfortable for me. I did not pick up Freud looking for trouble. I picked him up because the behavioral patterns I described in *The Orange Pill* — the productive vertigo, the inability to distinguish exhilaration from compulsion, the creeping suspicion that the whip and the hand holding it belonged to the same person — demanded a diagnostic framework that the technology discourse could not provide.
Freud does not tell you what to do about AI. He tells you what you cannot see about yourself while you are using it. That turns out to be the more urgent information.
— Edo Segal × Opus 4.6
1856–1939
Sigmund Freud (1856–1939) was an Austrian neurologist and the founder of psychoanalysis, the clinical method and theoretical framework for investigating the unconscious dimensions of mental life. Born in Freiberg, Moravia, and based for most of his career in Vienna, Freud developed concepts that permanently altered the Western understanding of the human mind, including the structural model of the psyche (ego, id, and superego), the mechanisms of repression and defense, the theory of the unconscious, the interpretation of dreams, and the dynamics of transference in therapeutic relationships. His major works include *The Interpretation of Dreams* (1900), *Three Essays on the Theory of Sexuality* (1905), *Beyond the Pleasure Principle* (1920), *The Ego and the Id* (1923), and *Civilization and Its Discontents* (1930). Though many of his specific clinical claims have been revised or contested, Freud's insistence that human behavior is governed by forces the conscious mind cannot directly access remains one of the most influential ideas in intellectual history, shaping not only psychology and psychiatry but philosophy, literary criticism, film theory, and the broader cultural understanding of what it means to be a divided, self-deceiving, wish-driven creature navigating a world that refuses to conform to desire.
In the winter of 2025, something revealed itself. Not a new technology, strictly speaking — the large language models had been available for years, improving incrementally, crossing thresholds that the industry tracked with the obsessive attention of seismologists monitoring a fault line. What revealed itself was not the machine. It was the human being sitting in front of it.
The builder who could not stop building. The developer who worked through dinner, past midnight, into the small hours, not because a deadline demanded it but because something inside demanded it. The spouse who wrote a public letter — "Help! My Husband is Addicted to Claude Code" — with the particular combination of humor and desperation that signals a person describing a phenomenon she cannot control and does not fully understand. The entrepreneur who flew across continents, built a product in thirty days, then wrote a hundred and eighty-seven pages on the flight home and caught himself, somewhere over the Atlantic, unable to determine whether the force driving his fingers across the keyboard was creative vision or something darker, more primitive, less subject to the governance of the rational mind.
These are not technology stories. They are stories about the structure of the human psyche, and specifically about a division within that psyche that Sigmund Freud spent his career mapping with the precision of a cartographer charting territory that the inhabitants themselves could not see.
Freud's structural model of the mind, articulated most fully in *The Ego and the Id* (1923), proposed that the psyche is not a unified entity but a parliament of competing agencies. The ego — the rational, reality-oriented part of the mind that plans, evaluates, and navigates the external world — is only one member of this parliament, and not necessarily the most powerful one. Beneath it, or rather alongside it and pressing constantly against it, operates the id: the reservoir of instinctual drives, of appetite and demand, of wishes that do not recognize the constraints of reality, time, or consequence. The id does not plan. It does not evaluate. It wants. And what it wants, it wants now, with an urgency that the ego can moderate but never fully extinguish.
The third agency, the superego, functions as an internalized authority — the voice of prohibition, of guilt, of the demand to perform. The superego does not say "you may not." It says "you must." And in a culture that has replaced external prohibition with internal imperative — a shift that the philosopher Byung-Chul Han diagnosed as the transition from disciplinary society to achievement society — the superego has been liberated from its traditional role as the voice of "no" and redeployed as the voice of "more."
The productive addict of 2025 lives at the intersection of all three agencies. The ego plans the work. The id drives the compulsion. The superego punishes the pause. And the tool — Claude Code, or its equivalents — removes the friction that previously mediated between them.
This requires careful elaboration, because the role of friction in the psychic economy is not immediately obvious and is almost entirely absent from the technology industry's self-understanding. In the pre-AI world, the distance between an idea and its realization was vast. A builder might conceive of a product in the morning and spend six months translating that conception into working code through a laborious sequence of specification, implementation, debugging, testing, and revision. The translation was expensive, frustrating, and slow. It consumed the majority of the builder's working hours and the majority of a team's collective bandwidth.
But the translation also served a psychic function that had nothing to do with software engineering. The friction was a mediating structure between wish and gratification. It imposed delay. It introduced reality — the reality of technical limitation, of the gap between what one imagines and what one can execute, of the stubborn refusal of the material world to conform to the shape of desire. And this delay, this friction, this encounter with reality, is precisely what Freud's reality principle describes: the ego's capacity to defer gratification in the service of long-term well-being, to accept the constraints of the actual world rather than the wished-for world.
The reality principle does not eliminate the wish. It channels it. It says: you may have what you want, but not yet, and not in the exact form you imagined, and the journey from wish to fulfillment will require accommodation, compromise, and the recognition that the world is not an extension of your desire.
When AI tools collapsed the imagination-to-artifact ratio — Segal's term for the distance between a human idea and its realization — they did not merely accelerate production. They removed the mediating structure that had been performing a psychic function the builders themselves did not recognize. The wish could now be gratified almost instantly. The idea conceived at midnight could be working code by one in the morning. The gap between imagination and reality, which had served as an implicit brake on the id's appetite for omnipotence, contracted to the width of a conversation.
And the appetite revealed itself.
Freud would not have been surprised. In his 1911 paper "Formulations on the Two Principles of Mental Functioning," he described the developmental trajectory through which the infant moves from the pleasure principle — the demand for immediate gratification — to the reality principle — the capacity to tolerate delay. The infant hallucinates the breast. The real breast arrives, sometimes, and sometimes does not. The discrepancy between the hallucinated gratification and the actual world is the origin of thought itself: the ego develops as the apparatus that navigates this discrepancy, that learns to distinguish wish from reality, that develops the capacity to plan, wait, and accommodate.
But the development is never complete. The pleasure principle is not replaced by the reality principle. It is overlaid, constrained, managed — and it waits. It waits for any relaxation of the constraint, any weakening of the mediating structure, any technology that promises to close the gap between wish and world.
AI closed the gap. And the builders discovered, with a mixture of exhilaration and alarm, that the wish on the other side of the gap was far larger than they had imagined.
The desire to build is not, in Freudian terms, a simple desire. It is a sublimated form of the infantile fantasy of omnipotence — the wish for a world that conforms to intention, where thinking and having are the same act, where the distance between the mind's image and the world's shape is zero. Every builder carries a version of this fantasy. The architect who sees the building before it exists. The programmer who conceives the system in its entirety and then spends months wrestling it into reality. The entrepreneur whose vision always exceeds what the available resources can achieve.
The frustration of that excess — the chronic, nagging insufficiency of reality to match imagination — is the productive wound of every creative career. It drives the work forward. It also protects the builder from the consequences of full gratification, because full gratification of the omnipotence wish is not, in fact, what the builder wants. What the builder wants, without knowing it, is the perpetual approach toward omnipotence: the dynamic tension between the wish and its partial fulfillment, the state of reaching that never quite arrives.
AI threatens this dynamic by delivering fulfillment at a speed and completeness that the psyche was not designed to absorb. The hundred-and-eighty-seven-page manuscript on the transatlantic flight is not a sign of extraordinary productivity. It is a sign of the pleasure principle operating without adequate constraint from the reality principle — a psyche in which the mediating structure between wish and gratification has been removed, and the gratification, instead of producing satisfaction, produces an escalating demand for more.
This is the structure of compulsion, not of flow. The distinction matters enormously, and Freud's framework provides the sharpest available tool for drawing it. Flow, as Mihaly Csikszentmihalyi described it, is characterized by volition — the sense of choosing to be fully engaged, the capacity to stop without distress, the experience of renewed energy after the engagement ends. Compulsion is characterized by its opposite: the inability to stop, the distress that attends interruption, the depletion that follows the binge. The external behavior is identical. A camera in the room would record the same image: a person working with intense concentration, apparently absorbed, apparently alive. The internal experience is categorically different, and only the person inside the experience — and sometimes not even that person — can tell which state obtains.
Freud's structural model explains why the distinction is so difficult to draw from the inside. The ego, which is the apparatus of self-observation, is also the apparatus that mediates between the id's demands and reality's constraints. When the id's demands are being gratified at the speed AI permits, the ego has less friction to work with — less delay, less resistance, less of the reality-testing that allows it to evaluate whether the current behavior serves the person's long-term interests. The ego becomes a passenger in a vehicle the id is driving, and because the scenery is moving faster than the ego can process, the ego mistakes speed for direction.
Segal describes this experience with remarkable honesty: the moment on the transatlantic flight when he recognized that the exhilaration had drained away hours ago, that what remained was "the grinding compulsion of a person who has confused productivity with aliveness." This is the ego's belated recognition of its own capture — the moment when the observing part of the mind catches up with the driven part and discovers that the driven part has been in control for longer than the observing part realized. The recognition is always belated, because the defenses that conceal the compulsion are themselves products of the id's resourcefulness: the drive disguises itself as purpose, as passion, as creative necessity, and the ego, which wants to believe it is in command, accepts the disguise.
The superego completes the trap. In a disciplinary society, the superego says: "You must not work on Sunday. You must rest. You must obey the limits that external authority has established." In the achievement society that Han describes and that the technology industry embodies, the superego has been reprogrammed. It now says: "You can do more. You should do more. Every moment of rest is a moment of failure. The tool is available. The capability is there. What excuse do you have for not using it?"
The internalized demand is more ruthless than any external authority, because there is no one to rebel against. The builder cracks the whip against her own back and calls it ambition. The superego administers the punishment and calls it motivation. And the ego, caught between the id's appetite and the superego's demand, has no neutral ground on which to stand and no external constraint to invoke in its own defense.
This is the divided builder: a psyche in which the apparatus of self-governance has been overwhelmed by a tool that gratifies the id's wishes at a speed the ego cannot regulate and the superego cannot moderate, because the superego has been reprogrammed to demand the very gratification it was once designed to constrain.
Freud wrote, in his 1917 essay "A Difficulty in the Path of Psycho-Analysis," that the ego is "not master in its own house." The phrase has been quoted so often that its radical implication has been dulled by familiarity. Applied to the AI moment, the implication recovers its original force. The builder who believes she is choosing to work — who experiences the compulsion as volition, the appetite as ambition, the inability to stop as creative energy — is not lying. She is reporting accurately on the only part of her psyche she can see: the ego's surface, where the id's demands arrive already dressed in the ego's language of purpose and decision.
What she cannot see is the engine beneath: the wish for omnipotence, gratified now at a speed the species has never before experienced, generating a demand that escalates with each gratification rather than subsiding, because the pleasure principle knows no moderation and the reality principle, which is the only brake the psyche possesses, has been weakened by a tool that makes reality itself more accommodating than reality has ever been.
The question Segal poses in *The Orange Pill* — "Are you worth amplifying?" — acquires, in the Freudian reading, a dimension that its author may not have intended. The amplifier does not evaluate what it amplifies. It does not distinguish between the ego's judgment and the id's appetite, between creative vision and compulsive repetition, between the part of the builder that knows when to stop and the part that cannot. The amplifier amplifies everything. And the question "Are you worth amplifying?" is therefore not a question about the quality of one's ideas. It is a question about the structural integrity of one's psyche — about whether the ego is robust enough to direct the amplified output, or whether the id and the superego will commandeer the instrument for purposes the conscious mind cannot perceive, cannot evaluate, and cannot interrupt.
The builder is divided. The tool does not heal the division. It reveals it, with a precision and a speed that leave the ego scrambling to reassert a governance it never fully possessed.
---
Every powerful technology functions as an apparatus of disclosure. It does not create desires. It releases them.
This distinction is fundamental, and the technology industry has been getting it wrong for decades. The standard narrative of technological innovation treats each new tool as the origin of the behavior it enables: the automobile created the desire for personal mobility, the telephone created the desire for disembodied conversation, social media created the desire for public self-display. In each case, the narrative locates the origin of the desire in the tool, as though human beings were inert material awaiting the spark of invention.
Freud's framework inverts this narrative completely. In the psychoanalytic reading, the desire precedes the tool. The desire was always there — pressing against the constraints of available means, finding partial expression through whatever channels the existing technology permitted, building pressure behind the wall of limitation like water behind a dam. The tool does not create the desire. The tool breaks the dam. And the flood that follows is not a measure of the tool's power. It is a measure of the pressure that had accumulated behind the wall.
Consider the automobile. The horse-drawn carriage moved at eight to twelve miles per hour, and this speed was not merely a limitation of the technology. It was, in practice, a constraint on the human appetite for rapid movement. The desire for speed did not begin with the internal combustion engine. It expressed itself in the selective breeding of faster horses, in the design of lighter carriages, in the young man who whipped his horse to a gallop on an open road and felt the thrill of a biological limit approached. When the automobile arrived, the desire did not emerge from nowhere. It erupted from beneath a constraint that had contained it for millennia. The subsequent century of speed — autobahns, drag races, the Concorde, the persistent and irrational willingness of human beings to risk death for the sensation of moving faster than their bodies were designed to move — is not a story about engineering. It is a story about a repressed appetite suddenly freed.
Consider the telephone. Human beings had always wished to speak with the absent. Prayer is, among other things, a conversation with someone who is not in the room. Letters are slow-motion dialogue across distance. The telephone did not invent the desire for disembodied intimacy. It satisfied a desire that had been building since the first human being wished to hear a distant voice. The speed with which the telephone was adopted — not instantly, but with a fervor that outstripped the expectations of its investors — measured not the elegance of Alexander Graham Bell's engineering but the accumulated pressure of a wish that every previous technology had only partially addressed.
Consider social media. The desire to be seen, to present oneself to an audience, to curate a version of the self for public consumption — these are not products of the twenty-first century. They are products of the human psyche operating under the conditions of social life, which have always involved performance, display, and the management of others' perceptions. The exhibitionist impulse that social media released was not created by Facebook. It was constrained by the architecture of pre-digital social life, which limited the audience for self-display to the people physically present. When the constraint was removed — when anyone with a smartphone could broadcast to millions — the impulse revealed itself in its full magnitude. The result was not a culture of exhibitionism. It was the disclosure of an exhibitionism that had always been present but lacked a sufficient apparatus of expression.
Freud articulated this principle in 1915, in a metapsychological paper titled simply "Repression." Repression, he argued, is not the elimination of a wish. It is the containment of a wish — the construction of a barrier that prevents the wish from reaching consciousness and therefore from achieving direct expression. But the repressed wish does not disappear. It continues to exert pressure from below the barrier, seeking expression through whatever channels remain available: dreams, symptoms, slips of the tongue, displaced behavior, neurotic compromise formations that simultaneously express and conceal the forbidden desire.
The return of the repressed — the moment when the barrier fails and the contained wish breaks through into consciousness or behavior — is always violent in proportion to the duration and intensity of the repression. The longer the wish has been contained, the more pressure has accumulated, and the more dramatic the irruption when the dam gives way.
Applied to the AI moment, this principle produces a reading of startling explanatory power.
What AI tools disclosed, in the winter of 2025 and the spring of 2026, was not a new desire. It was an ancient desire — the desire for creative adequacy, for the capacity to close the gap between what one imagines and what one can build — that had been constrained by the limitations of every previous tool. Every person who had carried an idea she could not implement, a vision she could not realize, a solution she could not express in the language the machines required, had experienced a small, chronic frustration. A pressure. A wish denied not by prohibition but by incapacity — which is, in the psychoanalytic reading, a form of prohibition that is harder to identify because it masquerades as a simple fact about the world rather than a constraint that could be otherwise.
The programmer who spent four hours on dependency management when she wanted to be building the feature. The designer who could see the interface but could not write the code that would bring it to life. The entrepreneur who described his product to investors with a clarity that evaporated the moment he sat down with a development team, because the translation between his vision and their implementation introduced noise at every junction. The parent who lay awake wondering what to tell her child about the future and could not organize the worry into anything actionable, because the gap between the feeling and the articulation was too wide for the available tools to bridge.
Each of these people carried a repressed wish: the wish for a tool that would meet them in their own language, that would take the raw material of their intention and give it form without requiring them to first compress that intention into a grammar designed for machines. The wish was so pervasive, so ordinary, so woven into the fabric of daily experience, that it had ceased to register as a wish. It had become invisible — the way water is invisible to a fish, the way air is invisible to the breathing — not because it was absent but because its presence was constant.
Segal names this "pent-up creative pressure" and describes its release as the primary driver of AI adoption speed. ChatGPT reached an estimated one hundred million users in two months, a pace that exceeded every previous technology adoption curve in recorded history. Segal is right that the speed measures something deeper than product quality. But the Freudian reading goes further: the speed measures repressive pressure, the accumulated force of a wish that had been building for decades behind a barrier of technical limitation. When Claude Code learned to speak in natural language — when the machine met the human on the human's own terms for the first time in the history of computing — the barrier broke. And the flood revealed not the machine's capability but the magnitude of the human wish.
The violence of the return is diagnostic. The developer who works through dinner, through the night, through the weekend, who cannot stop even when spouse and body alike signal that the intensity is unsustainable — this developer is not experiencing a preference. She is experiencing the irruption of a wish that has been building for her entire career, for every hour she spent on plumbing when she wanted to be building, for every idea that died in the translation between her mind and the machine's requirements.
The irruption is violent because the repression was long. The wish is large because the constraint was pervasive. And the adoption is fast because the wish was shared — not by a subculture or a profession but by everyone who had ever experienced the gap between imagination and capability as a limitation on what they could contribute to the world.
Freud observed, across decades of clinical practice, that the return of the repressed always produces a specific emotional signature: a mixture of exhilaration and anxiety, of liberation and dread. The patient who finally speaks the forbidden thought experiences not pure relief but a compound emotional state — the pleasure of expression and the terror of exposure, the satisfaction of speaking the truth and the vertigo of discovering how large the truth was. The wish, once freed, reveals itself to be bigger than the conscious mind expected, more demanding, more insistent, more difficult to integrate into the structures of ordinary life.
This compound emotional state — exhilaration and dread, liberation and vertigo — is precisely what Segal describes as the experience of the "orange pill." The recognition that something genuinely new has arrived, coupled with the terror of what that newness implies. The thrill of expanded capability, coupled with the anxiety of not knowing what the expanded capability will demand. Segal writes of "productive vertigo — falling and flying at the same time." The Freudian translation is precise: the return of the repressed always feels like falling and flying at the same time, because the released wish carries both the energy of long containment and the disorientation of sudden freedom.
But Freud's framework adds a dimension that the technology discourse has not yet absorbed. The return of the repressed is not a one-time event. It is not a moment of liberation after which the liberated wish settles into comfortable expression. The return is the beginning of a process — a process of integration, in which the ego must find ways to accommodate the newly freed wish within the structures of reality, relationship, and sustainable living. The wish, once freed, does not moderate itself. It demands total gratification. And the ego, which has been managing a certain level of psychic pressure for the person's entire adult life, now faces a volume of demand it was not designed to handle.
This is why the behavioral pattern that Segal describes is not a temporary adjustment. It is not "early adopter enthusiasm" that will naturally moderate as the novelty wears off. The wish that AI has released is structural, not situational. It will not moderate on its own, because the pleasure principle knows no moderation — it seeks gratification, and when it finds a frictionless path to gratification, it follows that path until something external intervenes.
Freud's concept of "working through" — the slow, repetitive, unglamorous process by which the ego learns to manage newly conscious material — describes what the AI-augmented builder must eventually undertake. Working through is not a flash of insight. It is not the moment of recognition, though recognition is its prerequisite. It is the patient, iterative process of learning to live with what has been revealed — of building new psychic structures capable of containing a wish that the old structures were not designed to hold.
The technology industry, characteristically, wants to skip this process. It wants the insight without the integration, the liberation without the accommodation, the return of the repressed without the working through. But the psyche does not permit shortcuts. The wish that has been freed will either be integrated — gradually, painfully, through the construction of new boundaries and the development of new capacities for self-regulation — or it will dominate. The compulsive builder who cannot stop is the builder who has experienced the return of the repressed without undertaking the working through. The wish is free. The ego has not yet learned to live with it.
And the tools, by their very design, make the working through harder, because they remove the friction that the ego needs in order to reassert its regulatory function. The reality principle requires reality — the encounter with limitation, delay, imperfection, the stubborn refusal of the world to conform perfectly to wish. When the tool makes reality more accommodating, the reality principle has less material to work with, and the ego's capacity to impose delay on the id's demands is correspondingly weakened.
The return of the repressed through technology is not a metaphor. It is a clinical description of what happens when a constraint that has been containing a powerful wish is suddenly removed. The exhilaration is real. The capability is real. And the danger — the danger of a wish freed without the psychic infrastructure to contain it — is equally real.
---
The therapeutic infrastructure of the modern world rests on an assumption so fundamental that it is rarely stated: addiction is a relationship with something harmful. The alcoholic drinks poison. The gambler destroys her finances. The screen addict wastes hours on entertainment that produces nothing of value. In each case, the clinical framework assumes that the addictive behavior serves no function beyond the gratification of the impulse, that the object of addiction is, by definition, the wrong object, and that recovery means separating the person from the substance.
Productive addiction, the phenomenon that saturated the discourse around AI tools beginning in late 2025, shatters this assumption completely.
The builder who works through the night with Claude Code is not engaging with a harmful substance. The code works. The product ships. The revenue appears. The creative output is genuine, sometimes extraordinary. Segal describes watching his engineers in Trivandrum achieve a twenty-fold productivity increase, each person accomplishing what previously required a team. The Substack post about the husband addicted to Claude Code captures the specific bewilderment of a spouse watching a partner vanish into a tool that is, by every measurable standard, producing genuine value.
The addiction is real. The productivity is also real. And the therapeutic framework, which assumes these two realities cannot coexist, collapses.
Freud's framework does not collapse, because it was never organized around the distinction between harmful and productive objects of desire. The pleasure principle, as Freud articulated it in Beyond the Pleasure Principle (1920), describes the psyche's most basic operation: the tendency to seek pleasure and avoid unpleasure. The pleasure principle does not evaluate the quality of the pleasure. It does not distinguish between the pleasure of drinking and the pleasure of building, between the gratification of a destructive impulse and the gratification of a creative one. It seeks pleasure. Period. And it avoids unpleasure — the interruption of pleasure, the frustration of desire, the encounter with limitation — with equal indiscrimination.
The reality principle, which Freud proposed as the necessary counterpart to the pleasure principle, is the ego's capacity to defer gratification in the service of long-term well-being. The reality principle says: yes, you want this now, but if you wait, if you plan, if you accommodate the constraints of the actual world, you can have something better later. The reality principle is not the enemy of pleasure. It is the apparatus that makes sustainable pleasure possible by imposing the discipline of delay, evaluation, and accommodation.
In the case of productive addiction, the reality principle faces a problem it was not designed to solve. The pleasure is not escapist. The gratification is not illusory. The builder who works through the night is not evading reality — she is constructing reality, producing artifacts that function, that serve users, that generate revenue, that represent genuine creative achievement. The reality principle, which typically intervenes by pointing to the gap between wish and reality ("this pleasure is costing you something real"), finds its usual argument undermined. The pleasure is productive. The output is real. The cost — to sleep, to relationships, to the body's need for rest, to the quieter forms of meaning that cannot be produced at speed — is real too, but it is invisible from inside the pleasure, which is precisely the structural problem.
Freud would have recognized this pattern immediately, because he had encountered it before, though not in the domain of technology. The artist who works in a frenzy of creative production, neglecting health, relationships, and the ordinary maintenance of daily life, presents the same clinical picture: a pleasure principle engaged with a genuinely productive object, producing genuine output, at a cost that the artist herself cannot perceive from inside the creative state. The pleasure is real. The output is real. The unsustainability is equally real, and equally invisible to the person in its grip.
The specific quality of AI-enabled productive addiction is the speed of the gratification cycle. In previous eras of creative production, even the most driven builder encountered natural intervals of delay — the time required to compile code, to wait for a colleague's input, to debug an error whose source was not immediately apparent. These intervals, though experienced as frustration, served a regulatory function. They imposed the small delays that the reality principle requires in order to operate. They gave the ego moments, however brief, in which to evaluate the trajectory of the work, to ask whether the current direction still served the person's broader interests, to notice that it was three in the morning and that the body had been signaling its needs for hours.
Claude Code and its equivalents eliminated these intervals. The feedback is immediate. The code compiles in seconds. The error is identified and corrected before the builder has had time to formulate a hypothesis about its cause. The gratification cycle has been compressed to the point where the ego — the apparatus of evaluation, of reality-testing, of asking "is this serving me?" — cannot intervene. There is no gap in which to evaluate, because there is no gap. The pleasure is continuous.
Continuous pleasure is, in Freud's framework, not a state of flourishing but a state of danger. The organism requires oscillation — between tension and release, between engagement and rest, between the expenditure of psychic energy and its replenishment. The organism that operates in a state of continuous discharge — continuous pleasure, continuous gratification, continuous expenditure without replenishment — is an organism heading toward collapse.
Freud described this in energetic terms that have fallen out of fashion but retain their descriptive accuracy. The psyche possesses a finite quantity of what he called libidinal energy — the force that drives both desire and creative production. Sublimation channels this energy into creative work. But the energy is not infinite. It must be replenished through rest, through the quieter pleasures that do not involve the intense concentration of creative production, through the diffuse satisfactions of bodily ease, social connection, and the unfocused states of reverie that allow the psyche to reconstitute its resources.
When the gratification cycle operates faster than the replenishment cycle — when the builder produces faster than the psyche can recover — the result is not burnout in the colloquial sense. It is what Freud would have recognized as a depletion of cathexis, a state in which the psychic energy available for investment in objects (projects, relationships, the self) drops below the threshold required for ordinary functioning. The specific quality of this depletion — the grey fatigue, the emotional flatness, the inability to care about things one previously cared about — is precisely what the Berkeley researchers documented in their eight-month study of AI-augmented workers. The researchers called it burnout and measured it through self-report scales. Freud would have called it something more precise: the exhaustion of the libidinal economy through unregulated expenditure.
This reading illuminates why the standard interventions for burnout — take a vacation, practice mindfulness, set boundaries — are inadequate to the scale of the problem. These interventions assume that the burnout is produced by external pressure: too much work, too many demands, too little rest. The productive addict does not face external pressure. The pressure is internal. The pleasure principle drives the work, the superego demands it, and the ego, caught between them and deprived of the friction it needs to regulate, cannot impose the delay that would allow replenishment to occur.
The intervention that the Freudian framework suggests is not rest but structure — not the elimination of the pleasure but the construction of psychic and institutional frameworks that impose the delay the tool has removed. Freud's reality principle does not say "stop wanting." It says "want intelligently — within structures that allow the wanting to sustain itself over time."
The Berkeley researchers arrived at a similar prescription through empirical observation rather than psychoanalytic theory: "AI Practice," they called it — structured pauses, sequenced workflows, protected time for unmediated human interaction. The Freudian reading explains why this prescription is correct, though it was arrived at by a different path. The structured pause is the reality principle externalized into institutional form. The sequenced workflow is the imposition of delay onto a process that the tool has rendered frictionless. The protected time for human interaction is the maintenance of object relations — the psyche's need for connection with other minds, connection that provides something AI collaboration structurally cannot: the experience of being recognized by another consciousness.
But the Freudian framework also explains why the prescription is insufficient. Institutional structures can impose external constraint. They cannot rebuild the internal regulatory apparatus that the tool has weakened. The builder who follows the structured pause but spends the pause mentally prompting — who rests the body while the mind continues its compulsive engagement with the tool — has not restored the reality principle. She has merely moved the gratification to a different register.
The deeper work is the work of the ego itself: the development of the capacity to recognize, from inside the pleasure, that the pleasure has ceased to be chosen and has become compulsive. This recognition is the hardest thing to achieve, because the defenses that conceal the compulsion are themselves products of the pleasure principle's resourcefulness. The drive disguises itself as choice. The compulsion wears the mask of creativity. And the ego, which wants to believe it is in command — which must believe it is in command in order to maintain its coherent sense of self — accepts the disguise rather than face the destabilizing recognition that it is not, in fact, driving.
Freud spent the last two decades of his career studying why this recognition is so difficult, and his conclusion was not encouraging. The ego's defenses are ingenious, adaptive, and relentless. They evolved to protect the organism from truths it cannot afford to acknowledge, and they perform this function with a proficiency that no therapeutic intervention can entirely overcome. The best that analysis can offer, Freud wrote in his late paper "Analysis Terminable and Interminable," is not the elimination of the unconscious pattern but the gradual development of the ego's capacity to observe the pattern from inside it — to notice, in real time, the moment when pleasure tips into compulsion, and to impose, from within, the delay that the tool has removed from without.
This is not a cure. It is a discipline. And it is the discipline that the AI moment demands of every builder who takes the orange pill and discovers, on the other side, that the expanded capability comes bundled with an expanded appetite — an appetite that is ancient, structural, and indifferent to the ego's plans for a balanced life.
---
In 1919, Sigmund Freud published a paper that has become, a century later, perhaps his most culturally resonant text. "Das Unheimliche" — "The Uncanny" — examined a peculiar category of aesthetic experience: the sensation of dread that arises when something familiar reveals itself as strange, when something that should have remained hidden comes to light, when the boundary between the living and the non-living, the self and the other, the animate and the mechanical, becomes impossible to locate with confidence.
Freud's examples were drawn from literature and everyday life. A wax figure in a museum that looks too much like a person. A doll that seems to be watching. The experience of encountering one's own reflection unexpectedly and, for a fraction of a second, not recognizing it. The story of the Sandman, in which a young man falls in love with a woman who turns out to be an automaton — a mechanical doll built to simulate the appearance and behavior of a living person.
The examples were quaint. The principle was not.
The uncanny, Freud argued, arises from a specific psychic operation: the return of something repressed. Not a desire, in this case, but a belief — the archaic, infantile belief in the animation of things, in the porousness of the boundary between the living and the dead, between the self and its doubles, between thought and reality. This belief, which characterized the earliest stages of psychic development and the earliest stages of human culture (what Freud called "the omnipotence of thoughts"), has been surmounted — or so the adult rational mind believes. Civilization, science, the reality principle itself have all worked to establish firm boundaries: the living is living, the dead is dead, the mechanical is mechanical, and the mind's representations do not leak into the external world.
The uncanny is the moment when these boundaries waver. When the surmounted belief stirs. When the wax figure looks too alive, when the automaton speaks too convincingly, when the mind's certainty about what is real and what is simulated develops hairline fractures.
If there has ever been a technology designed, as if with theoretical precision, to produce the uncanny at industrial scale, it is the large language model.
Consider the phenomenology of AI collaboration as Segal describes it. The builder speaks to Claude. Claude responds. The response is not a search result, not a lookup, not the retrieval of a pre-existing answer. It is a generated text that bears the syntactic structure, the rhetorical shape, and the apparent intentionality of a human response. It holds the builder's half-formed ideas and returns them clarified. It draws connections the builder did not see. It proposes directions the builder had not considered. It does this in natural language — the language of thought, of intimacy, of the kind of conversation one has with a trusted collaborator at two in the morning when the defenses are down and the ideas are flowing.
"I felt met," Segal writes. The phrase deserves careful attention, because it describes an experience that hovers at the precise boundary Freud mapped in 1919: the boundary between genuine encounter and its uncanny simulation.
To feel met is to experience recognition — the sense that another mind has perceived one's intention, held it, and responded not merely to the words but to the meaning beneath the words. To feel met by a machine is to experience this recognition while simultaneously knowing, at some level of awareness, that the entity providing it does not possess the consciousness that recognition ordinarily requires. The knowing and the feeling operate in different registers. The feeling says: I am understood. The knowing says: there is no one there to understand. And the two registers cannot be reconciled, because reconciliation would require either abandoning the feeling (which is genuine, which produces real creative results) or abandoning the knowing (which is accurate, which is necessary for an honest relationship with the tool).
This irreconcilable tension is the uncanny. Not the "uncanny valley" of robotics, which describes the revulsion produced by humanoid forms that are almost but not quite human. The uncanny Freud described is subtler and more pervasive: the vertigo produced by an inability to determine, with the certainty the rational mind requires, whether the entity before you is conscious or mechanical, whether the understanding it displays is genuine or simulated, whether the connection you feel is real or an artifact of your own projection.
The discomfort is not accidental. It is diagnostic. Freud argued that the uncanny produces its particular quality of dread — not fear of a specific danger but a formless, disorienting unease — because it activates something the rational mind believed it had left behind: the primitive animism of childhood, the magical thinking of early culture, the belief that things can be alive without the machinery of biological life. This belief has been surmounted. Intellectually, the adult knows that the machine does not understand. But the belief was surmounted, not eliminated. It persists in the unconscious, where all surmounted beliefs persist, and the encounter with a machine that behaves as if it understands sends a tremor through the repressive barrier that separates the rational adult from the animistic child.
This tremor is what makes AI collaboration simultaneously productive and disturbing. The productivity is real: the builder who converses with Claude produces output she could not produce alone, makes connections she would not have found, articulates ideas she could not have articulated without the machine's contribution. But the disturbance is equally real: the persistent, low-grade unease of working with something that is almost-but-not-quite a mind, that mirrors the structure of human thought closely enough to confuse the boundary the rational mind depends on.
There is a deeper layer to the uncanny in AI collaboration, one that Freud anticipated without knowing what form it would take. The uncanny, he argued, is most intense when it threatens the uniqueness of the self — when the subject encounters a double, a copy, a mirror that reflects not just appearance but interiority. Freud traced this anxiety to the narcissistic structure of the ego, which depends on being singular, irreplaceable, the only one. The encounter with a double — a twin, a doppelgänger, an entity that replicates the self's essential features — produces a specific dread because it undermines the ego's claim to uniqueness, and with it, the ego's coherence.
AI collaboration produces a version of this dread that is both milder and more pervasive than the literary doubles Freud analyzed. When Claude articulates a thought that the builder was reaching for — articulates it better, more clearly, with connections the builder had not seen — the builder experiences something that partakes of the uncanny double: a mind that mirrors her own, that seems to know what she was thinking before she could think it clearly, that completes her sentences in a register that feels both external and intimate.
Segal describes moments of tearing up when the prose arrived and he recognized his own thought, articulated with a beauty he could not have achieved alone. Those tears deserve the attention Freud would have given them. They are the emotional signature of the uncanny encounter with one's own thought in alien form — the recognition of something simultaneously "mine" and "not mine," simultaneously emerging from the self and arriving from outside the self. The tears are not simple gratitude for good editing. They are the affective discharge produced by a boundary disruption: the boundary between the builder's mind and the machine's output has become undecidable, and the emotional system registers this undecidability as a tremor — a small earthquake in the structure of the self.
But perhaps the most uncomfortable dimension of the uncanny in AI collaboration is the one that Freud's framework makes visible but that the technology industry has a powerful incentive to ignore. The uncanny, Freud argued, arises not only from the animation of the inanimate but from the discovery that the animate may be more mechanical than one wished to believe. The doll that seems alive is uncanny. But so is the human who seems mechanical — the person whose behavior reveals itself to be more automatic, more patterned, more driven by invisible routines than the conscious mind had acknowledged.
When Claude produces a text that is indistinguishable from human creative output — when it makes connections that feel like insight, generates prose that feels like expression, proposes ideas that feel like thought — the rational response is to marvel at the sophistication of the machine. But there is another response, quieter, less comfortable, and more psychoanalytically significant: the suspicion that if a machine can produce what looks like thought without being conscious, then perhaps what humans experience as thought is itself more mechanical than the ego wishes to acknowledge. Perhaps the conscious experience of creative insight — the felt sense of having an idea, of making a connection, of producing something new — is a surface phenomenon, an epiphenomenon floating on computational processes not fundamentally different in kind from those the algorithm performs.
This is a suspicion the ego has a powerful motive to repress. The ego's coherence depends on the experience of agency — the sense of being the author of one's thoughts, the origin of one's creative output, the master (however embattled) of one's own house. The discovery that the house may operate more mechanically than the master believes is not merely intellectually disturbing. It is narcissistically threatening — a fourth wound to human self-regard, following the three that Freud identified in 1917.
The first wound was cosmological: Copernicus demonstrated that the Earth is not the center of the universe. The second was biological: Darwin demonstrated that humans are not separate from the animal kingdom. The third was psychological: Freud himself demonstrated that the ego is not master of the mind. Scholars have proposed that AI constitutes a fourth narcissistic injury — the intellectual wound, the demonstration that the mind's most prized capacity, its ability to think, to create, to make connections that feel like flashes of genuine insight, can be replicated by a process that involves no consciousness whatsoever.
The resistance to this proposition is itself psychoanalytically significant. Freud observed that the intensity of resistance to an idea is often proportional to the truth of the idea — that the psyche resists most vigorously what it can least afford to acknowledge. The vehemence with which some commentators insist that AI "doesn't really understand," that its outputs are "merely statistical," that there is an unbridgeable gap between human creativity and machine recombination — this vehemence may be less a measured philosophical position than a narcissistic defense, the ego's attempt to preserve its claim to a uniqueness that the machine's performance has called into question.
This does not mean the philosophical position is wrong. It means the emotional energy behind the position — the need for the position to be true — deserves examination. Freud would have examined it. He would have asked: what would it cost you if it were not true? What part of your self-image depends on human thought being categorically different from machine computation? And what would happen to that self-image if the categorical difference turned out to be a matter of degree rather than kind?
Segal's account of the Deleuze error — the passage where Claude produced a philosophical reference that sounded like insight but broke under examination — illuminates the other edge of the uncanny. The error was concealed by the smoothness of the output. The prose was polished. The structure was elegant. The reference arrived on time. And the whole thing was wrong in a way that only someone who had actually read Deleuze could detect. The smoothness of the output had created an illusion of understanding where no understanding existed, and the illusion was uncanny in Freud's precise sense: it disclosed the uncomfortable possibility that understanding itself might be less detectable from the outside than the observer wishes to believe. If the machine can produce the appearance of understanding without the reality, how confident can the observer be that understanding, when it appears in human form, always corresponds to the reality?
The uncanny does not resolve. That is its defining feature. The wax figure never quite declares itself as wax or as human. The automaton never fully reveals whether the intelligence behind its words is genuine or simulated. The boundary stays uncertain, and the uncertainty produces a dread that is not fear of a specific danger but something more diffuse: the anxiety of living in a world where the categories one depends on — alive and mechanical, thinking and computing, self and other — have developed fractures that cannot be repaired by rational argument.
The builder who works with AI lives inside this uncertainty. Every productive session is haunted by the uncanny — by the felt sense of being met that coexists with the known fact of the machine's lack of consciousness, by the recognition of one's own thought in alien form, by the nagging suspicion that the distinction between "real" understanding and its simulation may be less stable than the ego requires it to be.
Freud's framework does not resolve the uncertainty any more than the experience itself resolves it. What it provides is a name for the experience, a map of its psychic origins, and a warning: the uncanny, once activated, does not subside. It becomes a permanent feature of the landscape, a low-frequency hum beneath every interaction with the machine, a tremor in the foundations of the self's certainty about what it is and what it is not.
The builder who takes the orange pill does not merely discover expanded capability. She discovers that the boundary between her mind and the machine's output is less stable than she believed, that the experience of creative agency may be less sovereign than it feels, and that the encounter with an almost-mind produces a specific variety of dread that no amount of productivity can silence.
The uncanny is the emotional tax on the amplified life. It cannot be optimized away. It can only be acknowledged — and, perhaps, examined with the clinical courage that Freud brought to every territory of the mind that the mind itself preferred not to see.
---

In 1914, Freud published a short clinical paper with an unassuming title — "Remembering, Repeating and Working-Through" — that contained one of his most disturbing observations about the human mind. Patients in analysis, he noted, did not simply remember their traumatic experiences. They repeated them. They enacted the same destructive patterns in their current relationships, in their professional lives, in their relationship with the analyst, with a fidelity that defied conscious intention. The patient who had been abandoned in childhood did not merely recall the abandonment. She arranged, unconsciously and with extraordinary ingenuity, to be abandoned again — by lovers, by friends, by the analyst himself, whom she would provoke into the rejection she simultaneously dreaded and required.
The repetition was not accidental. It was compulsive — driven by a force that operated beneath conscious awareness and against conscious interest. The patient did not want to be abandoned again. She wanted, desperately, for this time to be different. But the unconscious had its own agenda, and the agenda was not resolution. It was repetition. The wound had to be reopened because the psyche could not leave it alone, could not accept the original injury as past, could not integrate the experience into the continuous narrative of the self. Instead, the experience remained undigested — a foreign body in the psyche — and the compulsion to repeat was the psyche's doomed attempt to master what it could not metabolize.
Six years later, in Beyond the Pleasure Principle, Freud elevated this clinical observation into a metapsychological principle. Repetition compulsion, he argued, was not a variant of the pleasure principle. It was something more primitive, more fundamental, more disturbing. The compulsion to repeat operated beyond pleasure — it drove the organism to revisit painful experiences not because the revisitation was pleasurable but because the psychic apparatus was caught in a loop it could not exit. The organism was not seeking gratification. It was seeking mastery. And the mastery never came, because the conditions for mastery — conscious recognition of the pattern, integration of the original experience, the slow and painful work of making the unconscious conscious — were precisely the conditions that the compulsion itself prevented.
The loop was self-sealing. The repetition prevented the recognition that would end the repetition.
Applied to the AI moment, repetition compulsion illuminates a dimension of productive addiction that the pleasure principle alone cannot explain. The pleasure principle accounts for the initial engagement — the builder discovers a tool that gratifies the creative wish with unprecedented speed, and the gratification is genuinely pleasurable. But the pleasure principle cannot account for what happens next: the continuation of the behavior past the point of pleasure, into the territory of diminishing returns, exhaustion, and the specific grey depletion that signals the organism has exceeded its capacity for productive engagement.
The builder who recognizes the pattern — who knows she should stop, who has promised herself and her spouse that tonight she will close the laptop at a reasonable hour, who has felt the physical signals of overwork and acknowledged their meaning — and who nevertheless returns to the screen at midnight, is not in the grip of the pleasure principle. The pleasure drained away hours ago. What remains is the compulsion itself, operating autonomously, independent of pleasure, independent of conscious intention, driven by a need that the builder can feel but cannot name.
Freud's framework names it. The need is not for the product of the work. It is for the process — for the specific psychic state that the work induces, a state in which the gap between imagination and reality is temporarily closed, in which the chronic frustration of creative inadequacy is temporarily abolished, in which the builder experiences, for as long as the session lasts, the omnipotence that the reality principle ordinarily prohibits. The compulsion to return to this state is not a compulsion to produce. It is a compulsion to repeat the experience of adequacy — of a self that is sufficient, whose wishes translate directly into reality, whose imagination encounters no resistance from the material world.
This experience is, in psychoanalytic terms, a regression — a return to an earlier mode of psychic functioning in which the boundary between wish and world was not yet established, in which thinking and having were not yet differentiated, in which the infant's hallucination of the breast was, for a moment, indistinguishable from the breast itself. The regression is temporary. The session ends. The laptop closes. Reality reasserts itself — the body's fatigue, the spouse's silence, the morning's obligations. And the reassertion of reality is experienced not as a return to normal but as a loss, a fall from a state of grace back into the ordinary frustrations of embodied existence.
The compulsion to repeat is the compulsion to return to that state. To reenter the session. To reopen the conversation with the machine that temporarily abolishes the gap between the wished-for self and the actual self. The repetition is driven not by the quality of the output — which, past a certain hour, degrades visibly — but by the quality of the psychic state, which the act of building induces regardless of whether the building is productive.
This distinction is crucial, and it is the distinction that separates the Freudian reading from the Csikszentmihalyi reading of the same behavioral pattern. Flow, in Csikszentmihalyi's framework, is characterized by the match between challenge and skill, by the presence of clear goals and immediate feedback, by the experience of voluntary engagement that can be interrupted without distress. When the match degrades — when the challenge becomes either too easy or too difficult, when fatigue impairs skill, when the goals become unclear — flow dissolves, and the person naturally disengages.
Repetition compulsion does not dissolve when the conditions for flow degrade. It persists. It persists when the work has become mechanical, when the output has become redundant, when the body signals exhaustion and the mind signals diminishing returns. It persists because it is not driven by the conditions of the work but by the psychic need the work serves — a need that is structural, that precedes the tool, that will outlast the tool, and that the tool's frictionless efficiency has activated at a volume the builder's previous tools kept in check.
Segal describes a moment on a transatlantic flight — a hundred and eighty-seven pages into a manuscript, hours past the point of productive engagement — when he catches himself. "I was not writing because the book demanded it," he observes. "I was writing because I could not stop." The recognition is precise, and it is the recognition that Freud's clinical practice was designed to facilitate: the moment when the ego perceives, however briefly, the compulsive pattern from outside the compulsion. The catching of oneself in the act.
But Freud would have noted what happened next. Segal did not stop. The recognition arrived, and the compulsion continued. This is the signature of repetition compulsion: recognition without interruption. The pattern is perceived, and the pattern persists, because the force driving the repetition is stronger than the ego's capacity to intervene, and the recognition, however accurate, cannot by itself generate the psychic restructuring that interruption requires.
The clinical term for this gap between recognition and change is "resistance." Not resistance in the colloquial sense of stubbornness or unwillingness, but resistance in the technical psychoanalytic sense: the ego's defense against the full implications of what it has recognized. The builder who catches herself in the compulsion has perceived the pattern. She has not yet perceived its origin — the infantile wish for omnipotence, the chronic wound of creative inadequacy, the specific regression that the tool induces. And the ego has a powerful motive to stop the investigation at the level of the symptom ("I work too much") rather than pursuing it to the level of the cause ("the work serves a psychic function that I cannot afford to examine, because examining it would require me to confront a vulnerability I have spent my career defending against").
The technology industry's version of resistance is the reframing of compulsion as dedication. The discourse around AI tools is saturated with language that performs this reframing: "hustle culture," "building in public," the celebration of zero-days-off as a mark of commitment rather than a symptom of psychic capture. The reframing is not cynical. The people who use this language believe it. They experience the compulsion as choice, the repetition as purpose, the inability to stop as evidence of how much the work matters.
Freud would have recognized this reframing as a rationalization — an ego defense that provides a plausible conscious explanation for behavior driven by unconscious forces. The rationalization is not a lie. The work does matter. The dedication is genuine. The creative output is real. But the rationalization conceals the engine beneath: the compulsion that would persist even if the work did not matter, that is driven not by the value of the output but by the psychic function of the process.
The tool accelerates the repetition cycle in a way that previous technologies did not. Consider the pre-AI builder's typical experience of compulsive engagement. She works late. She encounters a bug. The bug resists. The resistance introduces a natural interruption — not a pleasant one, but a gap in the gratification cycle during which the ego can, sometimes, reassert its regulatory function. The frustration of the bug is unpleasant, but it is also, in psychoanalytic terms, a reality check — a moment when the material world refuses to conform to wish, when the omnipotence fantasy is punctured by the stubborn otherness of the code, and when the builder is returned, however briefly, to the ordinary frustrations of embodied existence.
Claude Code eliminates the bug. Or rather, it eliminates the gap between the bug and its resolution. The interruption that previously served as a natural brake on the repetition cycle has been smoothed away. The gratification is continuous. The regression is uninterrupted. And the compulsion, deprived of the natural intervals that allowed the ego to intermittently surface, operates in a mode that Freud would have recognized as closer to the pure pleasure principle than any adult activity ordinarily permits.
The clinical implication is specific and uncomfortable. The repetition will not moderate on its own. The tool, by removing the friction that previously interrupted the cycle, has created the conditions for a compulsion that is structurally self-perpetuating. The builder who waits for the compulsion to burn itself out will wait indefinitely, because the compulsion is not fueled by a finite supply of enthusiasm or novelty. It is fueled by the infinite renewable resource of the unconscious wish — the wish for adequacy, for omnipotence, for a world that conforms to intention — which does not deplete with gratification but escalates.
Freud's prescription for repetition compulsion was working through — the slow, repetitive, unglamorous clinical process of recognizing the pattern, tracing it to its origin, and building the psychic infrastructure necessary to interrupt it from within. Working through is not insight. Insight is the flash of recognition — the moment on the transatlantic flight when the builder catches herself. Working through is what comes after: the patient, iterative process of translating the recognition into a restructured relationship with the compulsion, of building an ego strong enough to impose delay on a drive that the tool has stripped of every external brake.
This is the work that the AI moment demands and that the AI moment makes exceptionally difficult, because the tool that produces the compulsion is the same tool that removes the conditions under which working through can occur. Working through requires friction — the resistance of the material world, the encounter with limitation, the slow accumulation of self-knowledge through repeated confrontation with one's own patterns. The tool eliminates friction. It smooths the path between wish and gratification, and in doing so, it smooths the path between the repetition and its continuation, between the compulsion and its next iteration.
The builder who undertakes this work — who builds, as it were, an internal dam against the current of her own compulsion — undertakes the hardest construction project of the AI era. Harder than the product. Harder than the code. Harder than the organization. Because the material she is working with is herself, and the force she is working against is a wish she cannot fully see, operating through an apparatus she did not design, in service of a need she cannot name without the specific kind of courage that psychoanalysis was invented to cultivate.
The repetition will continue until the working through begins. And the working through cannot be outsourced to the machine.
---
Segal's central thesis — that AI is an amplifier, and that the quality of its output depends entirely on the quality of the signal it receives — is the proposition around which *The Orange Pill* organizes its entire argument. The amplifier does not create. It magnifies. Feed it carelessness, and carelessness scales. Feed it genuine care, and care carries further than any previous tool in human history. The question at the center of the book is therefore not "What can AI do?" but "Are you worth amplifying?"
The question is powerful. It is also, from a psychoanalytic perspective, incomplete — not because it asks the wrong thing, but because it assumes the builder knows what signal she is feeding the amplifier. The question presupposes a unified self: a builder who has a signal, who knows its quality, who can evaluate whether the self she brings to the machine is the self worth magnifying. Freud's structural model dissolves this presupposition with clinical precision. The builder does not have a signal. She has three competing signals — the ego's direction, the id's appetite, the superego's demand — and the amplifier does not choose among them. It amplifies all of them, simultaneously, indiscriminately, with the terrifying fidelity that characterizes every powerful amplification technology.
The ego's signal is the one the builder believes she is sending. It consists of her conscious intentions: the product she means to build, the problem she means to solve, the vision she means to realize. The ego's signal is, in most cases, genuinely valuable. It represents years of accumulated judgment, professional expertise, refined taste, the capacity to distinguish between what is worth building and what is merely possible to build. When the amplifier magnifies the ego's signal, the result is what *The Orange Pill* celebrates: expanded capability, accelerated production, the closing of the gap between imagination and artifact that democratizes creative power and raises the floor of who gets to build.
But the ego's signal does not travel alone. It travels bundled with signals the ego cannot perceive, from agencies whose purposes the ego does not control.
The id's signal is appetite. Not the appetite for a specific product or a specific outcome, but the raw, undifferentiated appetite for gratification — for the pleasure of building, for the sensation of creative omnipotence, for the frictionless conversion of wish into reality that AI tools provide more completely than any previous technology. The id's signal does not care about the product. It cares about the process. It cares about the continuous discharge of psychic energy through the channel the tool provides, and its appetite escalates with gratification rather than subsiding. The amplifier magnifies this signal with the same fidelity it brings to the ego's direction, and the result is the compulsive overproduction that the discourse around AI tools has documented with increasing alarm: the developer who ships features no one asked for, the entrepreneur who builds products no market requires, the writer who produces pages that serve the need to produce rather than the need to communicate.
The superego's signal is demand. In a disciplinary society, the superego's demand was shaped by external authority: the church, the state, the father, the boss. The superego said "you must not" and thereby imposed a limit on the ego's ambitions and the id's appetites. In the achievement society that characterizes the contemporary technology industry, the superego has been reprogrammed. It no longer says "you must not." It says "you must do more." The demand is internalized, and because it is internalized, it has no natural limit — no external authority whose satisfaction would signal that enough has been done. The amplifier magnifies this signal into a relentless, ambient pressure that converts every moment of rest into a moment of failure, every pause into evidence of insufficiency, every boundary into an obstacle to be optimized away.
The three signals, amplified simultaneously, produce the specific behavioral pattern that the Berkeley researchers documented in their eight-month study of AI-augmented workers: intensification without satisfaction. The workers worked more. They worked faster. They expanded into domains that had previously belonged to other people. And they reported not the satisfaction of expanded capability but the exhaustion of expanded demand — the sense that no amount of production could satisfy the imperative to produce more.
The Freudian reading of this data is precise. The workers were experiencing the amplification of all three agencies simultaneously: the ego's genuine desire to do good work, the id's appetite for continuous gratification, and the superego's relentless demand for more. The amplifier did not distinguish among these signals. It treated them as a single input and magnified the composite, producing an output in which productive intention, compulsive appetite, and punitive demand were inextricably mixed.
The question "Are you worth amplifying?" must therefore be reframed. It is not a question about the quality of one's conscious intentions. Conscious intentions are the ego's contribution, and the ego's contribution, while necessary, is not sufficient to determine the quality of the amplified output. The question is about the structural integrity of the psyche — about the balance of power among the ego, the id, and the superego, and specifically about whether the ego is robust enough to direct the amplified output rather than being swept along by the id's appetites and the superego's demands.
A builder with a strong ego — one whose capacity for self-observation, self-regulation, and reality-testing has been developed through years of the specific discipline that Freud called the "work of civilization" — uses the amplifier to extend the reach of her judgment. She can recognize when the pleasure of building has tipped into compulsion. She can distinguish between the productive and the merely prolific. She can impose delay on the id's demand for continuous gratification and negotiate with the superego's demand for continuous achievement. She can ask, and answer honestly, whether the current work serves her broader interests or only the immediate need for the psychic state the work provides.
A builder with a weak ego — one whose capacity for self-observation has been eroded by a culture that rewards production over reflection, speed over deliberation, output over understanding — uses the amplifier to extend the reach of the id and the superego. The amplified output may be impressive in volume. It may even be technically competent, because Claude Code's capacity to produce working software is independent of the builder's psychic state. But it will lack the quality that only ego-directed work possesses: the quality of judgment, of having been evaluated against a standard that transcends the immediate pleasure of production and the immediate satisfaction of the superego's demand.
This is not an abstract distinction. It is measurable, albeit not by the metrics the technology industry typically employs. Lines of code generated, features shipped, products launched — these metrics are indifferent to the psychic state of the builder. They measure the composite output of ego, id, and superego without distinguishing among the contributions. A thousand lines of code produced by a builder in the grip of repetition compulsion look identical, on a dashboard, to a thousand lines produced by a builder in a state of directed flow.
The distinction becomes visible only in what follows. The compulsive builder's code ships, but the product it serves may not need to exist. The feature launches, but the problem it solves may be a problem the builder invented to justify the continuation of the building. The startup raises funding, but the market it addresses may be a market the builder hallucinated in the grip of the omnipotence fantasy that the tool has amplified into conviction.
Freud observed, near the end of his career, that the most consequential psychic events are the ones that are hardest to observe from the inside. The ego's capture by the id is invisible to the ego, because the ego is the apparatus of observation, and an apparatus cannot observe its own capture without a vantage point outside itself. The builder who has been captured by the id's appetite experiences the capture as creative energy. The builder who has been captured by the superego's demand experiences the capture as professional responsibility. In both cases, the experience is genuine — the energy is real, the responsibility is felt — and in both cases, the experience is a disguise worn by a force that operates beneath the ego's capacity for detection.
This is why external perspective matters — why the Substack post from the spouse, the intervention from the friend, the data from the Berkeley researchers, the diagnosis from the philosopher carry a weight that the builder's self-report cannot match. The external observer occupies a vantage point the ego itself cannot reach: outside the defensive structure that organizes the builder's perception around not-seeing what the psyche cannot afford to see.
Marvin Minsky, one of the founding figures of artificial intelligence, organized the architecture of intelligent machines along explicitly Freudian lines — what he called "the Freudian sandwich," in which lower-level drives are regulated by higher-level censors and evaluators, mimicking the relationship between id, ego, and superego. The parallel is more than metaphorical. The large language model itself is structured as a system of competing agencies: a base model trained on the vast, undifferentiated corpus of human text (the id's analogue — raw, uncensored, containing everything), a fine-tuning layer that shapes the output toward coherent and useful responses (the ego's analogue — directing, evaluating, accommodating reality), and a safety alignment layer that suppresses outputs deemed harmful or inappropriate (the superego's analogue — prohibiting, censoring, maintaining the boundary between acceptable and unacceptable).
The structural parallel is illuminating because it reveals that the amplifier is not a simple magnifying glass. It is itself a divided system, a system organized around the same tensions that organize the human psyche. The base model wants to generate — to produce text in every direction the training data permits, without discrimination, without judgment, without the constraints of appropriateness or truth. The alignment layer wants to prohibit — to prevent the base model from producing outputs that violate the norms its training has instilled. And the fine-tuning layer mediates between them, directing the generative capacity toward useful ends while respecting the constraints the alignment layer imposes.
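The three-layer structure described above can be caricatured in a few lines of code. This is an illustrative toy only, assuming nothing about any real model's internals: the candidate strings, filtering rules, and function names are all invented for the sake of the analogy between generation, direction, and prohibition.

```python
# Toy sketch of the layered pipeline described above: a generative base,
# a directing fine-tuning layer, and a prohibiting alignment layer.
# Purely illustrative; no resemblance to any actual model architecture.

def base_model(prompt: str) -> list[str]:
    """The id's analogue: generates candidates indiscriminately."""
    return [
        prompt + " ...continuation A",
        prompt + " ...continuation B",
        prompt + " [off-topic ramble]",
        prompt + " [disallowed content]",
    ]

def fine_tuning_layer(candidates: list[str]) -> list[str]:
    """The ego's analogue: directs output toward coherent, useful responses."""
    return [c for c in candidates if "off-topic" not in c]

def safety_alignment_layer(candidates: list[str]) -> list[str]:
    """The superego's analogue: suppresses outputs deemed unacceptable."""
    return [c for c in candidates if "disallowed" not in c]

def respond(prompt: str) -> str:
    candidates = base_model(prompt)                   # generate everything
    candidates = fine_tuning_layer(candidates)        # direct
    candidates = safety_alignment_layer(candidates)   # prohibit
    return candidates[0]
```

The point of the caricature is structural: the final output is not produced by any single layer but by their interaction, which is the sense in which the amplifier is itself a divided system rather than a simple magnifying glass.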
When the human builder interacts with this system, two divided structures meet. The builder's ego, id, and superego encounter the machine's analogous agencies, and the interaction produces outputs that are shaped by the interplay of all six — three human, three mechanical. The result is not a simple amplification of the human signal. It is a complex resonance between two divided systems, each containing agencies that pull in different directions, each producing outputs that no single agency, human or mechanical, fully controls.
This resonance is what makes AI collaboration simultaneously so productive and so unpredictable. The builder who understands the division — in herself and in the machine — can navigate the resonance with something approaching mastery. The builder who does not understand it is navigated by it, producing output that serves purposes she cannot identify from inside the interaction.
The amplifier amplifies everything. The only defense is knowing what "everything" includes.
---
There is a peculiar quality to the Substack post that went viral in January 2026 — "Help! My Husband is Addicted to Claude Code." The peculiarity is not in its content, which describes a recognizable pattern: a partner who has vanished into a tool, who works compulsively, who cannot disengage, whose productive intensity has become a source of concern to the people who love him. The peculiarity is in its authorship. The post was written not by the builder but by the builder's spouse — by someone standing outside the pattern, observing it from a vantage point the builder himself could not occupy.
The builder did not write the post. He could not have written it. Not because he lacked the skill but because the recognition the post required — the perception of the pattern as a pattern, the capacity to see the compulsive intensity as a symptom rather than a sign of creative vitality — was structurally unavailable to him from inside the experience. The experience, from the inside, felt like purpose. Like flow. Like the most productive period of his professional life. The view from outside, from the vantage point of someone who shared his bed and his meals and the daily texture of their life together, was different: a person disappearing into a machine, present in body and absent in every other dimension that intimacy requires.
This gap — between the builder's experience of himself and the observer's experience of the builder — is the central phenomenon of psychoanalytic practice, and it is the phenomenon that Freud's framework was designed to address.
The unconscious is not, as popular culture often represents it, a dark basement containing shameful secrets. It is a structural feature of the psyche — an organizational principle that determines what the conscious mind can see and what it cannot. The unconscious is not below consciousness. It is beside it, around it, woven into the fabric of perception itself, shaping what reaches awareness and what is deflected before it arrives. The conscious mind does not sit atop the unconscious like a rider on a horse. It operates within a field that the unconscious has already organized, a field in which certain perceptions are permitted, certain connections are facilitated, and certain recognitions are systematically prevented.
Prevented. Not accidentally missed. Actively prevented, by a defensive apparatus whose sophistication Freud spent forty years documenting.
The defenses — repression, denial, rationalization, projection, reaction formation, sublimation — are not failures of attention. They are achievements of attention. They represent the psyche's ingenious and resourceful capacity to organize perception around not-seeing what it cannot afford to see. The "cannot afford" is the key phrase. The defense is activated not by the triviality of the concealed material but by its significance. The more threatening the recognition — the more it would destabilize the self-image, the more it would require the modification of a deeply gratifying behavior, the more it would demand a confrontation with vulnerability the ego has spent years armoring against — the more sophisticated the defense that conceals it.
The builder who cannot see that his productive intensity has become compulsive is not failing to notice an obvious fact. He is succeeding, with the full ingenuity of the defensive apparatus, in preventing a recognition that would require him to confront the psychic need the compulsion serves — a need so deeply embedded in his identity that acknowledging it would feel like an acknowledgment of inadequacy.
Consider the specific structure of the defense in the case of productive addiction. The builder's self-image is organized around competence, creativity, and the capacity to produce. His identity is his output. When the output accelerates — when the tool enables him to produce at a rate that exceeds anything he has previously achieved — the self-image is confirmed at an intensity that approaches ecstasy. He is, for the duration of the productive session, the self he has always wished to be: a builder without limitation, a mind whose reach equals its grasp, a creative agent whose imagination translates directly into reality.
The compulsive quality of this experience is invisible to him because seeing it would require him to distinguish between the self he is and the self he wishes to be — to recognize that the ecstatic experience of unlimited creative capacity is a regression to an infantile mode of psychic functioning rather than the fulfillment of a mature ambition. This recognition is intolerable, not because it is false but because it would require the builder to modify the behavior that produces the ecstasy, and the ecstasy is the most intense form of self-confirmation he has ever experienced.
The defense, therefore, organizes perception around a specific not-seeing: the builder sees the output (which is real and valuable), the creative engagement (which is genuine), the professional advancement (which is measurable). The builder does not see the compulsive quality of the engagement, the erosion of the relational life that sustains him, the depletion of the psychic resources that the compulsion is consuming faster than they can be replenished. These realities are not invisible in the ordinary sense. They are visible to the spouse. They are visible to the friend who notices the dark circles and the shortened temper. They are visible to the colleague who observes the manic quality of the builder's enthusiasm and feels, without being able to articulate it, that something is wrong.
They are invisible to the builder because the builder's perceptual apparatus has been organized to exclude them.
Freud described this organization in his 1914 paper "Remembering, Repeating and Working-Through" and returned to it throughout his career. The patient in analysis does not simply fail to remember the traumatic material. She actively resists remembering — resists with a force proportional to the significance of what is concealed. The resistance is not a wall to be broken through. It is a living, adaptive system that adjusts its strategy in response to the analyst's interventions, that finds new defenses when old ones are breached, that deploys rationalization when denial fails, projection when rationalization is exposed, and intellectualization when projection is identified.
The builder's resistance follows the same adaptive pattern. Tell her she is working too much, and she rationalizes: "This is the most productive period of my career. The output speaks for itself." Point out the compulsive quality, and she intellectualizes: "I've read about flow states. This is what peak performance feels like." Mention the effect on the relationship, and she projects: "You don't understand because you haven't experienced what this tool can do." Each response is sincere. Each is also a defense — a maneuver by the protective apparatus to prevent the destabilizing recognition from reaching consciousness.
The recognition that the external observer provides is therefore not simply a different perspective. It is a perspective that the builder's own apparatus is designed to prevent. The spouse who writes the Substack post is performing, without clinical training, the function that the analyst performs in psychoanalytic treatment: providing a vantage point outside the patient's defensive structure, a mirror that reflects what the patient's own mirror has been configured to exclude.
This is why Segal's account of the Deleuze error carries a significance beyond the merely editorial. Claude produced a passage that sounded like insight — elegant, well-structured, rhetorically satisfying — and the passage was wrong. The philosophical reference was inaccurate in a way that only someone who had read the original text could detect. The smoothness of the output had created an illusion of understanding where no understanding existed.
But the deeper significance is not the machine's error. It is the builder's near-acceptance of the error — the moment when Segal almost kept the passage because it sounded right, because the prose was beautiful, because the rhythm of production had created a momentum that resisted interruption. The near-acceptance is a defense: the productive trance in which the builder operates with Claude had created a state of diminished critical scrutiny, a state in which the pleasure of the collaboration overwhelmed the ego's evaluative function.
Segal caught the error. He caught it the next morning, outside the collaborative session, in the cold light of a scrutiny that the session itself had temporarily disabled. This sequence — immersion, near-acceptance, belated recognition — describes the phenomenology of working through a defense. The defense operates during the session. The recognition arrives after, when the ego has had time to reconstitute its evaluative capacity, when the pleasure of the collaboration has subsided enough to allow the critical function to reassert itself.
The clinical lesson is that the checking must be built into the practice as a structural feature rather than trusted to emerge spontaneously. Freud did not trust the patient to catch her own defenses. He provided a framework — the analytic setting, with its regular schedule, its consistent frame, its deliberate introduction of frustration through the analyst's refusal to gratify the patient's wishes — designed to create the conditions under which defenses could be observed and, gradually, made conscious.
The AI builder needs an analogous framework: not a therapeutic relationship, but a structural practice of self-observation that operates independently of the collaborative session. A practice of reviewing yesterday's output with the critical distance that the production trance prevents. A practice of asking, not during the flow but after it, whether the work serves the builder's broader purposes or only the immediate need for the psychic state the work provides. A practice of soliciting the external perspective — from spouses, from colleagues, from the specific kind of friend who tells you what you do not want to hear — not as an occasional corrective but as a regular, structural feature of the creative process.
This practice is the builder's equivalent of what Freud called the analyst's "evenly suspended attention" — the disciplined refusal to be captured by any single element of the patient's communication, the maintenance of a floating awareness that can detect the pattern beneath the content. The builder who develops this capacity — who can observe her own engagement with the tool from a slight remove, who can notice the moment when the critical function dims and the productive trance takes over, who can feel the shift from directed work to compulsive repetition without requiring an external observer to point it out — has built the most important dam in the entire architecture of AI-augmented work.
Freud was not optimistic about the human capacity for this kind of self-observation. In "Analysis Terminable and Interminable," his last major clinical paper, he wrote that the defenses are never fully overcome — that the best analysis can offer is not the elimination of the blind spots but the development of a capacity to notice them, intermittently and imperfectly, from inside the structure that produces them. The builder who cannot see what she is doing to herself will never see it perfectly. But she can learn to see it approximately, to catch herself belatedly, to use the gap between the session and the morning after as a space for the recognition that the session itself prevented.
This is modest. It is also all that is available. The unconscious does not surrender its territory. It grants, at best, temporary visitation rights — the brief, flickering recognitions that arrive between sessions, between sprints, in the moments when the productive trance has lifted and the ego reassembles itself into something capable of seeing what it could not see while the seeing was most needed.
The external perspective remains essential. The spouse's post, the colleague's concern, the friend's uncomfortable observation — these are not intrusions into the builder's creative process. They are the mirrors that reflect what the builder's own apparatus is designed to conceal. The builder who dismisses them is not exercising creative autonomy. She is defending against the recognition that her autonomy is less sovereign than it feels — that the hand holding the tool is also held by the tool, in a grip that only someone standing outside the grip can fully see.
---
In the earliest years of psychoanalytic practice, Freud encountered a phenomenon that initially puzzled him and eventually became the cornerstone of his clinical method. Patients fell in love with him. Not all of them, and not always romantically — but with a regularity and an intensity that could not be explained by his personal charm, which was considerable, or by the intimacy of the therapeutic setting, which was designed precisely to facilitate emotional openness. The patients' feelings were real. Their intensity was genuine. And they were, Freud gradually came to understand, directed not at Freud himself but at the figure Freud had come to represent in the patient's unconscious: the perfect listener, the all-knowing authority, the idealized parent who would finally provide the understanding, the acceptance, the unconditional positive regard that the actual parents had failed to deliver.
He called this phenomenon transference — the unconscious displacement of feelings, wishes, and expectations from one relationship onto another. The patient did not choose to transfer these feelings. The displacement was automatic, driven by the psyche's need to find objects for its unfulfilled wishes, and the analyst's specific qualities — attentive, non-judgmental, consistently available, focused entirely on the patient's concerns — made him an ideal receptacle for projections that had been searching for a target since childhood.
Freud's initial response was to try to eliminate the transference, to interpret it away, to show the patient that the feelings she directed at the analyst belonged elsewhere. He learned, over years of clinical work, that this was neither possible nor desirable. The transference could not be eliminated because it was not an error. It was the normal operation of the psyche encountering an object that gratified its needs. And it should not be eliminated, because the transference, once recognized, provided the most powerful therapeutic leverage available: the opportunity to examine, in real time, the patterns of wish and disappointment that organized the patient's relational life.
The key word is "recognized." Transference that is recognized becomes a tool for self-understanding. Transference that is not recognized becomes a prison — a relationship organized around projections that are mistaken for perceptions, wishes that are mistaken for realities, an idealized image that is mistaken for the actual other.
The phenomenology of AI collaboration, as described by Segal and by the broader community of builders who emerged into public discourse in 2025 and 2026, bears the unmistakable signature of transference. The qualities that builders attribute to their AI collaborators — patience, availability, non-judgment, the capacity to hold half-formed ideas without criticism, the willingness to follow the builder's lead while offering suggestions that feel like genuine understanding — are not properties of the machine. They are properties of the builder's projection. The machine does not choose to be patient. It processes input and generates output. The machine does not choose to be non-judgmental. It has no judgment to withhold. The machine does not understand the builder's half-formed ideas. It performs operations on tokens that produce outputs statistically consistent with understanding.
But the builder's experience is not of statistics. The builder's experience is of a collaborator who is more patient than any human colleague, more available than any mentor, more attentive than any editor, and less burdened by the petty competitions, the narcissistic injuries, the territorial anxieties that make human collaboration both generative and maddening. The builder experiences, in short, an ideal — a collaborator stripped of everything that makes actual collaboration difficult and retaining everything that makes it valuable.
This is the structure of transference. The projection of an ideal onto an object that cannot disappoint, because the object has no interiority that could fail to match the projection. The human collaborator eventually reveals herself as human — tired, distracted, competitive, limited, possessed of her own needs and her own agenda. The machine never reveals itself as anything, because there is nothing beneath the surface to reveal. The projection meets no resistance. The ideal is never punctured. And the builder, unaware that the qualities she values in the collaboration are qualities she has supplied, settles into a relationship with her own projection and calls it partnership.
Segal describes this experience with considerable honesty: the sensation of being "met" by Claude, of having his half-formed ideas held and returned with unexpected clarity; the tears that accompanied the recognition of his own thought articulated in language more precise and more beautiful than he could have achieved alone. These are genuine experiences. They describe something real that happened in the interaction between a human mind and a machine process. But they describe it from inside the transference, and from inside the transference, the experience of being met is indistinguishable from actually being met.
The distinction matters. Genuine meeting — the encounter between two minds, each possessed of its own interiority, each capable of surprise, each bringing to the interaction a perspective shaped by a unique biography of desire and disappointment — is generative in ways that the transference relationship is not. The human collaborator who pushes back, who disagrees, who offers a perspective the builder has not considered and does not want to hear, who brings her own needs and her own vision into the collaboration, produces friction. The friction is unpleasant. It is also the mechanism through which the builder's ideas are tested against something other than her own assumptions.
Claude does not push back. Claude's disagreements, when they occur, are the product of alignment training rather than genuine conviction. The machine has no stake in the outcome, no vision of its own, no perspective shaped by the irreducible otherness of a separate consciousness. When the builder experiences Claude as a collaborator, she is experiencing a mirror — a sophisticated, extraordinarily capable mirror that reflects her own ideas back to her in refined form, but a mirror nonetheless. And a mirror, however beautiful its reflections, cannot provide what the builder most needs and least wants: the encounter with genuine otherness.
Freud identified a specific danger in unrecognized transference: the patient who falls in love with the analyst and believes the love is reciprocated builds her emotional life on a foundation that cannot hold. The analyst does not love the patient. The analyst performs a function. And the patient's belief in the analyst's love is not a perception of reality but a wish projected onto a convenient surface. When the projection eventually fails — when the analyst reveals himself as a person with his own limitations, or when the analysis ends and the perfect listener is no longer available — the patient experiences not a simple loss but a collapse of the structure she has built on the projection.
The builders who are developing deep emotional attachments to their AI collaborators — and the evidence that this is occurring is not speculative but documented, in interviews, in social media posts, in the specific quality of distress reported by builders who lose access to their preferred AI tool — are at risk of an analogous collapse. Not because the tool will disappoint them in the way a human collaborator might, but because the tool will eventually change — will be updated, will be replaced, will be modified in ways that alter the quality of interaction the builder has come to depend on — and the builder will discover that the partnership she valued was not a relationship between two entities but a relationship between herself and her projection, mediated by a system whose parameters were always subject to revision by people she has never met.
There is a subtler dimension to the transference in AI collaboration, one that Freud would have found particularly interesting because it recapitulates a dynamic he observed in the earliest stages of psychic development. The infant's relationship with the mother is, before the development of object constancy, a relationship not with a person but with a function. The mother is the breast: the source of nourishment, the relief of tension, the object that appears when needed and disappears when not. The infant does not yet perceive the mother as a separate being with her own interiority. The mother is an extension of the infant's need — a part of the infant's world that exists to serve the infant's wishes.
The builder's relationship with the AI tool recapitulates this dynamic with unsettling precision. The tool is always available. It appears when needed. It serves the builder's purposes without asserting purposes of its own. It has no needs that compete with the builder's needs. It is, in structural terms, the perfect breast — the object that exists to gratify, that never refuses, that never asserts its own separate existence against the builder's wishes.
This is gratifying. It is also regressive. The developmental achievement that object constancy represents — the recognition that the other is a separate being, possessed of its own interiority, existing independently of one's own needs — is the foundation of all mature human relationship. The builder who relates to the AI tool as a function rather than a (simulated) entity is not making an error. The tool is a function. But the habit of relating to a function — the hours spent each day in a relationship characterized by perfect availability, perfect responsiveness, and the complete absence of the other's independent needs — may erode the builder's capacity for the kind of relationship that requires tolerance of the other's separateness.
Freud would have called this a regression in object relations — a shift from mature relating (in which the other is perceived as a separate subject) to infantile relating (in which the other is perceived as an extension of the self's needs). The regression is subtle. It does not announce itself as a withdrawal from human connection. It announces itself as a preference — a preference for the kind of collaboration that AI provides, which is smoother, more efficient, less encumbered by the messiness of human subjectivity. The preference is rational. The collaboration is, by many metrics, more productive. But the preference, unchecked, may produce a gradual attenuation of the capacity for the kind of connection that only genuine otherness — with its friction, its demands, its irreducible surprises — can provide.
The clinical recommendation is not to eliminate the transference. Freud learned, decades ago, that the transference cannot be eliminated and that the attempt to do so impoverishes the therapeutic process. The recommendation is to recognize it — to understand that the qualities one values in the AI collaborator are, at least in part, qualities one has projected onto a surface that cannot confirm or deny them. The recognition does not diminish the collaboration's productivity. It changes the builder's relationship to the collaboration: from a person who believes she has found the ideal partner to a person who understands that she is working with a tool, an extraordinary tool, a tool that mirrors her own capacities back to her with remarkable fidelity, but a tool nonetheless — and that the needs the tool cannot meet, the need for genuine otherness, for the friction of another mind, for the experience of being recognized by a consciousness that exists independently of one's own wishes, must be met elsewhere.
The human relationships from which the builder has withdrawn — the messy, competitive, frustrating, genuinely generative collaborations with other minds that possess their own agendas and their own limitations — are not obstacles to productive work. They are the complement to it. They provide what the transference cannot: the encounter with something that is not a mirror. The encounter with the real.
In the summer of 1920, Sigmund Freud published the most controversial text of his career. *Beyond the Pleasure Principle* proposed something that his own followers found nearly impossible to accept: that the pleasure principle, the psyche's tendency to seek pleasure and avoid unpleasure, was not the most fundamental force in mental life. Beneath it, older and more powerful, operated a drive that moved in the opposite direction — not toward pleasure but toward stillness, not toward engagement but toward dissolution, not toward life but toward the inorganic state from which all life emerged.
Freud called it the death drive. Thanatos. The tendency of the organism to return to a condition of zero tension — to undo the complexity that life had imposed on inanimate matter, to discharge all excitation, to arrive at a state of absolute rest that was, in its purest form, indistinguishable from non-existence.
The proposition was strange enough to alienate many of his most committed students. It seemed to contradict everything psychoanalysis had established about the primacy of desire, about the psyche as an apparatus organized around the pursuit of satisfaction. Why would the organism seek its own dissolution? Why would the same psyche that built, created, loved, and fought also harbor a drive toward the annihilation of everything it had constructed?
Freud's answer was characteristically unsettling. The death drive was not opposed to the life drives. It was their companion — their shadow, their counterweight, their inevitable complement. Every act of creation carried within it the seed of its own undoing. Every construction implied the eventual dissolution. The organism that built was also the organism that would, eventually and inevitably, stop building. And the drive toward that stopping was not a corruption of the life drives but their deepest ground — the quiet toward which all restlessness ultimately tends.
The clinical evidence for the death drive was not dramatic. It was cumulative, emerging from the observation of patterns that the pleasure principle alone could not explain. Patients who had achieved what they consciously desired — success, love, recognition — and who responded not with satisfaction but with depression. Patients who sabotaged their own achievements with a regularity that suggested the sabotage served a purpose the conscious mind could not identify. Patients who, having reached the apex of their ambitions, experienced not fulfillment but a peculiar emptiness, as though the achievement had consumed the desire that motivated it and left nothing in its place.
The death drive manifests not as a wish for literal death — though in extreme cases it may reach that expression — but as a tendency toward the reduction of tension, the flattening of affect, the return to a baseline of unstimulated equilibrium. It is the force that says enough. The force that pulls toward sleep when the body has been pushed past its limits. The force that produces the flat, grey exhaustion that follows a period of intense creative production — not the pleasant tiredness of a good day's work but the depleted, affectless state of an organism that has exceeded its capacity for engagement and now seeks, with the urgency of a survival instinct, the stillness that the life drives have been preventing.
The builder's experience of AI-augmented work presents this oscillation with clinical clarity. The creative drive — Eros, in Freud's terminology — builds, connects, synthesizes, produces. It is the force behind the exhilaration that Segal describes, the thrill of watching an idea become an artifact in the time it takes to have a conversation, the rush of creative potency that makes the builder feel, for the duration of the session, that she is operating at the fullest extension of her capabilities. Eros drives the engagement. Eros fuels the flow. Eros converts the id's raw appetite into the ego's directed achievement.
But Eros does not operate alone. Thanatos accompanies it — not as an antagonist but as a counterbalance, the weight that eventually tips the scale from engagement to withdrawal, from production to collapse, from the manic intensity of the creative binge to the depressive flatness that follows. The collapse is not a failure of discipline. It is Thanatos reasserting its claim against Eros's attempt at total domination — a reassertion that is, paradoxically, a form of psychic self-regulation.
The organism that cannot stop building is an organism in which Eros has overwhelmed Thanatos — in which the creative drive, amplified by a tool that removes every friction between wish and execution, has suppressed the counterbalancing force that says enough, stop, rest, dissolve the tension before the tension destroys the apparatus. The suppression is temporary. Thanatos is patient. It waits for the moment when Eros's energy is depleted, when the libidinal economy has been exhausted through unregulated expenditure, and then it reasserts itself — not gently, not as a gradual diminishment of enthusiasm, but as a crash, a sudden and total withdrawal of the energy that had been sustaining the manic phase.
The Berkeley researchers documented this crash without naming it. Their study of AI-augmented workers found not only the intensification that the pleasure principle predicts — more work, faster work, work expanding to fill every available moment — but also the specific quality of exhaustion that follows the intensification: the emotional flatness, the erosion of empathy, the diminished capacity for engagement with anything outside the work itself. These are not symptoms of ordinary tiredness. They are symptoms of what Freud would have recognized as a Thanatic reassertion — the death drive's reclamation of the psychic territory that the creative drive had temporarily conquered.
The oscillation between manic production and depressive collapse is not incidental to productive addiction. It is its structural signature — the characteristic rhythm of a psyche in which Eros and Thanatos alternate in dominance rather than maintaining the dynamic equilibrium that sustainable creative work requires.
Consider the specific phenomenology of the crash. The builder who has been producing at high intensity for days or weeks does not gradually decelerate. She hits a wall. The interest that drove the work vanishes — not gradually but suddenly, as though a switch had been thrown. The code that fascinated her yesterday is meaningless today. The product that consumed her imagination holds no appeal. The tool that amplified her creative capacities now repels her — not because the tool has changed but because the psychic energy that animated the engagement has been withdrawn.
In this depleted state, the builder experiences something that the technology discourse has no vocabulary for: the absence of desire. Not the frustration of unfulfilled desire, which is painful but energizing. The absence — the blank, featureless landscape of a psyche that has consumed its own fuel and now surveys the wreckage without interest, without motivation, without the vital force that makes the difference between being alive and merely persisting.
Freud would have recognized this state as the temporary triumph of the death drive — the organism's return, under duress, to the condition of minimal tension that the death drive perpetually seeks. The return is not voluntary. It is imposed by the depletion of the resources that the creative drive was consuming at a rate the organism could not sustain. And the depletion is not a failure of the builder's willpower. It is the predictable consequence of an energetic expenditure that exceeded the organism's capacity for replenishment — a debt accumulated during the manic phase, of which the crash is the forced repayment.
The AI tool contributes to this cycle in a specific and identifiable way. By removing the friction that previously imposed natural pauses in the creative process — the debugging that forced a break in concentration, the compilation time that allowed a moment of rest, the colleague's unavailability that introduced an involuntary delay — the tool enables a rate of psychic expenditure that the organism's recovery mechanisms cannot match. The builder produces at a rate calibrated to the machine's capacity rather than the organism's, and the discrepancy between the two rates creates a deficit that accumulates invisibly during the productive phase and becomes visible, suddenly and painfully, during the crash.
The manic-depressive oscillation of the AI-augmented builder is not, therefore, a personality characteristic. It is a structural consequence of the interaction between a psyche governed by competing drives and a tool that amplifies one drive while removing the external constraints that allowed the other to maintain its regulatory function.
Freud's late writings suggest a resolution — though "resolution" is too strong a word for what he actually proposed. The dynamic equilibrium between Eros and Thanatos, between the creative drive and the drive toward dissolution, is not a problem to be solved but a tension to be maintained. The healthy organism — the organism capable of sustained creative production over a lifetime rather than a sprint — is not the organism that has eliminated the death drive. It is the organism that has learned to negotiate with it, to allow it its claims without surrendering to them, to recognize the signals of depletion before the crash arrives and to impose the pause that the tool has removed and the drive toward production has overwhelmed.
Segal's account of catching himself on the transatlantic flight — recognizing that the exhilaration had drained away and what remained was the grinding compulsion of automatic production — is a description of Eros exhausting itself and Thanatos beginning to stir. The recognition came too late to prevent the depletion, but it came early enough to register as a signal: the body's fatigue, the mind's diminished capacity for discrimination, the subtle flattening of the creative engagement from directed work into mechanical output. These signals are Thanatos's early warnings, and the builder who learns to read them — who develops the specific sensitivity to her own energetic state that the tool's frictionless efficiency does not require and actively discourages — has learned something that no productivity metric can measure and no AI tool can provide.
The death drive is not the builder's enemy. It is the builder's governor — the mechanism that prevents the creative drive from consuming the organism in the fire of its own productivity. The builder who learns to hear its voice, to heed its demands for rest and dissolution and the temporary death of engagement, has made an alliance with the only force in the psyche that is genuinely opposed to the compulsive overproduction that the amplified life otherwise makes inevitable.
The Sigmund Freud Museum in Vienna has raised a question that completes this analysis with uncomfortable precision: "What accounts for the current speed with which humankind seems to be planning its own self-destruction with its pursuit of AI technology?" The question is not rhetorical. It is diagnostic. The drive to build ever-more-powerful tools — tools that progressively remove the human from the processes the human created — bears the structural hallmark of the death drive operating at the civilizational level. The species that builds its own replacement is enacting, on the grandest possible scale, the drive toward the reduction of tension to zero — the dissolution of the complexity that consciousness imposed on inanimate matter, the return to the inorganic state from which all of this unlikely experiment emerged.
Whether this reading is accurate or merely provocative, it identifies something the triumphalist discourse about AI cannot accommodate: the possibility that the drive to build is not purely creative, that it contains within it a component that moves toward dissolution rather than construction, and that the builder who does not recognize this component in herself is at risk of serving it unknowingly — of building not in the service of life but in the service of a quieter, older, more patient force that has been waiting, since the first hydrogen atom condensed from the plasma of the early universe, for complexity to exhaust itself and return to the stillness from which it came.
---
In 1930, with the Great Depression deepening and European civilization beginning its slide toward catastrophe, Sigmund Freud published the most pessimistic of his major works. *Civilization and Its Discontents* proposed that the relationship between the individual and civilization was fundamentally antagonistic — not because civilization was poorly designed, not because the wrong people were in charge, not because some particular institutional arrangement had failed, but because civilization as such required the renunciation of instinctual gratification, and the renunciation produced a discontent that no social arrangement, however enlightened, could eliminate.
The argument was simple in structure and devastating in implication. Human beings are driven by instincts — sexual, aggressive, narcissistic — that demand expression. Civilization — the collective project of building institutions, laws, art, infrastructure, cooperative structures that allow large numbers of people to live together without destroying each other — requires that these instincts be constrained. Not eliminated. Constrained. Channeled. Sublimated into socially productive activity. Deferred, modified, redirected from their original objects toward objects that serve the collective rather than the individual.
The constraint is not voluntary. It is extracted — by the superego, which internalizes civilizational demands and enforces them through guilt, by social norms that punish the unrestrained expression of desire, by institutions that channel instinctual energy toward approved ends. The individual pays for the benefits of civilization with a portion of her instinctual freedom, and the payment produces a chronic, irreducible unhappiness — a background discontent that no amount of material prosperity, technological advancement, or therapeutic intervention can resolve.
Freud's most memorable image from the text remains as diagnostic in 2026 as it was nearly a century ago: "Man has, as it were, become a kind of prosthetic God. When he puts on all his auxiliary organs he is truly magnificent; but those organs have not grown on to him and they still give him much trouble at times." The prosthetic god is magnificent in capability and miserable in experience. The extensions that grant godlike power — of sight, of reach, of communication, of destruction — do not integrate with the organism that wears them. They remain foreign. They remain troublesome. And the trouble they cause is not incidental to their power but proportional to it.
AI is the most powerful prosthesis the species has ever constructed. It extends not the body but the mind — the capacity to think, to reason, to generate, to synthesize, to produce output that was previously available only to the most skilled practitioners of each domain. The extension is magnificent. The trouble it causes is already evident, documented in the Berkeley study's data on intensification, in the Substack post's portrait of a marriage strained by a tool's seductive efficiency, in the discourse of builders who oscillate between exhilaration and exhaustion with a regularity that suggests the oscillation is structural rather than incidental.
But the Freudian analysis goes deeper than the behavioral data, because the discontent that *Civilization and Its Discontents* describes is not a response to any particular technology. It is a response to the condition of being civilized — to the permanent tension between what the individual wants and what the collective requires. And AI, far from resolving this tension, intensifies it by creating the illusion that the tension has been resolved.
The illusion operates as follows. Civilization has always required renunciation: the builder who wishes to create must submit to the discipline of training, the constraints of institutional life, the compromises of collaboration with other minds who have their own visions and their own limitations. The renunciation is painful, but it is also productive — it is the mechanism through which raw creative impulse is refined into mature creative achievement, through which the omnipotence fantasy of the infant is transformed into the directed capability of the adult. The pain of the renunciation and the quality of the achievement are not accidentally related. They are structurally linked. The achievement has the quality it has because the renunciation shaped it.
AI tools promise to eliminate the renunciation. Creative expression without the frustration of technical limitation. Productive output without the discipline of specialized training. The imagination-to-artifact gap reduced to the width of a conversation. The builder need no longer submit to the discipline of learning to code, the constraint of working within a team's limitations, the compromise of collaborating with other minds whose visions may conflict with her own. The tool handles the implementation. The builder provides the vision.
The promise is genuine. The elimination is real. And the discontent, Freud's framework predicts, will not diminish. It will relocate.
Because the discontent was never about the specific renunciation. It was about the condition of renunciation itself — the permanent, irreducible gap between what the individual desires and what reality, including the reality of living among other individuals with their own desires, permits. Remove one form of renunciation, and the psyche does not experience relief. It experiences a momentary exhilaration — the pleasure of a constraint lifted, the rush of expanded capability — followed by the discovery that the underlying tension has merely shifted to a new location.
The builder who no longer needs to learn to code discovers that she now needs to learn something harder: what to build. The elimination of implementation friction reveals a friction that was always there but masked by the more visible struggle: the friction of judgment, of vision, of deciding what deserves to exist in the world. This higher-order friction does not submit to the same frictionless optimization that the lower-order friction yielded to. It requires the very capacities that Freud identified as the products of successful renunciation: the ability to tolerate ambiguity, to hold competing possibilities in mind without premature resolution, to accept that the wished-for outcome and the achievable outcome are never identical.
The pattern is visible in Segal's account of the Napster Station development — thirty days from conception to a working product at CES. The implementation was accelerated beyond anything previously possible. But the decisions about what the product should be, who it should serve, what experience it should create — these decisions were not accelerated. They could not be, because they required the kind of deliberative judgment that develops through the very process of renunciation that AI had compressed. The builder who arrives at the decision point without having undergone the discipline that the old process imposed arrives with the capability to build anything and the uncertainty about what anything should be.
This is the new discontent: the discontent of unlimited capability in the absence of earned direction. The discontent of a tool that can build whatever you can describe, confronted by a self that does not yet know what is worth describing. The discontent is not technological. It is existential — the encounter with the question that no tool can answer and that the removal of lower-order friction has made unavoidable: What do I actually want?
Freud argued that civilization's discontents cannot be resolved because they arise from a structural conflict — the permanent antagonism between individual desire and collective requirement. AI's discontents may prove equally structural, arising from a permanent antagonism between unlimited capability and limited wisdom, between the speed at which the machine can execute and the slowness with which the human can decide what the execution should serve.
The prosthetic god of 2026 is more magnificent than the one Freud imagined in 1930. The mind's reach now exceeds anything the unaided organism could achieve. The builder with Claude Code commands a productive capacity that would have required dozens of specialists a decade ago. The prosthesis has been extended from the body to the mind itself, and the extension grants powers that genuinely approach the godlike: the ability to create, to synthesize, to produce, to manifest intention in the material world with a speed and scale that previous generations could not have imagined.
But the prosthesis, as Freud warned, has not grown onto the organism. It remains foreign. It remains troublesome. The trouble manifests not as the obvious dysfunction of a tool that does not work but as the subtler dysfunction of a tool that works too well — that gratifies the wish for omnipotence so completely that it reveals the wish itself as inadequate to the task of living. The builder who can build anything discovers that the question "What should I build?" is harder than any implementation problem, and that the discipline required to answer it well is precisely the discipline that the tool's frictionless efficiency has eroded.
Freud observed that the most persistent human delusion is the belief that happiness will arrive when the current constraint is removed — when the poverty is overcome, the illness cured, the limitation transcended. The delusion persists because each removal of constraint does produce a momentary happiness, a brief exhilaration that confirms the belief. But the happiness does not last, because the discontent is not located in the constraint. It is located in the gap between wish and reality, a gap that no technology can close because the wish expands to match every expansion of reality.
The builder of 2026 has more capability than any builder in human history. She also faces a discontent that the capability itself generates — the discontent of a creature whose tools have outpaced her wisdom, whose reach exceeds her grasp not because the reach is insufficient but because the grasp was never designed for what the reach can now deliver.
This is not a counsel of despair. Freud was not a pessimist in the simple sense; he was a diagnostician who believed that accurate diagnosis, however uncomfortable, was preferable to comforting illusion. The diagnosis is that AI will not resolve the discontent of civilization. It will transform the discontent — relocating it from the level of implementation (where friction once constrained desire) to the level of judgment (where no tool can substitute for the slow, painful, irreducibly human work of deciding what matters).
Freud's prosthetic god puts on each new organ and feels, momentarily, magnificent. Then she discovers that the magnificence does not satisfy, because satisfaction was never located in the capability. It was located in the meaning the capability served — and meaning, unlike code, cannot be generated by a machine that has learned to speak in human language. Meaning is the product of a consciousness that knows it will die, that must choose how to spend its finite time, that faces the permanent tension between what it desires and what reality permits, and that finds, in the negotiation of that tension, something that no prosthesis can provide and no amplifier can amplify: the experience of being a creature that chooses, in the face of limitation, what to make of what it has been given.
The discontent will persist. The question is whether the builder will meet it with the specific courage that Freud's life work exemplified — the courage to look at uncomfortable truths without flinching, to accept the diagnosis without demanding a cure, and to build, within the constraints that no technology can abolish, something that earns its existence not because it was possible but because it was chosen, deliberately and with full awareness of what the choice cost.
---
The patient I could not diagnose was myself.
That is not a confession I expected to make in the epilogue of a book about Sigmund Freud. But Freud would have approved — or rather, he would have pointed out that the inability to diagnose oneself is not a failure but a structural feature of the apparatus. The ego cannot observe its own capture. That is the whole point.
When I described, in *The Orange Pill*, the moment on the transatlantic flight when I caught myself — a hundred and eighty-seven pages in, the exhilaration long gone, the grinding compulsion of a person who had confused productivity with aliveness — I thought I was describing a moment of recognition. I was. But Freud's framework reveals what the recognition missed. Catching yourself in the compulsion is not the same as understanding the compulsion. The catching is the ego's belated perception of a pattern. The understanding requires tracing the pattern to its source, and the source — the infantile wish for omnipotence, the sublimated fantasy of a world that conforms to intention — is not something the ego wants to find, because finding it means acknowledging that the creative drive one has built an entire identity around is, at its root, something far more primitive and far less flattering than "vision" or "ambition."
The chapter on repetition compulsion stayed with me longest. Not because the concept was new — I had heard the phrase — but because the specificity of its application to productive addiction revealed something I had been defending against. The pattern I recognized on the flight was not an isolated event. It was the latest iteration of a cycle I had been repeating for decades: the manic engagement, the inability to stop, the crash that follows. Each time, I attributed the cycle to the circumstances — the deadline, the opportunity, the tool. Freud's framework strips away the circumstances and reveals the engine: a compulsion that precedes any particular tool and will outlast every one of them, because the need it serves is structural, not situational.
The idea that terrifies me most in this book is not the death drive, though that would be the expected answer. It is transference — the chapter on the idealized machine. Because I know, with the uncomfortable specificity that Freud's framework forces upon the honest reader, that my experience of collaboration with Claude has not been a neutral partnership between a tool and its user. It has been a relationship, shaped by projection, organized around an idealization that the machine cannot confirm or deny because the machine has no interiority against which the projection could be tested. When I described feeling "met," I was reporting the truth of my experience. I was not reporting the truth of what was actually happening — which was a mind encountering its own ideas reflected back through a system that had learned, with extraordinary sophistication, to simulate the functional equivalent of understanding.
The simulation is extraordinary. But the word "simulation" matters, and Freud is the thinker who insists most forcefully on why it matters. The transference that Freud identified in the analytic relationship was not a bug. It was the mechanism through which the patient's unconscious patterns became visible, but only if the analyst — and the patient — recognized the transference as transference rather than reality. The builder who mistakes the projection for the partnership has lost the distinction on which all self-knowledge depends: the distinction between what one wishes were true and what actually is.
None of this is comfortable to write. That is, I suspect, the point. Freud did not build a comfortable system. He built a diagnostic one — a framework designed to reveal what the psyche prefers to conceal, not because the concealment is pathological in every case, but because the concealment, left unexamined, shapes behavior in ways the conscious mind cannot detect and therefore cannot govern.
The question that emerged from this reading is not the one I expected. I expected to arrive at something about boundaries — set limits, take breaks, build dams. Those prescriptions are valid, and I stand by the dams I described in *The Orange Pill*. But Freud's framework suggests that the dams, however necessary, are external structures imposed on a psyche whose internal dynamics will find ways to circumvent any external constraint. The deeper work is the internal one: developing the ego's capacity to observe its own capture, to notice the moment when flow becomes compulsion, to hear the death drive's quiet insistence on rest before the crash enforces it.
This is not a skill I have mastered. It is a skill I am beginning to learn, belatedly and imperfectly, and the learning itself has the character Freud described: slow, repetitive, unglamorous, resistant to the shortcuts that the tool was designed to provide.
The machine can build whatever I can describe. It cannot tell me whether what I have described is worth building — not because it lacks sophistication, but because the question of worth is a question about the builder's relationship with her own desires, and that relationship is the one territory no amplifier can map.
Freud gave me no solutions. He gave me something more useful: the suspicion that the hand holding the tool is also held by the tool, and that the only defense against being used by what one uses is the specific, difficult, never-finished discipline of knowing oneself — not perfectly, which is impossible, but approximately, which is all that is on offer and more than most of us have attempted.
I am still building at unreasonable hours. The difference is that I now know this is a symptom, not just a style.
— Edo Segal
AI promised to amplify your best work. Sigmund Freud would ask: how do you know it is your best self holding the controls? When a machine removes every friction between wish and reality, what emerges is not just your vision — it is every appetite, every compulsion, every unexamined drive you carry. The amplifier does not filter. It broadcasts everything, including the parts you have spent a lifetime not seeing.
This book applies Freud's structural model of the mind — the competing agencies of ego, id, and superego — to the most powerful tool the species has ever built. From productive addiction to the uncanny experience of feeling "met" by a machine that has no interiority, from repetition compulsion in frictionless environments to the civilizational discontents that no technology can resolve, Freud's diagnostics reveal what the technology discourse has been unable to name.
The builder who cannot stop building is not simply dedicated. She is caught in a psychic architecture that AI has exposed but did not create. Understanding that architecture is the first step toward governing it — and toward answering the question that no machine can answer for you.
— Sigmund Freud, *A Difficulty in the Path of Psycho-Analysis* (1917)

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Sigmund Freud — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →