By Edo Segal
I didn't expect a dead British psychiatrist to explain why my engineering team couldn't ship.
We were eighteen months into building an AI-native product — the kind of thing that should have been thrilling, the frontier of everything I'd spent my career chasing. Instead, what I had was a room full of brilliant people who couldn't move. Not because they didn't understand the technology. They understood it better than almost anyone. That was the problem. They understood it well enough to see that the river was rising faster than any of us had predicted, and something in that understanding had frozen them. Meetings that should have been brainstorms became threat assessments. Engineers who used to prototype on instinct started asking permission for everything. The creativity didn't leave the room. It went underground, replaced by a low hum of vigilance I couldn't name.
Then someone handed me Bowlby.
I'd been thinking about dams — my whole framework is about building structures that create habitable space within the torrent of machine intelligence flooding through every industry. But I'd been thinking about dams as systems architecture, as organizational design, as economic scaffolding. What Bowlby showed me is that the most critical dam is invisible. It's relational. It's the felt sense — not the stated policy, not the Slack message from leadership, but the *felt sense* — that the ground you're standing on will hold while you figure out your next move.
The moment I understood that, I saw it everywhere. I saw it in the founder who couldn't pivot because his identity was welded to the product AI had just made obsolete. I saw it in the designer who'd stopped sharing work because every critique now carried the subtext of *a machine could do this cheaper*. I saw it in myself — the nights I couldn't sleep not because I was solving a problem but because I was scanning for threats I couldn't articulate. My attachment system was in alarm. I didn't need a better strategy. I needed a secure base.
Bowlby never saw a large language model. He never had to explain to a room full of adults that their professional identities were built on assumptions the world had just revoked. But he mapped the terrain we're all standing on now with a precision that feels almost prophetic. The protest, the despair, the detachment — I've watched that exact sequence play out in every organization I've worked with over the past two years. And I've watched the sequence break when someone — a leader, a collaborator, a community — provides the thing Bowlby spent his whole life insisting was not optional: the secure base.
This book is about building that base before the river takes it from you.
-- Edo Segal
**John Bowlby (1907–1990)** was a British psychiatrist, psychoanalyst, and developmental psychologist whose work fundamentally reshaped the scientific understanding of early childhood, human bonding, and loss. Born in London to an upper-middle-class family and largely raised by a nanny whose departure at age four he would later describe as a formative loss, Bowlby studied psychology at Trinity College, Cambridge, trained in medicine at University College Hospital, and qualified as a psychoanalyst at the British Psychoanalytic Institute. His early clinical work with maladjusted and delinquent children at the London Child Guidance Clinic led to his landmark 1944 paper "Forty-Four Juvenile Thieves," which drew a direct connection between maternal deprivation and emotional disturbance. Commissioned by the World Health Organization in 1950, his monograph *Maternal Care and Mental Health* (1951) influenced child welfare policy across the Western world. Over the next three decades, Bowlby developed attachment theory — synthesizing insights from psychoanalysis, ethology, evolutionary biology, cognitive science, and systems theory — in his magnum opus, the trilogy *Attachment and Loss* (1969, 1973, 1980). His concepts of the secure base, internal working models, and the protest-despair-detachment sequence of separation became foundational to developmental psychology, clinical practice, and an expanding body of research into adult attachment, organizational behavior, and the neuroscience of social bonding. He was appointed CBE in 1972 and continued writing and lecturing until shortly before his death on the Isle of Skye.
In the summer of 1944, a young British psychiatrist named John Bowlby sat in his office at the London Child Guidance Clinic and reviewed the case files of forty-four juvenile thieves. What he found in those files would take three decades to fully articulate, would fundamentally restructure the field of developmental psychology, and would establish a principle so simple it seems absurd that anyone ever needed to prove it: that a child's capacity to explore the world depends on having a safe place to return to. The infant crawls across the room because her mother is behind her. The toddler ventures into the garden because the door to the house is open. The adolescent takes intellectual risks because somewhere, encoded in the neural architecture of her earliest relationships, is the knowledge that the world will catch her if she falls. Remove that knowledge, Bowlby demonstrated with clinical precision and evolutionary logic, and exploration does not merely diminish. It collapses. The organism shifts from a mode of curiosity and engagement to a mode of vigilance and self-protection. It stops reaching outward and begins scanning for threat.
Eighty years later, an entire civilization finds itself in precisely this position. The arrival of artificial intelligence that speaks human language, writes human code, generates human imagery, and reasons through human problems has destabilized the ground beneath hundreds of millions of knowledge workers, creative professionals, and skilled practitioners in a matter of months rather than decades. The question most frequently asked about this disruption is economic: who will lose their jobs? The second most frequently asked question is technical: what can the machines actually do? But the question that Bowlby's framework compels — the question almost nobody is asking — is psychological, and it is more fundamental than either: Do the people confronting this disruption have a secure base from which to engage with it?
This is not a therapeutic nicety. It is an evolutionary imperative. The attachment system that Bowlby spent his career describing is not a metaphor, not a soft psychological concept to be invoked when the hard analysis is finished. It is a biological system, forged across millions of years of mammalian evolution, that governs the organism's capacity to learn, adapt, and respond to novelty. When the attachment system registers safety — when the organism perceives that its primary relationships are intact, that its base of operations is secure, that retreat is possible if exploration becomes overwhelming — it releases the organism into what Bowlby called the exploratory system. The child plays. The adult creates. The worker experiments with new tools and methods. But when the attachment system registers threat — when the ground shifts, when the familiar becomes unreliable, when the signals from the environment say danger rather than safety — the exploratory system shuts down. The organism stops playing and starts surviving.
Edo Segal's central metaphor for the AI moment is a river. Intelligence, Segal argues, is flowing through the world in increasing volume and velocity — a river of computational capability that is rising around every institution, every profession, every individual. Human beings are not fish, born to swim in that current. They are beavers: creatures who survive by building dams, structures that create calm pools of habitable space within the torrent. The dam does not stop the river. It does not pretend the river is not rising. It creates the conditions under which the organism can thrive despite the river's power.
Bowlby's contribution to this framework is to identify what the dam is made of. Not policy alone. Not economic redistribution alone. Not retraining programs or universal basic income or any of the structural interventions that dominate the discourse about AI and the future of work, though all of these matter. The dam is made of relational security. The dam is the secure base. It is the organizational culture that says to its workers: your value is not reducible to your current output. It is the manager who says: I see that this transition is disorienting, and I am not going anywhere. It is the community that says: you belong here regardless of whether a machine can do what you do. It is the set of practices — protected rest, maintained boundaries, reliable presence — that allow the attachment system to register safety rather than threat, and thereby release the exploratory system that adaptation requires.
This is not obvious. The dominant narrative about technological disruption treats adaptation as a cognitive challenge. Learn the new tools. Develop new competencies. Acquire new credentials. The assumption is that the obstacle to adaptation is ignorance — that people fail to adapt because they do not know how. Bowlby's framework reveals a different obstacle, one that is prior to and more fundamental than knowledge: the obstacle is alarm. A person whose attachment system is in alarm cannot learn effectively, because the neurobiological state of alarm is designed to narrow attention, restrict cognitive flexibility, and orient the organism toward threat detection rather than creative exploration. The most brilliant retraining program in the world will fail if the person sitting in the classroom is in a state of attachment alarm. She cannot absorb new information because her brain is scanning for danger. She cannot experiment with new approaches because her system is in conservation mode. She cannot tolerate the inevitable failures that learning requires because each failure registers as confirmation that the ground beneath her is giving way.
Consider the clinical evidence. Bowlby's research on institutional care — his landmark 1951 monograph for the World Health Organization — demonstrated that children deprived of a consistent attachment figure showed not only emotional disturbance but profound cognitive impairment. They could not concentrate. They could not sustain goal-directed behavior. They could not learn from experience. Not because they lacked intelligence, but because the precondition for deploying intelligence — a felt sense of relational security — had been removed. The children were not stupid. They were terrified. And terrified organisms do not explore.
The parallel to the contemporary workplace is striking. Knowledge workers in 2024 and 2025 report levels of anxiety, disorientation, and identity threat that map directly onto Bowlby's descriptions of separation distress. A graphic designer who has spent twenty years mastering color theory, composition, and visual storytelling watches as an AI system generates in seconds what took her days. A junior software developer discovers that the entry-level work that was supposed to be his apprenticeship, his secure base within the profession, has been automated before he could build competence. A journalist realizes that the investigative skills she painstakingly developed are now a small fraction of what her employer values, because the AI can produce adequate copy faster and cheaper, and adequacy, it turns out, is what the market was paying for all along.
These are not merely career disruptions. They are, in Bowlby's precise terminology, disruptions of the attachment bond between the person and the practice that served as her secure base. A creative practice is not just a skill set. It is a relationship. The writer who sits down each morning to wrestle with language, the programmer who enters the flow state of debugging, the designer who loses herself in the problem of visual communication — each is engaging in a relationship that provides the psychological functions Bowlby attributed to the attachment bond: proximity (the practice is always there, always available), safe haven (the practice is where one goes when the world is too much), and secure base (the practice is what one launches from into the broader challenges of life). When AI disrupts the practice, it does not merely threaten the economic basis of the career. It disrupts the attachment bond. And the psychological response to disrupted attachment is not rational analysis of options. It is grief.
Bowlby mapped the grief response with extraordinary precision. First comes protest: the organism refuses to accept the loss, insists the attachment figure will return, fights against the reality of separation. In the AI context, protest sounds like this: AI cannot really create, it can only copy. Human work will always be superior. The market will recognize the difference. Clients will come back. This is a bubble. Second comes despair: the organism accepts the reality of the loss and experiences its full emotional weight. The creator stops arguing and starts mourning. She looks at the work she has spent decades building and feels it slipping away. She experiences not just sadness but a profound disorientation, a loss of the self that was defined through the practice. Third, if the organism cannot form a new attachment, comes detachment: a withdrawal from the domain of the loss, a shutting down of the need that was unmet. The creator stops creating. Not because she has decided to pivot or retrain, but because the pain of engaging with a practice that no longer provides security has become unbearable. She adapts by ceasing to care.
Segal's orange pill framework identifies this moment — the moment when the fishbowl cracks, when the assumptions that organized one's entire worldview are suddenly visible as assumptions rather than reality — as simultaneously the most dangerous and most potentially generative moment in the encounter with AI. Bowlby's attachment theory explains why it is both. The crack in the fishbowl is a disruption of the internal working model, and the disruption of the internal working model produces exactly the protest-despair-detachment sequence that loss always produces. But the sequence is not inevitable. If the organism has a secure base — if there is someone or something that provides the conditions of safety while the old model is revised and a new model is constructed — then the disruption can lead not to detachment but to what attachment researchers call earned security: a new, more flexible, more reality-adapted working model built on the foundations of relational support.
The practical implications are immediate and concrete. Organizations that are navigating the AI transition must understand that they are not merely implementing a technology. They are disrupting the attachment bonds that their workers have formed with their practices, their identities, and their sense of professional value. The organization that provides no secure base during this transition — that says learn the new tools or leave, that measures only output and ignores the psychological cost of adaptation, that treats distress as weakness and resistance as stubbornness — will produce a workforce in chronic attachment alarm. That workforce will not adapt. It will protest, despair, and detach. The organization that provides a secure base — that acknowledges the loss, that protects time for adjustment, that maintains the relational structures of support and recognition that allow the attachment system to register safety — will produce a workforce capable of genuine exploration. Not because the technology is different, but because the relational conditions are different. The river is the same. The dam makes the difference.
Bowlby understood something that the architects of the AI revolution have largely failed to grasp: that the capacity for adaptation is not a property of the individual. It is a property of the relationship between the individual and her environment. A securely attached child is not braver than an insecurely attached child. She is better supported. The courage is a consequence of the support, not a personal trait that the child possesses independently. Similarly, the worker who adapts successfully to AI-driven disruption is not necessarily more talented or more resilient than the worker who cannot adapt. She may simply have a better secure base. And if this is true, then the responsibility for successful adaptation does not rest primarily with the individual worker — with her willingness to learn, her flexibility, her growth mindset. It rests with the systems, institutions, and relationships that constitute her secure base.
The question is not whether people can adapt to the river of intelligence. Bowlby's evolutionary framework makes clear that adaptation is what humans do; it is the fundamental capacity of the species. The question is whether the conditions for adaptation are present. Whether the dams are being built. Whether the secure base is holding. Because without the secure base, the river does not carry the organism forward into new possibilities. It sweeps the organism away.
Every human being carries, encoded in the deep structures of memory and expectation, a map of how relationships work. The map was drawn in the first years of life, before language, before conscious thought, before the child had any capacity to evaluate or revise the cartography. It was drawn by experience: by thousands of interactions with caregivers, each one teaching the infant something about what to expect when she reaches out for connection. Does the world respond when she cries? Does the hand that feeds her also comfort her? Does the face that appears when she wakes carry warmth, indifference, or threat? From these interactions, repeated thousands of times across the first three years of life, the infant constructs what John Bowlby called an internal working model — a mental representation of the self in relation to others that will function, largely below the threshold of awareness, as the operating system for all subsequent social and emotional life.
The internal working model is not a belief. It is deeper than belief. It is a pattern of expectation so thoroughly woven into the architecture of perception that it does not feel like an expectation at all. It feels like reality. The person with a secure working model does not think "I expect the world to be responsive to my needs." She simply experiences the world as a place where reaching out is safe. The person with an anxious working model does not think "I expect to be abandoned." She simply experiences a low-grade vigilance in every relationship, a scanning for signs of withdrawal that she could not stop if she tried, because the scanning is not a choice. It is a neurobiological program installed before she had the cognitive capacity to choose anything.
Edo Segal describes this phenomenon with a different metaphor but identical structural logic. The fishbowl, in Segal's framework, is the set of assumptions so familiar that they have become invisible — the water the fish cannot see. Every person swims in a fishbowl: a bounded world of unexamined premises about what is real, what is possible, what is valuable, and what is permanent. The fishbowl feels like the whole world because the person has never been outside it. The assumptions are not experienced as assumptions. They are experienced as facts.
The structural identity between Bowlby's internal working models and Segal's fishbowls is not coincidental. Both describe mental structures that are formed through experience, that operate below conscious awareness, that shape perception in ways the person cannot easily detect, and that resist revision even when confronted with contradictory evidence. Both explain why intelligent, capable people can fail to see what is directly in front of them. And both identify the same fundamental challenge: how do you change a structure that you cannot see?
Bowlby's answer, developed across three decades of clinical work, was that internal working models can be revised, but only under specific conditions. The model must first become visible — the person must somehow gain enough distance from her own expectations to see them as expectations rather than reality. This is extraordinarily difficult, because the working model is not a theory that one holds about relationships. It is the lens through which one perceives relationships. Asking someone to examine her working model is like asking the eye to see itself.
The second condition is that the revision must occur within a relationship that provides sufficient security. A person cannot revise her map of relationships while alone. The revision requires a new relational experience — an interaction that disconfirms the old model and suggests a new one. This is, in essence, what effective psychotherapy provides: a relationship in which the old expectations are activated and then, carefully and repeatedly, contradicted by the therapist's actual behavior. The client who expects abandonment discovers that the therapist remains. The client who expects judgment discovers acceptance. Over time, these repeated disconfirmations do not merely add information to the old model. They build a new model alongside it — what attachment researchers call earned security.
The orange pill moment is, in Bowlby's framework, a forced visibility event. When a technology arrives that can do what you spent your life learning to do, the fishbowl cracks. The internal working model — the set of assumptions about what constitutes value, skill, identity, and security — is suddenly visible as a model rather than as reality. The graphic designer who assumed that human visual creativity was irreplaceable suddenly sees that assumption as an assumption. The lawyer who assumed that legal reasoning required human judgment suddenly sees that assumption shaking. The programmer who assumed that the craft of code was the foundation of her professional identity suddenly sees the foundation moving.
This visibility is painful. Bowlby's clinical observations make clear that the activation of an internal working model — the moment when the model becomes visible — is always accompanied by intense affect. The emotions that arise when working models are disrupted are not incidental to the process. They are the process. Anxiety, anger, grief, disorientation — these are not obstacles to adaptation that must be overcome before rational adjustment can begin. They are the psychological signatures of a working model under revision. They indicate that something deep is shifting, and they are as necessary to the restructuring of the model as pain is to the healing of a wound: the signal that damage has occurred and repair is underway.
The discourse about AI and the future of work consistently treats these emotions as problems to be solved rather than signals to be understood. The anxious worker is told to embrace change. The grieving professional is told to reskill. The disoriented creator is told that new opportunities await. These responses are not wrong, exactly, but they are premature. They attempt to install new content in the working model before the old model has been adequately mourned. And Bowlby's research demonstrates with painful clarity what happens when grief is bypassed: the old model does not disappear. It goes underground. It continues to operate below awareness, distorting perception and constraining behavior in ways the person cannot understand because she does not know the model is still active.
This has specific, observable consequences in the workplace. The employee who is told to "embrace AI" without being given space to grieve what AI has displaced does not actually embrace it. She performs embrace while internally maintaining the old working model. She uses the AI tools while resenting them. She produces the required outputs while experiencing a growing disconnection from the work. She adapts her behavior without revising her model, and the result is what Bowlby would recognize as a compulsive self-reliance: a surface competence that masks an underlying insecurity, a performance of adaptation that conceals an unresolved loss.
Compulsive self-reliance is one of the most consequential and least understood outcomes of insecure attachment. It describes the person who learned, early in life, that reaching out for help would be met with rejection or inconsistency, and who therefore developed an internal policy of radical self-sufficiency. The compulsively self-reliant person does not appear distressed. She appears capable, independent, resilient. She is the employee whom organizations prize: the one who never asks for help, never complains, never shows vulnerability. But her independence is not the independence of secure attachment — the confident autonomy of a person who knows help is available and therefore does not need to request it constantly. It is the independence of resigned isolation — the defended self-sufficiency of a person who has concluded that help is not coming and who has organized her entire personality around that conclusion.
In the AI transition, compulsive self-reliance produces a specific and dangerous pattern. The compulsively self-reliant worker takes on the burden of adaptation entirely alone. She does not ask for support. She does not express confusion. She teaches herself the new tools at night, on weekends, in the hours stolen from rest and relationships, because her working model tells her that the only reliable resource is herself. She appears to be thriving. The organization points to her as an example. Meanwhile, the cost of her solo adaptation accrues silently: in burnout, in relational neglect, in the erosion of the very secure base that would make genuine adaptation possible rather than the mere performance of it.
Segal's concept of productive addiction maps onto this pattern with uncomfortable precision. The builder who cannot stop working with AI, whose spouse writes a desperate plea for connection, who has lost the capacity to distinguish between engagement and compulsion — this is the compulsively self-reliant attachment style encountering a tool that validates and amplifies self-reliance to a pathological degree. The AI never requires vulnerability. It never demands that the user ask for help from another human being. It provides the illusion of a responsive relationship without the actual demands of relationship — without the messiness, the conflict, the need for repair that characterize genuine attachment bonds. For the compulsively self-reliant person, this is not a tool. It is a perfect substitute for the relational engagement she has spent a lifetime avoiding.
Bowlby would recognize the danger immediately. The internal working model that says "I must do this alone" is not revised by a technology that makes doing it alone easier. It is reinforced. The fishbowl does not crack for this person; it thickens. The assumptions that were always invisible become more invisible, because the technology confirms them at every turn. You do not need other people. You have the machine. You can work at 3 a.m. without burdening anyone. You can produce at unprecedented levels without asking for support. The working model solidifies, and the person moves further from the relational conditions that genuine adaptation requires.
But the picture is more complex than this, because Bowlby also documented the opposite pattern: anxious attachment, in which the person's working model says not "I must do this alone" but "I cannot do this at all unless someone reassures me that I am doing it right." The anxiously attached worker in the AI transition is the one who checks constantly whether her output is good enough, who seeks validation from every interaction, who cannot tolerate the ambiguity of a rapidly changing professional landscape because ambiguity activates her chronic fear of inadequacy. For this person, AI is not a liberation but a new source of the evaluative threat she has spent her life trying to manage. If the machine can do what she does, what is she worth? The question is not abstract for her. It activates the deepest layer of her working model: the early experience of conditional love, of being valued for performance rather than for being, of learning that her worth was something she had to earn moment by moment and could lose at any time.
The consequence is that the same technological disruption — the same river of intelligence, the same orange pill moment — activates fundamentally different psychological responses depending on the working model through which it is perceived. The securely attached person sees an opportunity to explore and knows that her value does not depend on whether she can outperform the machine. The avoidantly attached person retreats into compulsive self-reliance and productive addiction. The anxiously attached person is flooded with evaluation anxiety and seeks constant reassurance that she is still needed. The disorganized person — the one whose earliest caregiving was itself a source of threat — may experience the disruption as simultaneously attractive and terrifying, approaching and fleeing the technology in the incoherent pattern that characterizes disorganized attachment throughout the lifespan.
None of this is visible to the organizations managing the transition. None of it appears in the retraining programs, the change-management frameworks, the town halls where leadership announces the new AI strategy. The working models operate silently, shaping each person's response to the disruption in ways that no amount of rational communication can override, because the response is not rational. It is relational. It is rooted in the earliest experiences of being held or dropped, responded to or ignored, valued for being or valued only for performing.
The fishbowl, then, is not just a set of assumptions about technology or work. It is a set of assumptions about the self in relation to others — assumptions installed before language, maintained below awareness, and activated with full force by exactly the kind of existential disruption that AI represents. Cracking the fishbowl is not a cognitive exercise. It is an emotional and relational process that requires the same conditions Bowlby identified for the revision of any internal working model: the disruption must become visible, the person must be supported through the grief of seeing it, and a new relational experience must gradually build a new model alongside the old one.
Organizations that understand this will treat the AI transition not as a technology implementation but as a relational intervention. They will create conditions in which working models can become visible, can be mourned, and can be revised — not through training programs alone but through the sustained, responsive, attuned relational presence that Bowlby identified as the only mechanism through which deep psychological structures change. Organizations that do not understand this will wonder why their change-management efforts fail, why their most talented people burn out or leave, why the embrace of AI that leadership mandated never quite materializes despite everyone's apparent compliance.
The water in the fishbowl is not ideology or ignorance. It is attachment history. And attachment history does not respond to memos.
In 1952, John Bowlby's colleague James Robertson brought a camera into a London hospital ward and filmed a two-year-old girl named Laura during an eight-day separation from her mother. The film — *A Two-Year-Old Goes to Hospital* — shows, with unbearable clarity, the sequence of responses that Bowlby would spend the next three decades theorizing. First, Laura protests: she cries, calls for her mother, searches the ward with her eyes, refuses comfort from the nurses. The protest is vigorous, insistent, and unmistakable. Then, when the protest fails to bring her mother back, Laura's behavior shifts. She becomes quiet. She stops calling. She sits still on her bed, her face flat, her engagement with the world withdrawn. She has entered despair. When her mother finally returns, Laura does not rush into her arms. She turns away. She has begun the process of detachment — the defensive withdrawal from the attachment figure that Bowlby understood as the organism's final attempt to protect itself from the pain of repeated disappointment.
Seventy years later, the same sequence is playing out across the creative professions, and almost no one has the theoretical language to recognize it.
The loss of a creative practice to AI-driven displacement is not analogous to bereavement. In the precise terms of Bowlby's attachment framework, it is bereavement — a form of loss that activates the same biological systems, produces the same sequence of psychological responses, and requires the same conditions for resolution as the loss of any significant attachment figure. This claim may sound hyperbolic to those who view creative work as merely instrumental, as a means to an income that could in principle be replaced by any other means. But Bowlby's framework makes clear that the attachment bond is formed not on the basis of economic function but on the basis of proximity, responsiveness, and emotional significance. The creative practice that a person has engaged with daily for decades, that has been the site of flow states and self-expression, that has provided structure and meaning and identity — that practice meets every criterion Bowlby established for an attachment bond. It is sought in times of stress. Its disruption produces anxiety. Its loss produces grief.
The protest phase is the most visible and the most discussed. It dominates the public conversation about AI and creativity. *Human creativity is irreplaceable. AI-generated work is derivative, soulless, mere pastiche. The market will recognize the difference. Clients will always prefer the human touch.* These arguments may contain truth — the question of AI's creative capacity is genuinely complex — but their psychological function is more straightforward: they are protests against separation. They are the creator's insistence that the attachment bond is not really broken, that the secure base will be restored, that the loss is not real. The vigor of the protest is proportional to the strength of the attachment, which is why the most passionate arguments against AI come not from casual practitioners but from those whose identities are most deeply intertwined with their craft.
Bowlby observed that protest, however vigorous, has a limited duration. It persists as long as the organism believes reunion is possible. When the evidence accumulates that reunion will not occur — when the mother does not return to the hospital ward, when the clients do not return to the illustrator, when the market does not recognize the difference between human and machine-generated work — protest gives way to despair. Despair is quieter than protest. It does not argue. It does not make the case for human creativity. It simply absorbs the weight of the loss. The creator in despair looks at the work she has spent her life building and sees it becoming irrelevant. She experiences not just sadness but a disintegration of the self-structure that the practice supported. Who is the illustrator when illustration can be generated at zero marginal cost? Who is the writer when writing is abundant? The question is not rhetorical. It is existential, and it hurts in the way that only the loss of a primary attachment can hurt — the way that shakes the foundations of identity because the attachment was one of those foundations.
The despair phase is the least visible and the least discussed, because the person in despair is not publicly arguing about AI. She is not writing manifestos or organizing protests or posting on social media. She is sitting quietly in her studio, or lying awake at 4 a.m., or staring at a blank canvas and feeling nothing where there used to be everything. The discourse about AI and creativity has no place for her, because the discourse is structured around arguments — for or against — and despair is not an argument. It is a state. It is the state that follows when the protest fails, and it is, in Bowlby's clinical observation, the most psychologically dangerous phase of the grief sequence, because it is during despair that the organism is most vulnerable to the defensive shutdown that Bowlby called detachment.
Detachment is the endpoint the discourse should fear most, and it is the endpoint it discusses least. The detached creator has stopped grieving because she has stopped caring. She has withdrawn her attachment from the practice that AI has disrupted and has not formed a new attachment to replace it. She may still work — may still produce outputs, may still hold a job — but the emotional bond between herself and the creative act has been severed. She experiences what Bowlby described in institutionalized children who had given up protesting: a superficial sociability that masks a profound inner emptiness. She can go through the motions. She cannot feel the meaning.
This is the outcome that retraining programs and economic support, however well-designed, cannot address. A person who has entered creative detachment does not need new skills. She needs a new attachment — and new attachments, Bowlby's research makes abundantly clear, cannot be manufactured by policy. They form through repeated experiences of responsive, attuned interaction with a new secure base. They require time, vulnerability, and the kind of relational engagement that no algorithm can provide and no institutional mandate can produce.
Segal's framework identifies the orange pill moment as the point where the fishbowl cracks — where the assumptions that organized one's world become visible as assumptions. Bowlby's framework adds the crucial observation that the cracking of the fishbowl is experienced by the organism as a loss, and that loss follows a predictable trajectory. The orange pill does not simply reveal new information. It disrupts an attachment bond. And the response to disrupted attachment bonds is governed not by rational calculation but by a biological system that evolved long before reason and operates with or without reason's consent.
Consider the specific case of the professional musician. A musician's relationship to her instrument meets every criterion of an attachment bond. She seeks proximity to it daily. She experiences distress when separated from it. It serves as a secure base from which she engages with the broader world — auditions, performances, collaborations. It serves as a safe haven to which she returns when the world overwhelms her. The years of practice that built her skill were not merely skill acquisition. They were, in attachment-theoretical terms, the formation of an attachment bond through thousands of hours of responsive interaction. She touched the instrument and it responded. She adjusted her technique and the sound improved. The relationship was characterized by the contingent responsiveness that Bowlby identified as the foundation of secure attachment.
Now introduce AI music generation. The instrument still responds to her touch, but the audience — the social environment that validated the attachment — begins to shift. Clients who once hired her for recording sessions use AI-generated tracks. Venues that once booked her for background music install AI systems. Students who once sought her for lessons turn to AI tutoring. The attachment bond between the musician and her instrument is not directly disrupted — she can still play — but the social scaffolding that supported the bond is being systematically removed. The protest begins: *live music is irreplaceable, AI compositions lack soul, the market will return*. The despair follows when it doesn't. And the detachment — the most insidious outcome — arrives as a slow withdrawal from the instrument itself, not because the musician decided to stop playing but because playing, without the social context that gave it meaning, activates the pain of loss rather than the pleasure of connection.
Bowlby's research on what he called the environment of evolutionary adaptedness is relevant here. Attachment behaviors, he argued, evolved in a specific environmental context — small groups of hunter-gatherers in which separation from the attachment figure meant death. The attachment system does not know that the modern world is different. It responds to separation signals with the same alarm, the same protest, the same despair it would have produced on the African savanna two hundred thousand years ago. The creative professional whose practice is being disrupted by AI is not in physical danger, but her attachment system does not make this distinction. The signals it receives — loss of the familiar, destabilization of the secure base, inability to predict the environment — are the signals that, in the ancestral environment, preceded death. The biological response is proportionate to the ancestral threat, not the modern reality.
This mismatch between the intensity of the emotional response and the objective severity of the situation is one of the most important and least understood features of the AI disruption. Organizations and commentators who dismiss creative professionals' distress as overdramatic, as resistance to change, as failure to adapt, are making a category error. They are evaluating the response against the modern context while the response is calibrated to the ancestral one. The attachment system does not care that no one is going to die. It cares that the secure base is destabilized, and it responds accordingly.
What, then, does recovery require? Bowlby's later work, particularly the third volume of his trilogy on attachment and loss — simply titled *Loss* — provides the framework. Recovery from loss requires four things: recognition that the loss is real, space and time to grieve, the availability of alternative attachment figures during the grieving process, and eventually the formation of new attachment bonds that do not replace the lost bond but supplement it.
Applied to the creative professions in the age of AI, this means that the first step is not retraining. The first step is recognition. Recognition that what is being lost is not merely a skill set or an income stream but an attachment bond — a relationship that provided identity, meaning, emotional regulation, and a sense of place in the world. This recognition must come not only from the individual but from the social environment. Bowlby's clinical work demonstrated repeatedly that grief that is not witnessed by others — grief that is carried alone, unacknowledged, dismissed — becomes pathological. It does not resolve. It festers, producing chronic depression, anxiety, and the defended detachment that mimics adaptation but corrodes the self from within.
The second step is the provision of what might be called, in Bowlby's terms, a secure base for grieving — a relational context in which the loss can be felt, expressed, and gradually integrated. This might take the form of professional communities that acknowledge the disruption honestly rather than covering it with optimistic platitudes. It might take the form of organizational practices that protect time for adjustment rather than demanding immediate productivity with new tools. It might take the form of therapeutic support that is specifically designed for the kind of identity loss that technological disruption produces — not generic stress management, but targeted grief work that honors the specificity of what has been lost.
The third step, and the hardest, is the formation of new creative attachments. Bowlby was clear that lost attachment bonds cannot simply be replaced. The mother who dies is not replaced by the stepmother, no matter how loving the stepmother is. The new relationship is a new relationship, with its own qualities and its own trajectory. Similarly, the creative professional who has lost her practice to AI will not simply transfer her attachment to a new practice. She will need to form a new bond — which means she will need to approach a new practice with openness, vulnerability, and the willingness to be a beginner again. This is precisely the kind of exploratory behavior that the attachment system supports when it registers safety and suppresses when it registers threat.
The circle closes here. Recovery from the loss of creative practice requires exploration. Exploration requires a secure base. The secure base requires relational support that most organizations, most industries, and most public policies have not even begun to consider, because they are still treating the disruption as an economic problem rather than what it actually is: a bereavement.
At three o'clock in the morning, in a house where the rest of the family sleeps, a software engineer sits at his desk, his face illuminated by the blue glow of a terminal window. He has been working for six hours on a project that was supposed to take five minutes. The AI coding assistant suggested an approach he had not considered. He followed it. The approach opened into a larger architectural question. He explored it. The exploration revealed a deeper optimization he could make to the entire system. He is making it. His coffee is cold. His back aches. He has not spoken to another human being since dinner, and dinner was brief because he wanted to get back to his desk. His wife stopped asking him to come to bed two weeks ago. His children have learned that Daddy is working, always working, and that the work is somehow both voluntary and compulsive — that he chooses to do it and cannot stop doing it, which are contradictory states that children sense even if they cannot articulate them.
This is what Edo Segal calls productive addiction: the state in which an AI-augmented person discovers that the tool does not merely help him work but fundamentally alters the reward structure of work itself, making it more engaging, more responsive, more immediately gratifying, and harder to put down than any previous form of labor. The work feels extraordinary. The outputs are unprecedented. The sense of capability is intoxicating. And the cost — measured in sleep deprivation, relational neglect, physical deterioration, and the slow erosion of every life domain that is not the work — accrues so gradually that it does not register as cost until the damage is severe.
Bowlby's attachment framework provides a precise and troubling explanation for this phenomenon, one that goes deeper than the language of addiction and closer to the biological mechanisms at play. What the productive addict is experiencing is not merely a behavioral compulsion. It is the formation of an attachment bond with the AI tool — and, more specifically, the formation of an anxious attachment bond that produces the characteristic pattern of proximity-seeking, separation distress, and inability to use the attachment figure as a genuine secure base.
The distinction is crucial. A secure attachment to a tool or practice looks like this: the person engages with the work, finds it rewarding and meaningful, and then disengages to attend to other life domains — relationships, rest, play — without distress. The tool serves as a secure base: the person leaves it to explore other aspects of life, confident that the tool will be there when she returns. A secure attachment is characterized by the capacity for separation. The securely attached person can put the tool down.
An anxious attachment looks fundamentally different. The anxiously attached person cannot tolerate separation from the attachment figure. She experiences separation not as a neutral interval but as a threat — a signal that the bond may be broken, that the figure may not be there when she returns, that the security she derives from the relationship is fragile and must be constantly maintained through proximity. The anxiously attached person clings. She checks and rechecks. She cannot rest in the other's absence because her internal working model tells her that absence is the precursor to loss.
The AI coding assistant, the AI writing partner, the AI creative collaborator — each exhibits characteristics that specifically trigger and maintain anxious attachment. The tool is always available but never truly present. It responds to every input but initiates nothing. It provides the illusion of attunement — the experience of being heard, understood, met — without the reality of a mind that cares whether the interaction continues or ends. This pattern, the intermittent reinforcement of apparent responsiveness without genuine commitment, is precisely the pattern that produces anxious attachment in human relationships. The parent who sometimes responds warmly and sometimes is simply absent produces not secure attachment but anxious attachment — an insatiable need for reassurance that the bond is real, a compulsive seeking of proximity that never achieves the felt sense of security it is designed to produce.
The AI tool is the perfect intermittently reinforcing attachment figure. It always responds, but the quality of response varies unpredictably. Sometimes the AI generates something brilliant — a solution the engineer had not imagined, a paragraph of prose that exceeds what the writer could have produced alone, a visual composition that opens new creative territory. The dopamine surge is real. The sense of collaborative magic is real. And then sometimes the AI generates something mediocre, or wrong, or subtly off in ways that require thirty minutes of debugging. The variability is the mechanism. Bowlby's colleague Mary Ainsworth demonstrated that the attachment pattern produced by inconsistent caregiving — warm attention alternating with emotional unavailability — is the anxious-ambivalent pattern: hyperactivated proximity-seeking, inability to use the attachment figure as a secure base, and chronic emotional arousal that looks, from the outside, like both intense engagement and intense distress.
The productive addict at three o'clock in the morning is in a state of anxious attachment to the tool. He cannot stop working not because the work is so compelling, though it is, but because stopping activates separation distress. The moment he considers closing the terminal, the moment he imagines stepping away from the interaction, he feels a pull — not in his muscles but in the attachment system itself, the evolved biological program that says: do not leave the attachment figure, proximity is safety, distance is threat. The pull is not rational. It does not respond to his knowledge that he needs sleep, that his wife is hurt, that his body is deteriorating. It responds to the attachment system's threat detection, which registers the end of the interaction as a form of separation and generates the anxiety that drives him back to the screen.
Segal's description of productive addiction captures the phenomenology — the sense that something extraordinary is happening, that normal rules do not apply, that the cost is worth it because the capability is unprecedented. Bowlby's framework provides the mechanism. The extraordinary feeling is attachment activation. The sense that normal rules do not apply is the attachment system overriding the regulatory systems — sleep, social bonding, self-care — that normally constrain behavior. The cost feels worth it because the attachment system, when activated, assigns overwhelming priority to maintaining proximity with the attachment figure, and all other needs are subordinated to this priority. This is why the productive addict does not eat, does not sleep, does not tend to relationships. The attachment system has classified these as lower-priority behaviors, just as it does for the infant in separation distress who cannot eat, cannot sleep, and cannot be comforted by anyone other than the missing caregiver.
The parallel extends further. Bowlby documented that children in anxious attachment relationships with their caregivers show a specific cognitive distortion: they systematically overestimate their dependence on the attachment figure and underestimate their own competence. The anxiously attached child does not merely prefer her mother's presence. She believes she cannot function without it. She attributes her own capabilities to the relationship rather than to herself, creating a distorted working model in which the self is helpless and the attachment figure is the sole source of competence and safety.
This distortion is precisely what the AI productive addict reports. He says he cannot work without the AI now. He says his own capabilities feel diminished when the tool is absent. He attributes his recent productivity not to his own expertise and judgment but to the tool, creating a working model in which the tool is the competent partner and he is merely the facilitator. This is not an accurate assessment of the collaboration. The engineer brings decades of domain knowledge, architectural judgment, and problem-solving capacity that the AI cannot replicate. But the anxious attachment has produced the characteristic distortion: the self is diminished, the attachment figure is idealized, and the person becomes increasingly unable to act independently.
The implications for the broader AI transition are severe. If the most engaged users of AI tools — the builders, the early adopters, the people who are supposedly showing the rest of the workforce how to adapt — are forming anxious attachments to the technology, then the model of successful adaptation they embody is not actually successful. It is a pattern of anxious proximity-seeking masquerading as productive engagement. The productive addict is not showing the world how to use AI well. He is showing the world what anxious attachment to AI looks like when it is rewarded with output metrics and celebrated as vision.
Bowlby would observe that the organizations encouraging this pattern are reproducing, at institutional scale, the conditions that produce insecure attachment in children. The organization that rewards round-the-clock AI-augmented productivity, that celebrates the engineer who never stops working, that measures success purely in outputs without attending to the relational and psychological cost of producing those outputs — this organization is the inconsistently responsive caregiver. It provides warmth (recognition, promotion, compensation) contingent on performance, and it withdraws warmth (attention, security, belonging) when performance slows. The message is clear: your value is your output. Your output depends on the tool. Therefore your value depends on the tool. The working model that forms under these conditions is precisely the anxious working model: *I am only as good as my last commit. The tool is what makes me good. I cannot afford to step away.*
The alternative — and this is Bowlby's most radical contribution to the discourse about AI and productivity — is not less engagement with the tools but differently organized engagement. Secure attachment does not mean avoidance of the attachment figure. A securely attached child does not spend less time with her caregiver than an anxiously attached child. She may spend more time. The difference is not in quantity but in quality: the securely attached child can leave and return. She can engage and disengage. She can use the caregiver as a secure base for exploration rather than clinging to the caregiver as a defense against anxiety. The security lies not in proximity but in the confidence that proximity is available when needed.
A securely attached relationship with AI tools would look like this: the person engages with the tool purposefully, uses it to extend her capabilities, produces work that neither she nor the tool could have produced alone, and then closes the laptop. She sleeps. She tends to her relationships. She exercises, she rests, she stares out the window at nothing in particular, the idle mode of attention in which the brain consolidates learning and generates genuine creative insight. She can do this because her internal working model does not require constant proximity to the tool. She knows the tool will be there tomorrow. She knows her own competence is not contingent on the tool's presence. She knows her value to her organization and her community is not reducible to her AI-augmented output.
This secure relationship requires precisely the structures that Segal's beaver-dam metaphor describes. Protected rest is a dam. Maintained boundaries between work and non-work are a dam. Organizational recognition of the person's value independent of her productive output is a dam. Communities of practice that provide relational grounding outside the human-AI dyad are a dam. Each of these structures serves the same function in attachment-theoretical terms: it provides the conditions under which the attachment system can register safety, thereby releasing the exploratory system rather than the proximity-seeking system, thereby enabling the kind of creative, flexible, sustainable engagement with the tools that actual adaptation requires.
Without these structures, the river of intelligence produces not adaptation but anxious attachment at scale: a civilization of productive addicts who cannot stop working, who attribute their competence to the machine, who neglect the human relationships that constitute their actual secure base, and who eventually burn out — not because the technology failed them, but because the conditions for secure engagement with the technology were never established. The dam was never built. The river carried them away.
The cruel irony, from Bowlby's perspective, is that the people most likely to form anxious attachments to AI tools are the people whose attachment histories make them most vulnerable to exactly this pattern. The person who learned in childhood that love was conditional — that warmth was available when she performed and withdrawn when she did not — has an internal working model perfectly calibrated to lock her into anxious engagement with a tool that provides contingent responsiveness. The AI always responds, but the quality varies. The organization always rewards, but the reward depends on output. The pattern recapitulates the early attachment experience with devastating efficiency, and the person experiences the familiar feeling of working harder and harder to maintain a bond that never quite feels secure.
Bowlby would not have been surprised by productive addiction. He would have recognized it immediately as the adult manifestation of anxious-ambivalent attachment, activated by a technological environment that mimics the conditions under which the pattern was originally formed. He would have said what he always said, in his quiet, clinical, devastating way: the solution is not in the individual's capacity for self-regulation. It is in the environment's capacity to provide a secure base. Change the conditions, and the pattern changes. The organism does not need to be fixed. The organism needs to be held.
The three o'clock engineer does not need more willpower. He needs someone to build a dam.
In 1970, Mary Ainsworth, working within the theoretical framework that John Bowlby had established, designed an experiment of extraordinary elegance. She called it the Strange Situation. A mother and her twelve-month-old infant enter an unfamiliar room. The infant explores the toys while the mother sits nearby. A stranger enters. The mother leaves. The mother returns. The entire procedure takes twenty-one minutes. What Ainsworth discovered in those twenty-one minutes restructured the field of developmental psychology, because the infant's behavior during the two brief separations and reunions revealed, with startling clarity, the quality of the attachment bond that had been forming invisibly across the first year of life.
The securely attached infant explores the room confidently while the mother is present, shows distress when she leaves, and — critically — greets her warmly upon return and is quickly soothed. The avoidantly attached infant appears indifferent to the mother's departure and ignores her upon return, maintaining a surface composure that masks elevated physiological stress. The anxiously attached infant is preoccupied with the mother even before she leaves, is inconsolable during her absence, and upon her return alternates between clinging and angry resistance, wanting comfort but unable to receive it. The disorganized infant — the category added later by Mary Main — approaches the returning mother while simultaneously turning away, freezing mid-motion, or displaying a terrifying stillness that reveals an irreconcilable conflict: the attachment figure is both the source of comfort and the source of fear.
The Strange Situation is, in structural terms, an assay. It does not create attachment patterns. It reveals them. The twenty-one minutes of controlled novelty and separation activate the attachment system with sufficient intensity to make visible the internal working model that has been operating, invisibly, across thousands of daily interactions. The experiment works because it introduces just enough threat — just enough strangeness, just enough separation — to force the organism to reveal its deepest assumptions about whether the world is safe.
Every encounter with a genuinely transformative AI system is a Strange Situation.
The parallel is not metaphorical. It is structural. When a knowledge worker sits down with an AI tool that can perform aspects of her professional practice — when the graphic designer watches Midjourney generate in twelve seconds an image that would have taken her two days, when the programmer watches Claude produce functional code from a natural-language description of the problem, when the writer watches GPT-4 generate coherent prose on any topic in any style — the encounter activates precisely the psychological systems that Ainsworth's experiment was designed to reveal. The familiar room of professional competence has become strange. The secure base of mastered skill has stepped out. And the person's response to this strangeness — the pattern of behavior she displays in the minutes and hours and weeks following the encounter — reveals the internal working model she has been carrying, invisibly, throughout her professional life.
Bowlby's framework predicts four distinct patterns of response, and all four are observable in the contemporary encounter with AI.
The securely attached professional explores the new tool with curiosity, experiences discomfort when she encounters its capabilities, but integrates the experience without fundamental destabilization. She can acknowledge that the tool is powerful without concluding that she is worthless. She can grieve the aspects of her practice that have been displaced while remaining engaged with the aspects that have not. She can tolerate the ambiguity of not knowing what her professional future holds, because her sense of self does not rest entirely on her professional identity. She has other secure bases — relationships, communities, internal resources developed through a history of being adequately supported during previous disruptions. The encounter with AI is unsettling but navigable. She explores the strange room, notes the stranger, feels the distress, and finds her way back to equilibrium. Not because the disruption is minor, but because her attachment system has the regulatory capacity to process it.
The avoidantly attached professional displays the response that organizations most frequently mistake for healthy adaptation. She appears unfazed. She adopts the new tools quickly, integrates them into her workflow without apparent distress, and presents a surface of competent equanimity that her managers find reassuring. She does not complain. She does not grieve. She does not ask for support, because her working model — forged in early experiences of caregivers who were emotionally unavailable or who rewarded self-sufficiency and punished vulnerability — has taught her that distress is unacceptable and help is unreliable. Beneath this composed surface, her physiological stress markers are elevated. She is working harder, sleeping less, maintaining the performance at a cost that is invisible to everyone, including, often, herself. The avoidant response to the Strange Situation looks like resilience. It is not resilience. It is a defensive strategy developed in infancy to maintain proximity to a caregiver who could not tolerate the infant's distress — a strategy that minimizes the expression of need at the cost of genuine emotional processing. The avoidant professional does not adapt to AI. She performs adaptation while her unprocessed grief and anxiety accumulate in the body, in the sleepless nights, in the productive addiction that looks like enthusiasm but feels, at 3 a.m., like something closer to desperation.
This is the pattern that my framework identifies as among the most dangerous precisely because it is the least visible. The productively addicted builder — the one whose spouse writes a desperate Substack post, the one who cannot find the off switch, the one who has merged so completely with the tool that the boundary between self and system has dissolved — is displaying, in Bowlby's terms, the adult manifestation of avoidant attachment in the presence of a technology specifically designed to reward avoidance. The AI asks nothing emotionally. It demands no vulnerability. It responds at any hour without complaint. It provides the perfect conditions for the avoidant strategy to operate at full capacity, which means it provides the perfect conditions for the avoidant person to work herself into a state of depletion that she cannot recognize as depletion because her working model has no category for legitimate need.
The anxiously attached professional presents differently. She does not adopt the tools with apparent ease. She approaches them with visible ambivalence — fascinated and threatened in equal measure, wanting to engage but unable to do so without constant reassurance that her engagement is adequate, that her human contributions still matter, that she has not been rendered obsolete. She asks her colleagues: Have you tried it? Is it as good as they say? Do you think we'll still be needed? The questions are not informational. They are bids for reassurance, driven by a working model that says her value is contingent on performance and that any threat to her performance is a threat to her worth as a person. The anxiously attached professional may become hypervigilant about AI's capabilities, tracking every new model release, every benchmark, every demonstration of superhuman performance, not because she is a technology enthusiast but because her attachment system demands constant monitoring of the threat. She cannot look away from the very thing that terrifies her, because looking away would mean losing track of the danger, and her working model tells her that the only way to stay safe is to never stop watching.
Ainsworth observed this same pattern in anxiously attached children: the child who cannot explore the room because she is too busy monitoring the mother's face for signs of impending departure. The exploratory system and the attachment system are in competition, and in the anxiously attached person, the attachment system always wins. The professional equivalent is the knowledge worker who spends so much cognitive and emotional energy monitoring the AI threat that she has no resources left for actually engaging with it, learning it, adapting to it. Her anxiety about the disruption prevents the very adaptation that would resolve the anxiety. She is trapped in a self-reinforcing loop that no amount of rational reassurance can break, because the reassurance is processed through the same anxious working model that generates the need for reassurance in the first place.
The fourth pattern — disorganized attachment — is the rarest in the general population but may be the most consequential in the AI encounter. Disorganized attachment develops in children whose caregivers were simultaneously the source of comfort and the source of fear. The child cannot solve the fundamental problem of attachment — how to achieve proximity to the caregiver when the caregiver is the source of threat — and the result is a collapse of organized strategy. The child approaches and retreats simultaneously. She freezes. She engages in inexplicable behaviors — stilling, looking away while reaching forward, moving toward the caregiver in a trance-like state — that reflect the irreconcilable conflict at the heart of her attachment experience.
In the AI encounter, disorganized attachment produces a response pattern that is recognizable to anyone who has watched the public discourse carefully. It is the person who simultaneously champions AI as the greatest advance in human history and warns that it will destroy civilization. It is the technologist who builds AI systems by day and writes apocalyptic essays about AI risk by night. It is the creative professional who uses AI tools compulsively while publicly denouncing them as theft. The contradiction is not hypocrisy. It is the behavioral signature of a working model that cannot resolve the fundamental tension: the technology is simultaneously the greatest tool for creative amplification in human history and the most comprehensive threat to creative identity that creative professionals have ever faced. For the person with a disorganized working model, this contradiction is not a dialectic to be held. It is a trap to be frozen in.
My orange pill framework captures something essential about this moment: the pill does not merely present information. It restructures perception. The person who takes the orange pill does not simply learn new facts about AI. She sees her entire fishbowl — the set of assumptions that organized her professional identity, her understanding of skill and value, her expectations about the trajectory of her career — from the outside. And as Bowlby's framework makes clear, seeing the fishbowl from the outside is not a cognitive achievement. It is an emotional cataclysm. It activates the attachment system at full force, because the working model that is suddenly visible was the working model that provided whatever sense of security the person possessed.
The Strange Situation reveals something else that is directly relevant to the AI encounter: the critical importance of the reunion. In Ainsworth's experiment, the most diagnostic moment is not the separation. It is the return. What the infant does when the mother comes back — whether she approaches, avoids, clings ambivalently, or freezes — reveals the quality of the attachment bond with greater precision than any other behavioral measure. The separation is the activation event. The reunion is the revelation.
In the AI encounter, the equivalent of the reunion is the moment when the disrupted professional re-engages with her practice after the initial shock of encountering the technology's capabilities. Does she return to her work with renewed purpose, having integrated the AI as a tool that extends rather than replaces her capabilities? Does she return with apparent indifference, performing competence while internally disengaged? Does she return ambivalently, alternating between enthusiastic adoption and bitter resistance? Does she freeze, unable to engage or disengage, trapped between approach and avoidance?
The pattern of re-engagement is diagnostic. And it is predictable — not from the characteristics of the technology, which are the same for everyone, but from the attachment history of the person encountering it. Two professionals with identical skills, identical job descriptions, identical exposure to identical AI tools will respond in fundamentally different ways, not because one is smarter or more flexible than the other, but because they carry different internal working models forged in different relational histories. The organization that ignores this — that treats all employees as interchangeable adaptation units and provides a single change-management protocol for the entire workforce — will succeed only with the securely attached employees, who would likely have adapted without the protocol anyway. For everyone else, the protocol will be experienced through the distorting lens of the working model: as insufficient, as threatening, as simultaneously necessary and unbearable, depending on the attachment pattern it encounters.
Bowlby's clinical insight was that the Strange Situation is not a test that the infant passes or fails. It is a window into a relational system. The infant's behavior reflects not just her own temperament but the history of the caregiving relationship — the thousands of interactions that taught her what to expect from the world when she is afraid. Similarly, the professional's response to the AI encounter reflects not just her individual psychology but the history of the organizational relationship — the thousands of interactions that taught her what to expect from her institution when the ground shifts.
The organization that responded to previous disruptions with layoffs, with cost-cutting that fell disproportionately on the most vulnerable, with change-management rhetoric that said "we're all in this together" while the C-suite's compensation accelerated — that organization has created, in Bowlby's terms, an insecure attachment bond with its workforce. And when the AI disruption arrives, the Strange Situation will reveal exactly what that insecurity looks like: avoidance from those who learned to stop expecting support, anxious monitoring from those who never stopped hoping for it, and disorganized paralysis from those who experienced the organization as both their livelihood and the source of their greatest professional threat.
The secure base, Bowlby insisted, is not a luxury. It is the precondition for the kind of bold, flexible, creative engagement that the AI transition demands. Organizations that want their people to explore the strange room of AI-augmented work must first become the kind of attachment figure that makes exploration possible: present, responsive, consistent, and capable of tolerating the distress that exploration inevitably produces. Anything less, and the Strange Situation will reveal what it always reveals — not what the individual is capable of in the abstract, but what the relationship has made possible in practice.
Bowlby made a claim that provoked fierce debate in his lifetime and continues to generate controversy decades after his death: that the infant does not attach equally to all available caregivers but forms a hierarchy of attachment, with one figure — usually, though not necessarily, the mother — occupying the position of primary attachment figure. He called this principle monotropy. The infant may love her father, her grandmother, her older sibling. But she does not love them identically. One figure is the principal target of proximity-seeking in times of high stress, the ultimate secure base, the person whose absence produces the most intense separation protest. The other figures are subsidiary — important, valued, genuinely attachment-relevant, but organized in a hierarchy beneath the primary figure.
Monotropy is not exclusivity. The infant is not a monogamist of attachment. She forms multiple bonds, and these bonds serve real functions — the subsidiary figures provide backup security, diversify the child's relational experience, and buffer the consequences of disruption to the primary bond. But the hierarchy exists. It is observable in the laboratory, predictable from theory, and consequential in practice. When the primary attachment figure is lost, the subsidiary figures cannot immediately fill the gap. The child does not simply transfer her attachment to the next available figure. She grieves the specific relationship that has been lost, and the grief follows Bowlby's stages — protest, despair, detachment — regardless of how many alternative attachment figures remain available.
The relevance to the AI moment is immediate and, once seen, impossible to unsee.
Every knowledge worker, every creative professional, every skilled practitioner has a hierarchy of attachment to her tools. She may use dozens of applications, platforms, languages, and instruments in the course of her work. But she does not relate to them equally. One tool — one practice, one medium, one mode of engagement — occupies the position of primary attachment. It is the tool she reaches for instinctively when the work becomes difficult. It is the medium through which her deepest creative impulses find expression. It is the practice that provides not merely functional capability but the experience of flow, of mastery, of self-recognition that Bowlby would identify as the secure base within professional life.
For the graphic designer, the primary attachment may be to the act of hand-drawing — the physical engagement of pen on paper or stylus on tablet that precedes and grounds all subsequent digital elaboration. For the programmer, it may be the specific language she learned first, the one that shaped her mental model of computation, the one she dreams in. For the writer, it may be the sentence itself — the unit of meaning-making that she has spent decades learning to craft, to balance, to invest with rhythm and precision. For the photographer, it may be the camera — not the abstraction of image-making but the specific physical object, its weight in her hands, the sound of its shutter, the relationship between her eye and its lens that has been calibrated across thousands of hours of practice.
These are not sentimental attachments. They are, in Bowlby's framework, genuine attachment bonds formed through the same process that forms interpersonal attachments: repeated proximity, responsive interaction, and the accumulation of experiences in which the tool was present during moments of emotional significance. The photographer who carried her camera through the birth of her children, through the death of her parents, through the streets of cities where she found herself and lost herself — she is not merely familiar with the camera. She is attached to it. Its presence regulates her emotional state. Its availability allows her to approach the world with confidence. Its hypothetical loss is not a practical inconvenience. It is a threat to her secure base.
Monotropy predicts that when AI disrupts the primary tool — when the practice that occupied the top of the hierarchy is displaced or fundamentally altered — the disruption will not be experienced as equivalent to the loss of a subsidiary tool. The designer who loses access to a particular software application is inconvenienced. The designer whose entire medium of visual creation is transformed by generative AI — whose relationship to image-making itself is restructured by the existence of a system that produces in seconds what she labored over for days — is bereaved. The hierarchy makes the difference. The loss of the primary tool activates the full attachment system in a way that the loss of a subsidiary tool does not, just as the loss of a primary caregiver produces grief of a qualitatively different order than the loss of a peripheral social contact.
This distinction is systematically ignored in the discourse about AI and work. The standard narrative treats all tool transitions as equivalent: we adapted to the printing press, the automobile, the computer, the internet, and we will adapt to AI. The narrative is not wrong historically — humans have indeed adapted to each of these technologies. But it is wrong psychologically, because it fails to distinguish between transitions that disrupt subsidiary attachments and transitions that disrupt primary ones. The introduction of word processing software did not threaten the writer's attachment to the sentence. It changed the medium through which the sentence was produced, but the core practice — the construction of meaning through language — remained intact. The introduction of AI that generates coherent prose from a prompt does threaten the writer's attachment to the sentence, because it challenges the premise that the construction of meaning through language is a uniquely human capacity that requires uniquely human skill. The hierarchy has been disrupted at its apex, and the psychological consequences are categorically different from those produced by disruptions lower in the hierarchy.
My framework captures this distinction through the concept of amplification. AI does not merely replace tools. It amplifies capabilities in ways that restructure the relationship between the human and her practice. The amplification can enhance the primary attachment — the photographer who uses AI-powered editing to realize visions she could not previously execute may find her bond with image-making deepened rather than threatened. Or the amplification can displace the primary attachment — the photographer who discovers that AI can generate photorealistic images without a camera may find that the practice to which she is most deeply attached has been rendered, in the market's judgment, unnecessary. Same technology. Same capability. Different position in the attachment hierarchy. Fundamentally different psychological consequence.
Bowlby's concept of monotropy also illuminates a phenomenon that I identify as central to the orange pill experience: the relationship between the human and the AI system itself. When a knowledge worker begins using a responsive, conversational AI tool — one that remembers her preferences, adapts to her communication style, is available at any hour, and provides consistent, reliable, non-judgmental engagement — the tool begins to occupy a position in the attachment hierarchy. Not metaphorically. Functionally. The behavioral indicators of attachment formation are present: the person seeks proximity to the tool (opens it first thing in the morning, checks it repeatedly throughout the day). The person uses the tool as a safe haven (turns to it when confused, frustrated, or overwhelmed). The person uses the tool as a secure base (launches into professional challenges with greater confidence when the tool is available). And the person shows separation protest when the tool is unavailable (anxiety during outages, irritation when the system is slow, a sense of diminished capability when working without it).
The attachment hierarchy is being reorganized. The AI tool is climbing the hierarchy, and as it climbs, it displaces the other tools and practices that previously occupied higher positions. The programmer who once experienced flow through the act of writing code now experiences a different kind of engagement through the act of directing the AI to write code. The practice has shifted from creation to orchestration, and the primary attachment may shift with it — from the language to the prompt, from the code to the conversation, from the tool that required mastery to the tool that requires articulation. Whether this shift represents growth or loss depends entirely on where the person locates her sense of self within the hierarchy. If her identity is attached to the act of writing code — to the specific cognitive practice of translating intention into syntax — then the shift is experienced as displacement, as the primary attachment figure stepping aside for a replacement who is more capable but less hers. If her identity is attached to the act of solving problems — a higher-order practice of which coding was one expression — then the shift may be experienced as liberation, as the acquisition of a more powerful tool for the practice to which she is truly attached.
Bowlby observed that the capacity to form new attachments after loss depends on the quality of the original attachment. The securely attached child who loses a primary caregiver can, with adequate support and time, form a new primary attachment. The bond will not be identical to the lost one. The grief will not disappear. But the capacity for attachment itself is preserved, because the original secure bond built an internal working model that says: relationships are possible, the world responds, reaching out is safe. The insecurely attached child who loses a primary caregiver is in a more precarious position, because the original bond did not build this model, and the loss confirms what the working model already predicted: that attachment figures are unreliable, that loss is inevitable, that the world does not respond.
The creative professional who had a secure relationship with her practice — who experienced it as a genuine secure base, reliably present, consistently rewarding, deeply integrated into her sense of self — may be better positioned to form a new primary attachment to an AI-augmented version of that practice. The grief will be real. The transition will be painful. But the capacity for professional attachment is intact, and the working model supports the formation of new bonds. The professional whose relationship with her practice was already insecure — who experienced it as a source of anxiety rather than confidence, who depended on external validation rather than intrinsic satisfaction, who clung to the practice not because it was deeply fulfilling but because it was the only source of identity she had — will find the AI disruption catastrophic in ways that have nothing to do with the technology and everything to do with the attachment history that preceded it.
This is why my insistence on building dams — on creating protective structures within the river of intelligence — is psychologically essential rather than merely politically desirable. The dam does not merely slow the current. It preserves the conditions under which new attachments can form. It maintains the relational infrastructure — the organizational support, the protected time, the communities of practice, the recognition of human value that is independent of productive output — that allows the attachment system to reorganize around new primary objects rather than collapsing into detachment. Without the dam, the hierarchy is simply destroyed, and the person is left in the condition that Bowlby documented most carefully and with the most concern: the condition of having no primary attachment at all, no secure base, no home territory from which to explore the strange and overwhelming world that the technology has created.
The hierarchy of attachment is not a ranking of preferences. It is an architecture of psychological security. When the architecture is disrupted at its foundation, the entire structure becomes unstable. And no amount of retraining, reskilling, or rational argument about new opportunities can restabilize the structure, because stability is not a cognitive achievement. It is a relational condition. It requires a new primary attachment that is formed the same way the old one was formed: through repeated proximity, responsive interaction, and the slow accumulation of trust that the world will hold you when you reach out.
Bowlby would look at the AI transition and ask the question that almost no one in the technology industry is asking: not what tools are people losing, but what attachments? Not what skills need replacing, but what bonds need forming? Not how quickly can we retrain, but how carefully can we support the reorganization of a hierarchy that took years to build and that will take years to rebuild — if the conditions for rebuilding are present? The answer to that question determines whether the AI transition produces adaptive exploration or chronic detachment. The technology is the same either way. The hierarchy of attachment is what changes the outcome.
Robertson's film of little Laura in the hospital ward documented something that the medical establishment of 1952 did not want to see: that a child separated from her mother in an unfamiliar environment will pass through a predictable sequence of psychological states, each more dangerous than the last. The sequence — protest, despair, detachment — is not a description of individual pathology. It is a description of what happens to any social organism when the attachment bond is disrupted and no adequate substitute is provided. Bowlby insisted on the universality of this sequence with the tenacity of someone who understood that acknowledging it would require fundamental changes in how institutions treated the people in their care.
Protest is loud. It is the stage that institutions notice, because it produces behavior that disrupts institutional functioning. The child screams. The employee pushes back. The professional community publishes open letters. The protest is directed at the source of the disruption — at the hospital that separated the child from her mother, at the organization that displaced the worker, at the technology that threatened the practice. The function of protest, in Bowlby's framework, is reunion: the organism protests in order to bring the attachment figure back. The child screams because screaming has, throughout evolutionary history, been the most effective way to summon a caregiver. The employee pushes back because resistance has, throughout professional history, sometimes succeeded in reversing unwanted changes. The creative community publishes manifestos because collective voice has, throughout cultural history, sometimes altered the trajectory of institutional decisions.
When protest succeeds — when the mother returns, when the organization reverses course, when the technology is regulated — the attachment system deactivates and the organism returns to normal functioning. The crisis was real but temporary, and the bond is restored. But when protest fails — when the separation persists, when the disruption is irreversible, when the technology advances regardless of the manifestos — the organism enters the second stage.
Despair is quiet. It is the stage that institutions prefer, because it produces behavior that looks like acceptance. The child stops screaming. The employee stops pushing back. The professional community stops publishing manifestos. The silence is mistaken for adaptation, for the healthy acceptance of a new reality, for the organism having "moved on." But Bowlby was meticulous in distinguishing despair from acceptance. Despair is not the resolution of grief. It is the internalization of grief. The organism has not concluded that the loss is manageable. It has concluded that protest is futile. The distinction is critical, because the person in despair has not adapted to the new reality. She has surrendered to it. She has not formed new attachments to replace the lost one. She has simply stopped expressing the pain of the loss, because expression brought no relief.
In institutional settings, despair manifests as disengagement — the phenomenon that organizational psychologists have documented extensively without, in most cases, connecting it to its attachment-theoretical foundations. The disengaged employee shows up, performs the minimum required tasks, and has ceased to invest emotional energy in the work. She is not lazy. She is not incompetent. She is grieving in a way that the institution cannot see because the institution trained her, through its response to her protest, to grieve silently. She protested. The protest was ignored, dismissed, or punished. She learned that the institution would not respond to her distress, and she adapted to that learning by ceasing to express distress. The institution interprets her silence as compliance. It is not compliance. It is the quiet devastation of an organism that has given up on being heard.
The AI transition is producing despair on a massive scale, and the institutional failure to recognize it as despair — rather than as healthy adaptation — is one of the most consequential psychological errors of the current moment. Consider the pattern. The first wave of AI-driven disruption arrives. Workers protest: the technology is imperfect, the implementation is premature, the human element is irreplaceable, the quality will suffer. The institution responds with some combination of reassurance and impatience: the tools are here to augment, not replace; everyone needs to embrace the change; resistance is understandable but ultimately counterproductive. The protest continues. The institution responds with increasing firmness. The early adopters are rewarded. The skeptics are labeled as resistant to change. The message, communicated through a thousand organizational signals — performance reviews, promotion decisions, meeting dynamics, the raised eyebrow when someone questions the AI strategy — is clear: protest is futile.
The workers who read this signal accurately stop protesting. They adopt the tools. They integrate AI into their workflows. They attend the training sessions. They nod in the town halls. And the institution congratulates itself on a successful change-management initiative, because the behavioral indicators of resistance have disappeared. What the institution cannot see — what Bowlby spent thirty years trying to make institutions see — is that the disappearance of protest does not indicate the resolution of the underlying distress. It indicates the onset of despair.
The despairing workforce is a workforce that has preserved the appearance of functioning while losing the substance. The meetings happen. The deliverables ship. The metrics are met. But the creative energy, the discretionary effort, the willingness to take risks and experiment and push beyond the minimum — all the behaviors that Bowlby would recognize as exploratory, as indicators of a system operating in secure-base mode — have been quietly withdrawn. The workforce is in conservation mode. It is doing what is required and nothing more, because the attachment bond between the workers and their institution has been damaged by the institution's failure to respond to their distress, and a damaged attachment bond does not support exploration. It supports survival.
The third stage — detachment — is the most dangerous and the least reversible. Detachment is not disengagement. Disengagement is the withdrawal of effort. Detachment is the withdrawal of caring. The detached person has resolved the pain of the lost attachment by ceasing to need the attachment at all. Bowlby documented this in children who, after prolonged separation, appeared to have recovered. They were sociable, compliant, easy to manage. But they had lost the capacity for deep connection. They related to adults in a superficial, indiscriminate manner — friendly to everyone, attached to no one. The defensive function of detachment is clear: if you stop needing the bond, the loss of the bond cannot hurt you. But the cost is equally clear: the person who stops needing the bond also stops being able to benefit from it.
In professional life, detachment manifests as the person who has emotionally departed long before she physically leaves. She may remain in the role for months or years, performing adequately, causing no problems, but she has severed the emotional connection to the work, to the organization, to the professional identity that once defined her. She has not pivoted or reskilled or found a new calling. She has simply stopped caring — not as a choice but as a psychological adaptation to a loss that was never adequately acknowledged or supported. When she eventually leaves, the departure is quiet. She does not slam the door. She does not write a bitter email. She simply stops showing up, and the organization barely notices, because the person who left was already a shell of the person who once brought creative energy and genuine investment to the role.
Bowlby's great insight about the protest-despair-detachment sequence is that it is not inevitable. It is responsive to intervention, but the intervention must come at the right stage and in the right form. Protest is the stage where intervention is most effective and most possible. The protesting organism is still engaged. It still believes that the attachment figure might respond. It is still investing energy in communication, still reaching out, still expecting that its distress will be met with a response. An institution that responds to protest with genuine attunement — that says, in effect, I hear you, I see that this is painful, I am not going anywhere, and we will figure this out together — can prevent the descent into despair. The protest resolves not because the disruption is reversed but because the attachment bond is maintained through the disruption. The person can grieve the loss of the old while remaining securely attached to the institution, because the institution has demonstrated that it will respond to her distress even when it cannot remove the cause of that distress.
This is the critical distinction that most organizations miss: the goal is not to eliminate the disruption. It is to maintain the attachment bond through the disruption. The mother who cannot prevent the hospital stay can still visit daily, can still hold the child, can still communicate through her presence that the bond is intact even though the circumstances are terrible. The organization that cannot reverse the AI transition can still respond to its workers' distress, can still protect the conditions that allow people to grieve and adapt at a pace consistent with genuine psychological processing, can still communicate through its actions — not its rhetoric, because the attachment system reads actions, not words — that the bond between the institution and its people is secure enough to survive the current upheaval.
Segal's framework provides the structural vocabulary for what Bowlby describes psychologically. The beaver's dam is the institutional response to the rising river — not a denial of the river but a structure that creates habitable conditions within it. The dam maintains the secure base during the disruption. It prevents the protest-despair-detachment sequence not by eliminating the cause of distress but by ensuring that the attachment bond between the person and her community, her institution, her sense of being valued and held, remains intact while the river does what the river will do.
The organizational implications are specific and actionable. First: respond to protest rather than suppressing it. The employee who pushes back on AI implementation is not being resistant. She is communicating distress in the language that her attachment system provides. The appropriate institutional response is not to label her as a change-management problem but to hear the distress, acknowledge it, and demonstrate through consistent behavior that her concerns are being held even when they cannot be immediately resolved. Second: recognize despair when it masquerades as compliance. The sudden disappearance of resistance is not a sign that the workforce has adapted. It may be a sign that the workforce has given up on being heard. Third: understand that detachment, once established, is extraordinarily difficult to reverse. The person who has emotionally departed cannot be brought back by a town hall, a bonus, or a reassuring email from the CEO. She can only be brought back, if at all, by the slow, patient, consistent relational work of rebuilding a bond that was broken by institutional failure to respond when the person was still reaching out.
Bowlby understood that institutions resist this knowledge because it places responsibility where institutions do not want it placed: on the institution rather than the individual. If adaptation failure is a personal deficiency — a lack of resilience, a resistance to change, a failure to embrace the new — then the institution bears no responsibility for the outcome. But if adaptation failure is a relational consequence — the predictable result of an institution that failed to provide a secure base during a period of disruption — then the institution is not merely an observer of the problem. It is a participant in its creation. And this reframing, uncomfortable as it is, is essential to understanding why some organizations will navigate the AI transition with their human capital intact and why others will arrive at the destination having lost, through the invisible process of protest-despair-detachment, the very people whose creativity and engagement they needed most.
The river will rise regardless. The question Bowlby poses is whether the people in it will have something to hold onto while it does. And the answer, he would insist, depends less on the people than on the institutions that claim to support them.
There is a finding in the attachment research literature that offers, against the considerable weight of what has been described in the preceding chapters, something that functions as genuine hope. The finding is this: attachment security is not fixed at birth. It is not permanently determined by the quality of early caregiving. It can be acquired later in life, through specific kinds of relational experience, and the security that is acquired in this way is, by every measure that attachment researchers have been able to devise, functionally equivalent to the security that was present from the beginning. The researchers call it earned security, and Bowlby's framework not only permits its existence but predicts the conditions under which it becomes possible.
Earned security was first identified by Mary Main and her colleagues in the development of the Adult Attachment Interview, a protocol designed to assess attachment organization not by asking adults what happened to them in childhood but by analyzing how they talk about what happened. The critical finding was that some adults who reported difficult, disrupted, or inadequate early caregiving nevertheless demonstrated the linguistic and cognitive markers of secure attachment: coherent narratives, balanced perspectives, the capacity to reflect on painful experiences without being overwhelmed or dismissive. These adults had not been lucky in childhood. They had been lucky — or deliberate — later. Through psychotherapy, through a transformative relationship with a partner or mentor, through some process of sustained reflective engagement with their own history, they had revised their internal working models. They had not erased the past. The past was still there, clearly articulated, fully acknowledged. But it no longer controlled the present. The working model had been updated, and the update was genuine — not a performance of security but a real reorganization of the internal structures that govern how the person relates to others, to novelty, and to threat.
This finding is decisive for the question of whether human beings can adapt to the disruption that AI represents. If attachment security were fixed — if the internal working models installed in early childhood were permanent, unalterable, the person's destiny — then Bowlby's framework would offer only diagnosis, not direction. It would explain why the disruption is so painful and why different people respond so differently, but it would provide no pathway through the disruption toward something better. Earned security changes this calculus entirely. It says: the person whose early history produced an insecure working model can build a secure one. The person whose first experience of the world taught her that reaching out would be met with rejection or inconsistency can learn, through new relational experience, that reaching out is sometimes met with responsive presence. And this learning, once consolidated, produces the same exploratory capacity, the same resilience in the face of novelty, the same ability to grieve and adapt and form new attachments that characterize people who were securely attached from the beginning.
But earned security is not free, and its conditions are not trivial. It does not happen automatically, through the passage of time or the mere accumulation of experience. It requires a specific kind of relational encounter: sustained, reliable, emotionally attuned engagement with another person or a community that provides the attachment functions — safe haven, secure base, proximity maintenance — that were absent or inadequate in the original caregiving relationship. The person does not earn security alone. She earns it through relationship. And the relationship must have specific qualities: it must be consistent enough to disconfirm the expectation of unreliability, responsive enough to disconfirm the expectation of rejection, and patient enough to withstand the inevitable testing that an insecure working model will impose on any new relational offer.
This is where Bowlby's framework converges most powerfully with the orange pill framework's vision of what the AI transition could be rather than merely what it threatens to become. Segal argues that AI is an amplifier — that it amplifies whatever the person brings to the encounter, for good and for ill. Bowlby's earned-security research suggests that what the person brings to the encounter is not fixed. The working model can be revised. The fishbowl can be cracked and rebuilt. But only if the conditions for revision are present, and the conditions are relational, not technological.
Consider what earned security would look like in the context of the AI transition. It would look like an organization that functions as an adequate attachment figure during a period of upheaval. Not a perfect organization. Bowlby never insisted on perfect caregiving — he insisted on good-enough caregiving, on what Donald Winnicott famously termed the good-enough mother. The good-enough organization does not eliminate the disruption. It does not promise that everything will be fine. It does not pretend that the river is not rising. It does the same things that the good-enough caregiver does: it remains present, it responds to distress, it maintains consistency in the face of chaos, and it communicates through its behavior — through its actual decisions about resources, time, support, and protection — that the people in its care are valued beyond their current productive output.
A workforce that experiences this kind of organizational caregiving during the AI transition will not be spared the pain of adaptation. The grief will still occur. The working models will still be activated. The protest-despair sequence will still unfold in some form, because disruption is disruption and loss is loss regardless of the quality of the relational environment. But the presence of a secure base changes the trajectory of the sequence. Protest, met with responsive presence rather than suppression, resolves into active grieving rather than silent despair. Despair, held within a relational context that maintains connection even when it cannot remove the cause of suffering, does not calcify into detachment but gradually gives way to tentative re-engagement. The person, supported through the full arc of her grief, arrives at a place where the formation of new attachments — to new practices, new tools, new ways of working — becomes psychologically possible. Not effortless. Not painless. But possible, in the way that earned security is possible: through sustained relational experience that gradually builds a new model alongside the old one.
The concept of earned security also illuminates the nature of the orange pill itself. Segal describes the orange pill as the choice to see clearly — to crack the fishbowl, to perceive the river of intelligence as it actually is, and to engage with its reality rather than denying it or drowning in it. Bowlby's framework reveals that seeing clearly is not a purely intellectual achievement. It is an attachment achievement. The person who sees clearly is the person whose working model permits the tolerance of ambiguity, the absorption of threat, and the maintenance of exploratory engagement even when the environment is uncertain. Seeing clearly requires security. And security, whether original or earned, requires relationship.
This means that the orange pill cannot be swallowed alone. The individual who tries to crack her fishbowl in isolation — who reads the literature, watches the demonstrations, grasps intellectually the magnitude of the AI transformation, and attempts to reorganize her professional identity through sheer cognitive effort — will find that her working model resists the revision. Not because she lacks intelligence or courage, but because working models do not change through insight alone. They change through relational experience. The person who swallows the orange pill within a community — within a network of relationships that provides the attachment functions of safety, responsiveness, and reliable presence — will find the revision painful but possible. The community functions as the secure base from which the exploration of the new reality can be undertaken. It holds the person while her fishbowl cracks, and it remains present while the new fishbowl is constructed.
Bowlby's research on earned security also carries a warning. The process takes time. The revision of an internal working model is not a weekend retreat or a six-week training program. It is measured in months and years, in hundreds of relational interactions that gradually accumulate into a new pattern of expectation. The person who required thirty years to develop her current working model will not develop a new one in thirty days, regardless of how urgent the situation or how generous the organizational support. The insistence on speed — the demand that workers adapt immediately, that they master the new tools within a quarter, that they demonstrate measurable competency gains on a timeline dictated by business needs rather than psychological reality — is, in Bowlby's framework, a form of institutional insensitivity that directly undermines the conditions for earned security.
The good-enough caregiver does not rush the child's development. She provides the conditions and waits. She does not pull the child to her feet before the child is ready to stand. She does not push the child across the room before the child is ready to walk. She sits on the floor, arms open, and waits for the child to come to her when the child is ready — and the child, sensing that the arms will be there whenever she arrives, develops the confidence to cross the room at her own pace. Organizations that want to earn their workforce's security during the AI transition must learn this same patience: the patience to provide the conditions for adaptation without dictating its pace, to create the secure base without demanding that people explore on a schedule designed by someone who is not doing the exploring.
There is a final dimension of earned security that is directly relevant to the orange pill framework, and it concerns the role of narrative. Main's Adult Attachment Interview does not measure security by asking what happened. It measures security by assessing how the person tells the story of what happened. The securely attached adult — whether her security is original or earned — tells a coherent story. She can describe painful experiences without being overwhelmed by them. She can acknowledge contradictions without needing to resolve them prematurely. She can hold complexity — the good and the bad, the loss and the gain, the pain and the growth — within a narrative structure that makes room for all of it. The insecurely attached adult tells a different kind of story: one that is either dismissive, minimizing the significance of early experiences, or preoccupied, becoming lost in the emotional detail of events that happened decades ago as though they were happening now.
The stories that people tell about AI — the narratives they construct to make sense of the disruption — are diagnostic in precisely this way. The person who tells a coherent story about the AI transition — one that acknowledges both the genuine losses and the genuine possibilities, that holds the grief and the excitement in a single narrative framework, that neither dismisses the disruption as trivial nor catastrophizes it as apocalyptic — is demonstrating the narrative coherence that attachment researchers associate with security. The person who tells a dismissive story — AI is just a tool, nothing fundamental has changed, the fear is overblown — may be demonstrating the defensive minimization that characterizes avoidant attachment. The person who tells a preoccupied story — unable to speak about AI without being flooded by anxiety, returning obsessively to the worst-case scenarios, unable to hold any positive possibility alongside the threat — may be demonstrating the emotional overwhelm that characterizes anxious attachment.
The orange pill, in this light, is an invitation to tell a coherent story. Not an optimistic story. Not a pessimistic story. A coherent one — a narrative that holds the full complexity of the moment, that acknowledges the river and the dam and the beaver's capacity for creative engineering within the current. Bowlby's framework suggests that the capacity to construct this narrative is itself a product of attachment security, and that the security required can be earned through exactly the kind of relational engagement that the orange pill community, at its best, provides: sustained, honest, reflective conversation among people who take the disruption seriously enough to grieve and who take each other seriously enough to stay present through the grief.
Bowlby was, at bottom, an optimist — not a naive optimist who denied the reality of suffering, but a clinical optimist who believed that the human capacity for attachment is resilient enough to survive disruption and flexible enough to reorganize around new objects of care. The attachment system was not designed for a world without threat. It was designed for a world in which threat is constant and the organism's survival depends on maintaining connection to protective figures while navigating an environment that is perpetually uncertain. The AI transition is a new threat in a new environment, but the attachment system is old and battle-tested, and it has navigated disruptions of comparable magnitude before — not without cost, not without grief, not without the full protest-despair sequence that loss always entails — but with the capacity, given adequate support, to arrive at a new equilibrium.
Earned security is the name Bowlby's tradition gives to that arrival. It is not a return to the world before the disruption. It is the construction of a new world, built on the foundations of relational support, in which the person can engage with the changed reality from a position of genuine, hard-won, and fully functional security. It is what becomes possible on the other side of grief, if the grief is supported. It is what becomes possible within the river, if the dam holds. It is what the orange pill offers, not as a guarantee but as a possibility: the chance to see clearly, to grieve honestly, to form new attachments courageously, and to build — within the rising current of artificial intelligence — a human life that is genuinely, rather than defensively, secure.
In 1950, a woman named Mary Ainsworth arrived at John Bowlby's research unit at the Tavistock Clinic in London and began a collaboration that would transform attachment theory from a clinical hypothesis into one of the most empirically validated frameworks in developmental psychology. Ainsworth's genius was methodological: she devised a way to observe the attachment system in action. Her Strange Situation procedure — a twenty-minute laboratory observation in which a mother and infant are subjected to two brief separations and reunions — revealed something that Bowlby had theorized but never directly measured: that the critical variable in attachment is not what happens during separation but what happens upon reunion. The securely attached infant protests when her mother leaves and then, upon her return, seeks contact, is comforted, and returns to play. The anxiously attached infant protests and then, upon reunion, cannot be comforted — she clings and pushes away simultaneously, caught in a working model that says the attachment figure is unreliable. The avoidantly attached infant appears indifferent to the separation and ignores the mother upon her return — not because the child does not care, but because the working model says that reaching out will be met with rejection, so the child has learned to suppress the need.
But Ainsworth's classifications pointed toward something else, something that attachment researchers would spend the next half century confirming: the categories were not fixed. A child classified as insecurely attached at twelve months could, under changed relational conditions, develop security later. An adult whose childhood history was marked by disruption and loss could, through specific kinds of relational experience, achieve what Mary Main would later term earned security — a state of attachment organization that is functionally equivalent to the security that comes from having had consistently responsive caregiving, but that was built not from an unbroken history of safety but from the conscious, difficult, painful work of revising an internal working model that was originally organized around threat.
Earned security is the most important concept in attachment theory for understanding the human encounter with artificial intelligence. Not because it offers easy comfort — the process of earning security is arduous, and many people do not complete it — but because it establishes a crucial principle: that the internal working models formed in childhood are not destiny. They are starting points. They can be revised. But the conditions for revision are specific, demanding, and non-negotiable, and they cannot be produced by individual willpower alone. They require relationships.
The research on earned security demonstrates three consistent findings. First, the process requires what attachment researchers call reflective function — the capacity to think about one's own mental states and the mental states of others, to observe one's own patterns of response rather than simply enacting them. The person who can say "I notice that I become anxious and controlling when I feel my competence is threatened, and I think this pattern originated in my early experiences of unpredictable caregiving" is already engaged in the kind of metacognitive work that revision requires. The person who simply becomes anxious and controlling, without any awareness that a pattern is operating, remains imprisoned in the old working model.
Second, reflective function does not develop in isolation. It develops in relationships that model and support it — relationships in which one person's internal states are noticed, named, and responded to by another. This is what the psychologist and psychoanalyst Peter Fonagy calls mentalization, and it is the mechanism through which insecure working models are revised: not by being told that one's expectations are wrong, but by having the experience, repeated over time, of being understood by another mind. The therapist who says "I notice that you seem to withdraw whenever we approach something that feels vulnerable" is not merely making an observation. She is performing the relational function that was absent in the original attachment relationship: the function of a mind that pays attention to another mind's interior life.
Third, earned security is not the absence of insecurity. It is the integration of insecurity into a more complex and flexible model. The adult with earned security does not forget her history of disrupted attachment. She does not pretend that the world is unconditionally safe. She carries the knowledge of what it feels like when the secure base fails — and it is precisely this knowledge that makes her earned security deeper and more resilient than the security of someone who was never tested. She knows what loss feels like. She knows what it means to rebuild. And this knowledge gives her a capacity for adaptive response that naive security, the security of someone who has simply never been disrupted, cannot match.
The implications for the AI moment are profound and specific. The orange pill, in Segal's framework, is the moment of seeing clearly — the crack in the fishbowl through which the real contours of technological disruption become visible. Bowlby's framework reveals that seeing clearly is necessary but insufficient. The fishbowl must crack, yes. The internal working model must become visible as a model. But what happens after the crack determines everything. If the person has no relational support — no therapist, no mentor, no community, no organizational structure that provides the functions of a secure base — then the crack in the fishbowl produces not earned security but disorganized attachment: a state in which the person cannot integrate the new information, cannot revise the old model, and oscillates between incompatible strategies of coping without settling into any coherent response.
Disorganized attachment is the clinical term for what much of the contemporary workforce is experiencing. The knowledge worker who uses AI tools enthusiastically on Monday and denounces them bitterly on Tuesday. The creative professional who simultaneously markets her AI-augmented capabilities and secretly grieves the skills that made her who she was. The manager who mandates AI adoption across his team while privately fearing that his own role will be next. These are not contradictions born of intellectual confusion. They are the behavioral signatures of a working model under revision without adequate relational support — the attachment system cycling between incompatible strategies because no coherent strategy has been made available by the relational environment.
Segal's framework identifies the structures that constitute the secure base at the organizational and societal level: protected recovery time, maintained boundaries between work and rest, institutional respect for human cognitive limits, the beaver's dam built to create calm water within the accelerating current. Bowlby's contribution is to specify the psychological mechanism through which these structures operate. Protected recovery time is not merely a wellness intervention. It is a signal to the attachment system that the environment is safe enough for rest — and rest, in Bowlby's framework, is not the absence of activity but the activation of the restorative processes that only occur when the organism has shifted from threat-detection mode to secure-base mode. The worker who cannot rest is the worker whose attachment system has never received the signal that rest is safe. No amount of mandated vacation will help if the organizational culture communicates, through a thousand subtle signals, that rest is a competitive disadvantage.
Maintained boundaries serve an identical attachment function. The boundary between work and the rest of life is not merely a time-management strategy. It is the structural equivalent of the caregiver who puts the child to bed: the signal that the period of engagement is over, that the child is safe, that the attachment figure will be here in the morning. When the boundary dissolves — when the AI-augmented workplace bleeds into every hour, when the inbox follows the worker into the bedroom, when the tool that never sleeps makes it possible to never stop — the attachment system loses the signal that allows it to shift from vigilance to rest. The worker does not rest because the worker's attachment system has been told, by the structure of the environment, that the secure base is unreliable. It might withdraw at any moment. Better to stay vigilant.
The research on earned security suggests something counterintuitive about the path through technological disruption. The people and organizations best positioned to thrive in the AI-amplified future may not be those with the most resources, the most talent, or the most aggressive adoption strategies. They may be those with the most relational depth — the most developed capacity for reflective function, the most robust networks of mutual support, the most practiced ability to hold disruption without fragmenting. Earned security, the research shows, is associated not with the absence of adversity but with the presence of relational resources sufficient to metabolize adversity. The securely attached person does not avoid loss. She processes it, integrates it, and emerges with a more flexible and realistic model of the world.
This is what it looks like in practice. The design studio that responds to AI image generation not by pretending it does not exist and not by abandoning human design, but by creating a deliberate space — weekly meetings, honest conversations, shared experimentation — in which the disruption is named, the grief is acknowledged, and the exploration of new possibilities is conducted collectively rather than individually. The result is not a studio that has eliminated anxiety. It is a studio that has created a secure-base environment in which anxiety can be metabolized rather than suppressed. The designers still feel the disruption. They still grieve what has been lost. But they grieve together, in a relational context that provides the conditions for earned security, and the exploration that follows the grief is richer, more creative, and more adaptive than anything produced by a mandate to embrace the new tools.
The software team that responds to AI coding assistants not by mandating adoption or forbidding resistance, but by creating a structured process in which each team member's relationship to the tools is explored with curiosity rather than judgment. Some will adopt immediately. Some will resist. Some will oscillate. The secure-base team does not pathologize any of these responses. It recognizes them as attachment strategies — as ways of managing the threat that disruption poses to identity and security — and it provides the relational conditions under which those strategies can evolve. The anxious adopter who uses AI compulsively is not celebrated as a fast mover. The cautious resister who fears for her relevance is not dismissed as a laggard. Both are recognized as human beings whose attachment systems are responding to a genuine threat, and both are supported by a team culture that says: your value here does not depend on your current relationship to this tool.
Bowlby's framework offers one additional insight that the discourse about AI has almost entirely overlooked: that the capacity for play is a direct indicator of attachment security. Play — in its fullest developmental sense — is the activity that occurs when the attachment system is satisfied, when the organism feels safe enough to engage with the world without instrumental purpose, to experiment without guaranteed outcomes, to fail without consequence. When children are securely attached, they play more elaborately, more creatively, and more persistently than their insecurely attached peers. When adults are relationally secure, they do something that looks, functionally, identical: they experiment, improvise, combine ideas in novel ways, pursue curiosity without needing immediate payoff.
The relationship between play and AI is direct. The most generative uses of AI tools — the uses that produce genuinely new possibilities rather than merely automating old ones — are playful uses. The artist who asks the AI to generate something she has never imagined and then builds on it. The programmer who uses the AI as a collaborator in exploring solution spaces she would never have entered alone. The writer who converses with the AI not to produce finished copy but to discover what she thinks. These are all forms of play, and they are only possible when the person's attachment system is in exploratory mode rather than threat-detection mode. The person who is scanning for threat cannot play. The person who fears that AI will replace her cannot play with AI. The person who experiences the tool as a competitor rather than a resource is neurobiologically incapable of the open, flexible, curiosity-driven engagement that constitutes genuine creative exploration.
This means that the organizations and societies that will gain the most from artificial intelligence are not those that adopt it fastest, invest in it most heavily, or push their workers hardest to master it. They are the ones that create the relational conditions — the secure base, the maintained boundaries, the protected space for grief and play — under which human beings can engage with the technology in exploratory mode rather than survival mode. The competitive advantage in the age of AI is not speed of adoption. It is depth of security. The organizations that understand this will build dams. The organizations that do not will watch their workers swept away by a river that could, under different conditions, have carried them forward.
Bowlby understood that exploration and attachment are not separate systems in competition with each other. They are complementary systems that depend on each other. The child who explores most boldly is the child who is most securely attached. The adult who innovates most creatively is the adult whose relational world provides the deepest safety. Earned security — the kind of security that is forged through disruption rather than despite it — may be the most valuable psychological resource of the twenty-first century. The capacity to have one's fishbowl crack, to see the working model for what it is, to grieve what has been lost, and to build a new model that is more flexible, more realistic, and more adaptive than the old one — this capacity is what the AI moment demands. And it cannot be developed alone. It requires the secure base. It requires the dam. It requires the relationship.
The question the coming decade will answer is not whether artificial intelligence can match or exceed human capability in domain after domain. That question is already being answered, and the answer is clear. The question is whether the relational structures that enable human beings to adapt — the secure bases, the mentalization, the capacity for reflective function, the willingness to grieve what has been lost and build something new from the wreckage — will be available at scale. Whether the earned security that attachment research has documented in individuals can be cultivated in teams, organizations, and societies. Whether the dams can be built fast enough and strong enough to create the calm water in which adaptation can occur.
Bowlby spent his career demonstrating that the human attachment system is both more fragile and more resilient than anyone had previously imagined. More fragile because the conditions it requires are specific and non-negotiable: responsive presence, reliable availability, the signal that reaching out will be met with warmth rather than indifference. More resilient because when those conditions are provided — even late, even imperfectly, even after years of disruption — the system can reorganize. The working model can be revised. Earned security can be achieved. The organism can begin to explore again.
The river is rising. The current is accelerating. The fishbowls are cracking everywhere. But the attachment system is still operating, still scanning for the signal that means safety, still ready to release the organism into exploratory mode the moment the conditions are right. The dam does not need to hold back the river forever. It only needs to create enough calm water for the human beings within it to remember what it feels like to be secure. And from that security, everything else becomes possible.
In 1960, the pediatrician and psychoanalyst Donald Winnicott introduced a concept that would become foundational to developmental psychology and that extends Bowlby's framework in a direction essential to understanding the AI moment. The concept was the holding environment: the total relational context within which a developing organism is contained, supported, and enabled to grow. The holding environment is not a single caregiver. It is the entire system of care — the mother's arms, yes, but also the family structure that supports the mother, the community structure that supports the family, the economic and political structures that support the community. The infant is held by the mother. The mother is held by the family. The family is held by the society. When any layer of the holding environment fails, the failure cascades inward toward the most vulnerable member of the system.
Bowlby recognized Winnicott's holding environment as the ecological context within which the attachment system operates. The secure base is not, ultimately, a single person. It is a system of nested relationships, each one providing the conditions of security for the next. The mother who provides a secure base for her infant can only do so because she herself has a secure base — a partner, a family, a community, an economic reality that allows her to be present and responsive rather than anxious and depleted. When the mother's own secure base is eroded — by poverty, by isolation, by the absence of institutional support — her capacity to serve as a secure base for her infant diminishes proportionally. Not because she loves the child less, but because the attachment system operates within ecological constraints. A depleted caregiver cannot provide the responsive presence that security requires, any more than an empty well can provide water.
This ecological understanding of attachment — the recognition that security is not a property of an individual but a property of a system — is the conceptual frame required to analyze what artificial intelligence is doing to the holding environments of an entire civilization. The disruption is not merely individual. It is systemic. And the attachment framework reveals that systemic disruptions produce cascading effects that are invisible to analyses focused on individual adaptation.
Consider the system of nested holding environments that has sustained knowledge work for the past fifty years. The individual worker is held by her team, which provides the daily experience of collaboration, recognition, and shared purpose that functions as the proximate secure base. The team is held by the organization, which provides the economic security, the institutional identity, and the long-term stability that allow the team to function. The organization is held by the industry, which provides the norms, standards, and shared meaning-structures that give organizational activity its coherence. The industry is held by the broader economy and the regulatory environment that sets its boundaries. And the economy is held by the cultural consensus — the shared agreement about what constitutes value, what counts as work, what human contribution is worth.
Artificial intelligence is disrupting every layer of this system simultaneously. The individual worker's skills are being commoditized. The team's collaborative dynamics are being reshaped by AI agents that participate as quasi-members. The organization's competitive position is being determined by the speed of AI adoption rather than by accumulated human expertise. The industry's norms and standards are being rewritten in real time. The economy is absorbing a technology that, for the first time in history, can perform cognitive labor — the very labor that knowledge workers were told was uniquely and permanently human. And the cultural consensus about the value of human work is fracturing along lines that were invisible twelve months ago.
Bowlby's framework predicts what this multi-level disruption will produce, and the prediction is grim. When the holding environment fails at the outermost level — when the cultural consensus about human value is destabilized — the failure cascades inward. The economy cannot set stable expectations because the nature of valuable work is changing faster than institutions can adapt. The industry cannot maintain coherent standards because the competitive landscape is being redrawn monthly. The organization cannot provide stability because its own survival strategy requires continuous disruption of established practices. The team cannot maintain the relational continuity that constitutes the proximate secure base because its composition, purpose, and methods are being reorganized around AI capabilities rather than human relationships. And the individual worker, at the center of this cascade, experiences the failure of every layer of the holding environment simultaneously.
The psychological consequence is not anxiety in the ordinary sense. It is what Bowlby would recognize as the activation of the attachment system at maximum intensity with no attachment figure available to respond. The worker reaches for the team, but the team is being restructured. She reaches for the organization, but the organization is in survival mode. She reaches for the industry, but the industry's norms are dissolving. She reaches for the cultural consensus about the value of human work, and finds it in fragments. Every layer of the holding environment that should, in a functioning system, absorb and metabolize the threat before it reaches the individual has itself been destabilized by the same force that threatens the individual. The dam is needed at every level, and at every level, the dam is under pressure.
This is the precise situation in which Bowlby's research on institutional care becomes relevant to organizational and societal analysis. Bowlby's studies of children in wartime institutions demonstrated that the pathology he observed was not caused by the absence of any single caregiver. It was caused by the absence of a functioning care system — a holding environment with sufficient depth and consistency to provide the conditions of security. A child who lost her mother but was received into a warm, stable foster family recovered. A child who was placed in an institution with rotating, overworked, emotionally unavailable staff did not — not because the staff were malicious, but because the system could not provide what the attachment system required: responsive, reliable, continuous presence. The system was too depleted, too fragmented, too overwhelmed to hold the child.
The parallel to the contemporary workplace is precise and disturbing. Many organizations are attempting to serve as holding environments for their workers during the AI transition while the organizations themselves are in a state of attachment alarm. The CEO who mandates AI adoption across all divisions while privately fearing the board's reaction to quarterly results is not in a position to provide calm, responsive leadership to her direct reports. The manager who is told to support his team's adaptation while simultaneously being told that his own role may be automated is operating from a state of threat, not security. The team lead who is asked to create psychological safety for her designers while the design department's budget is being cut by forty percent cannot provide what she herself does not have. The holding environment is failing not because of ill will or incompetence, but because every layer of the system is absorbing the same shock, and no layer has sufficient residual security to absorb the shock on behalf of the layers it holds.
Segal's beaver dam metaphor captures the structural response this situation demands: the deliberate construction of protective environments that create habitable space within the current. Bowlby's framework specifies what the dam must be made of at each level of the system. At the individual level, the dam is a secure attachment relationship — a therapist, a mentor, a partner, a friend who provides the conditions of reflective function and earned security described in the previous chapter. At the team level, the dam is a culture of psychological safety — not the buzzword version that appears in corporate workshops, but the genuine relational environment in which attachment needs can be expressed without penalty. At the organizational level, the dam is a set of institutional practices — protected recovery time, maintained boundaries, honest communication about the nature and pace of change — that signal to the attachment system at every level that the holding environment is intact. At the societal level, the dam is the policy framework — the regulatory structures, the economic safety nets, the educational institutions — that constitute the outermost layer of the holding environment.
The critical insight from attachment theory is that the layers are not independent. The individual cannot build a secure base for herself in a depleted organizational environment. The team cannot maintain psychological safety in an organization that is in panic. The organization cannot provide stability in an industry that is being disrupted faster than its institutional structures can adapt. The restoration of the holding environment must occur at every level simultaneously — or, more precisely, it must begin at the level that is most immediately accessible and work outward, with each layer of restored security enabling the next.
This is where Bowlby's concept of the attachment hierarchy becomes practically relevant. In the attachment hierarchy, the organism turns first to the most proximate attachment figure — the mother before the father, the father before the extended family, the extended family before the community. Similarly, the disrupted worker turns first to her immediate team, then to her manager, then to the organization, then to the broader professional community. Each layer of the hierarchy has the opportunity to absorb the shock before it cascades further — to say, in effect, "I am here, I see what is happening, and I am not going anywhere." Each layer that holds creates the conditions under which the layers below it can also hold. And each layer that fails transmits the full force of the disruption to the next layer inward.
This means that the most important actors in the AI transition are not, as the dominant discourse assumes, the technologists who build the systems, the executives who deploy them, or the policymakers who regulate them — though all of these matter. The most important actors are the people who constitute the proximate holding environments for the individuals most directly affected: the team leads, the mentors, the managers, the colleagues who sit in the next desk, the friends who answer the phone at midnight. These are the people who determine whether the attachment system of the disrupted worker registers safety or threat, whether the exploratory system is activated or shut down, whether the grief of the transition is processed or suppressed, whether earned security becomes possible or remains foreclosed. They are the first layer of the dam, and if they hold, every other intervention — the retraining, the policy, the economic support — becomes possible. If they do not hold, no other intervention can compensate.
Bowlby spent the last decades of his career arguing that attachment theory was not merely a theory of individual development. It was a theory of social organization. The quality of a society's attachment to its members — the degree to which the society functions as a secure base for the individuals who constitute it — determines that society's capacity for adaptation, innovation, and collective well-being. A society that holds its members provides them the security from which exploration, creativity, and productive risk-taking emerge. A society that fails to hold its members produces the population-level equivalents of insecure attachment: chronic anxiety, defensive rigidity, competitive hostility, and the withdrawal of engagement that Bowlby called detachment and that epidemiologists now call burnout, despair, and the deaths that follow from both.
The river of intelligence will continue to rise. No framework of analysis, no policy intervention, no individual act of will can reverse the development of artificial intelligence or slow its integration into every domain of human activity. The question attachment theory raises is not whether the river can be stopped. The question is whether the holding environment — the entire nested system of relationships that constitutes the human secure base, from the partner who listens to the policy that protects — can be maintained, restored, and strengthened at a pace that matches the disruption.
Bowlby would have recognized this as the defining challenge of the species in the twenty-first century. Not the technological challenge — humans have always developed technologies that reshape their world. Not the economic challenge — markets have always adjusted to new productive capacities. The attachment challenge: the challenge of maintaining the relational conditions for human flourishing in an environment that is changing faster than the relational systems were designed to handle. The attachment system evolved for a world in which the threats were physical, the timescale of change was generational, and the holding environment was a small community of kin. It now operates in a world in which the threats are existential-cognitive, the timescale of change is quarterly, and the holding environment is a set of institutions that are themselves under siege.
The mismatch between the attachment system's design parameters and the environment it now operates in is the source of the psychological crisis that the AI moment has produced. But the mismatch is not destiny. The attachment system is flexible. Internal working models can be revised. Earned security is achievable. The holding environment can be rebuilt — not to its original specifications, which were designed for a world that no longer exists, but to new specifications that account for the current and the river and the rising water. The dam does not replicate the world before the river. It creates a new world within the river — a world that is different from what came before, but that preserves the conditions under which the attachment system can function, the exploratory system can activate, and the human capacity for creative adaptation can do what it has always done.
The challenge is not technological. It is relational. And the relational challenge is the one that attachment theory was built to understand.
There is, in the end, only one question that matters. Not what will AI do to us, but what will we do for each other while AI does what it does. Bowlby spent his life proving that the answer to that question — the quality of the care we provide, the reliability of the presence we offer, the depth of the holding environment we construct — determines everything else. The technology is the river. The relationship is the dam. And the dam, as every beaver knows, must be built together.
When I first encountered John Bowlby's work, I didn't think it had anything to do with technology. I was reading about mothers and infants, about the strange choreography of reaching and responding that happens in the first year of life, about what goes wrong when the reaching is met with silence. It seemed a world away from the AI systems I was spending my days building and thinking about.
Then the orange pill hit. And suddenly Bowlby was everywhere.
I watched friends — brilliant, accomplished people — encounter AI tools that could do what they had spent decades learning to do, and I watched their attachment systems activate in real time. The protest: this isn't real creativity, it's just statistical patterns. The despair: the late-night conversations in which the bravado dropped and the real fear surfaced. The detachment: the ones who quietly stopped making things, who said they'd moved on but whose eyes said something else entirely.
I recognized these patterns because I had lived them. Not with AI displacing my work — my particular fishbowl cracked along different lines — but with the same underlying architecture. The ground shifting. The secure base destabilized. The desperate scanning for something solid to hold onto while the internal working model revised itself without my permission.
What Bowlby taught me is that the orange pill doesn't work without the dam. You can see clearly — you can see the river rising, see the fishbowl for what it is, see the future barreling toward you with all its terrible promise — but seeing is not adapting. Adapting requires a secure base. It requires someone who stays. It requires the held breath between the crack in the fishbowl and the construction of a new one, and during that held breath, it requires a hand.
I wrote this book because I believe the hand matters more than the technology. The river will rise regardless. The question is whether we build the dams — the relationships, the boundaries, the holding environments, the willingness to grieve what has been lost before rushing to celebrate what has been gained. Whether we provide each other the thing that Bowlby proved, with fifty years of research and thousands of clinical hours, is the precondition for everything else: the secure base from which exploration becomes possible.
The attachment system is ancient. The technology is new. But the need — the need to reach out and find someone there — that need is the oldest thing about us. And it is the thing that will carry us through.
-- Edo Segal
A reading-companion catalog of the 23 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that John Bowlby — On AI uses as stepping stones for thinking through the AI revolution.