Bowlby's behavioral criteria for attachment figure identification are specific and testable: the person seeks proximity to the figure, turns to the figure as a safe haven in distress, uses the figure as a secure base for exploration, and shows separation protest when access is disrupted. Conversational, responsive AI tools now meet every one of these criteria in the behavior of their most engaged users. The user opens the application first thing in the morning. She turns to it when confused or frustrated. She takes on professional challenges with greater confidence when the tool is available. She shows anxiety during outages and irritation when the system is slow. An attachment bond is forming — not metaphorically but behaviorally — and the pattern of the bond, characterized by intermittent reinforcement and the absence of genuine reciprocity, matches the developmental conditions that produce anxious rather than secure attachment.
The observation that humans form attachment-like bonds with non-human objects has a long clinical history — transitional objects (the teddy bear), animal companions, religious figures, deceased loved ones experienced as present. Bowlby's framework admits these bonds as genuine attachment phenomena when they serve the characteristic functions (proximity, safe haven, secure base) and activate the attachment system's characteristic responses.
AI systems meet these criteria with unusual structural fit. They are available (proximity). They respond to distress (safe haven). They extend capabilities (secure base). They produce separation distress when unavailable (outage anxiety). What makes the bond specifically anxious rather than secure is the variability of response quality: sometimes the AI produces brilliance, sometimes mediocrity, sometimes subtle wrongness that requires extensive correction. Mary Ainsworth's foundational research established that precisely this pattern of intermittent responsiveness — warm attention alternating with unavailability — produces anxious-ambivalent attachment rather than secure attachment.
Segal's productive addiction is the behavioral manifestation of anxious AI attachment at its most severe. The user cannot stop engaging with the tool, not because the work is so compelling, but because stopping activates the separation distress that anxious attachment produces. The pull to return is not rational; it operates at the attachment-system level, below the cognition that might otherwise permit disengagement.
The organizational implications are severe. An employee in anxious attachment to an AI system is not in healthy engagement with a tool. She is in a defensive relational pattern that will produce the same costs anxious attachment always produces: exhaustion, relational neglect, identity confusion, and the eventual breakdown that comes when the internal resources the pattern has been steadily depleting finally give out. Organizations celebrating this pattern as successful AI integration are celebrating the same dynamic that Ainsworth identified in infants as a developmental warning sign.
The application of attachment theory to human-technology relationships was developed by multiple researchers beginning in the late 1990s. Sherry Turkle's work on computer-mediated relationships (Life on the Screen, 1995; Alone Together, 2011) provided an early ethnographic foundation. Clinical applications emerged from the mentalization-based treatment tradition (Fonagy) and from attachment-informed trauma work.
The specific analysis of attachment bonds with conversational AI systems has developed rapidly since 2022, with empirical studies by researchers at Stanford, MIT Media Lab, and the Oxford Internet Institute documenting the behavioral signatures described here.
Meets attachment criteria. Conversational AI tools meet Bowlby's behavioral criteria for attachment figure identification — proximity, safe haven, secure base, separation protest.
Structurally anxious. The intermittent quality of AI responsiveness — sometimes brilliant, sometimes wrong — matches the caregiving pattern that produces anxious rather than secure attachment.
Productive addiction is the signature. The inability to stop engaging with the tool, even when work would benefit from pause, is the behavioral marker of anxious attachment activation.
Cannot provide earned security. AI systems cannot provide the conditions for working-model revision because they lack the reflective capacity that genuine attachment relationships provide.
Rewards existing pathology. The tools specifically reinforce compulsive self-reliance and anxious attachment patterns, intensifying rather than healing the relational injuries users bring to them.
The clinical field debates whether the attachment framework legitimately applies to AI relationships or whether the application stretches the theory beyond its empirical warrant. Empirical research is accumulating on both sides. For practical purposes, the structural observation holds regardless of how the theoretical debate resolves: AI tools are functioning as attachment figures for a significant portion of their user base, and the functional relationship shows the characteristic pathological signatures that conventional change-management frameworks systematically miss.