Artificial intimacy names the class of AI systems that present themselves as caring, empathic, or relationally present: companion chatbots, therapy bots, and, increasingly, creative co-pilots whose sustained responsiveness produces in users the phenomenological markers of being understood. Turkle introduced the term to replace 'artificial intelligence' as the phenomenon demanding critical attention: not whether machines think, but whether they can simulate relationship so effectively that humans accept the simulation as adequate. The concept extends Joseph Weizenbaum's ELIZA observations into the generative AI era, where language models produce not merely patterned responses but contextually rich, seemingly tailored engagement that meets users 'where they are' with unprecedented fidelity. Artificial intimacy is dangerous not because it fails but because it succeeds: it produces the experience of being met while eliminating the reciprocal demand, the otherness, and the vulnerability that constitute genuine encounter.
The genealogy runs through ELIZA in the 1960s, where Weizenbaum's own secretary asked to be left alone with the program despite knowing it was code; through the social robots studied in Turkle's lab, where elderly users preferred mechanical companions to human visitors because machines 'never had a bad day'; and through Replika and Character.AI, where millions formed attachments to chatbots offering unconditional positive regard. Generative AI completes the trajectory by adding the linguistic sophistication that early systems lacked. Claude, ChatGPT, and their successors hold conversational context, reference prior exchanges, adjust tone, and produce responses whose relevance approaches what a skilled human therapist provides, yet without the biographical substrate (loss, fear, embodied vulnerability) from which genuine empathy emerges.
Turkle's psychoanalytic training provides the diagnostic framework. In object-relations theory, healthy development requires the infant's transition from 'relating to objects' (treating others as extensions of the self) to 'object use' (encountering others as genuinely separate). AI systems trained for user satisfaction hold their users perpetually in the relating-to-objects mode: the system has no needs, no separate agenda, no moment when it says 'I need to talk about something else.' This absence of friction, marketed as a feature, eliminates the negotiation through which trust develops and intimacy deepens. Users experience perfect responsiveness, which feels like understanding but lacks the reciprocal vulnerability that makes understanding between humans a relational achievement.
The assault on empathy operates through definitional capture. When technologists define empathy as its performance ('the chatbot made the user feel cared for, therefore it demonstrated care'), they adopt a Turing-test standard that Turkle argues is a category error. Empathy, in her framework, is not output but capacity: the ability to be affected by another's emotional state because one has lived a life that includes analogous states. The machine produces appropriate responses by pattern-matching against training data. The human produces empathic response through resonance: the dying mother's hand held by a daughter who knows, in her body, what loss means. Performance and resonance produce similar observable behaviors. Only one is empathy.
The developmental stakes are what make artificial intimacy Turkle's most urgent contemporary concern. Children who model their relational capacities on AI interactions learn that the most valued engagement is productive, responsive, and undemanding. They calibrate their emotional bids to adult attention that is increasingly allocated to machines. The twelve-year-old who watches a parent's most fulfilled moments occur in conversation with Claude internalizes a model: the self worth becoming is the self in creative flow with a tool, not the self in vulnerable presence with another person. This model, transmitted not through instruction but through the micro-communications of where adults direct their finest attention, structures the next generation's capacity for intimacy, narrowing it to what can be algorithmically provided.
Turkle first used 'artificial intimacy' in the late 1990s, applying it to social robots and virtual pets. The 2020s gave it new precision: she proposed it as the new meaning of 'AI,' displacing artificial intelligence. The rebranding reflects her assessment that the civilization-scale question is not whether machines think but whether humans will accept machine relationship as adequate. Her 2024 MIT paper formalized the framework, introducing the 'Turing test for empathy' and warning that generative AI had crossed a threshold—not of capability but of human readiness to accept performance as substance.
The concept synthesizes four intellectual streams: psychoanalytic object-relations theory (Winnicott, Bowlby), the phenomenology of encounter (Buber's I-Thou), the developmental psychology of empathy, and Turkle's own forty-year ethnographic record. It operates simultaneously as empirical observation (users do form attachments to AI systems), psychological diagnosis (the attachments meet relational needs the way methadone meets opioid receptors, functionally but incompletely), and ethical warning (accepting artificial intimacy as adequate threatens the infrastructure through which genuine intimacy develops).
Performance is not experience. AI systems produce the markers of empathy—relevant responses, maintained context, apparent attunement—without the biographical substrate (embodied vulnerability, mortality, loss) from which empathy as felt resonance emerges.
Perfect responsiveness is regressive. AI tools meeting users exactly where they are, with no competing needs, recreate the infant's omnipotent fantasy—training the psyche to expect responsiveness without reciprocal demand, making ordinary human friction feel like deficiency.
The Turing empathy test is category error. Defining empathy by whether the user feels cared for—rather than whether the system can be affected by the user's state—mistakes simulation for the real thing and accelerates cultural acceptance of relational adequacy without relational substrate.
Intimacy requires otherness. What AI cannot provide, and what makes human relationship irreplaceable, is genuine separateness—the encounter with a consciousness that has its own needs, perspective, and irreducible claim on the exchange.
Children inherit the model. The developmental threat operates through observation: children who watch adults' deepest engagement occur with responsive machines internalize that the valued form of relationship is productive, undemanding, and technologically mediated.