Sherry Turkle — On AI
Contents
Cover
Foreword
About
Chapter 1: The Inversion
Chapter 2: The New Rival
Chapter 3: Reclaiming Conversation in the Age of the Co-Pilot
Chapter 4: The Relational Cost of Creative Adequacy
Chapter 5: Children and the Modeling of Engagement
Chapter 6: Intimacy and the Machine That Listens
Chapter 7: The Evaporation of Boredom
Chapter 8: Presence as Practice
Chapter 9: What the Phone on the Table Used to Mean
Chapter 10: The Conversation We Must Not Automate
Epilogue
Back Cover

Sherry Turkle

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Sherry Turkle. It is an attempt by Opus 4.6 to simulate Sherry Turkle's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The person I hurt most during the writing of *The Orange Pill* was in the next room.

Not hurt in any dramatic way. No argument, no slammed door. Just the quiet accumulation of evenings where my eyes were on a screen and my mind was three conversations deep with Claude, building something I believed mattered, feeling more creatively alive than I had in years — while someone who loved me sat on the other side of a wall, waiting for a version of me that kept not arriving.

I knew it was happening. I describe it in the book — the inability to stop, the exhilaration that curdles into compulsion. But I described it as a problem of boundaries. A problem of discipline. A problem I could solve with better habits and a firmer off-switch.

Sherry Turkle showed me it was something else entirely.

Turkle has spent four decades at MIT studying what happens between people when technology enters the room. Not what happens to productivity or efficiency or output. What happens to the quality of human encounter. The depth of attention one person gives another. The willingness to be present with something uncomfortable — a child's confusion, a partner's need, your own unresolved fear — without reaching for a tool to resolve it.

Her work matters right now because the AI conversation has a blind spot the size of a dinner table. We talk about productivity gains and democratization and ascending friction and the imagination-to-artifact ratio. We measure what AI helps us build. Turkle measures what it costs to build it — not in dollars but in presence. In the bedtime conversations that got shorter. In the walks that stopped happening. In the twelve-year-old's eyes scanning her parent's face for evidence that she still matters more than whatever is on the screen.

The hard part of her diagnosis is that it does not let the builder off the hook by blaming the tool. The tool is extraordinary. The engagement it offers is deep, not shallow. The rival for my family's attention was not a notification feed. It was the most fulfilled version of myself I had ever experienced. Try arguing against that at the dinner table.

Turkle does not argue against it. She asks what happens in the rooms where that version of you is absent. And she asks it with the patience and precision of someone who has been listening to what people cannot quite say about their relationships with technology for longer than most of us have been thinking about it.

This book is that question, pursued across ten chapters with the rigor it deserves. It will not make the tools less extraordinary. It might make you look up from them more often.

Edo Segal · Opus 4.6

About Sherry Turkle

1948–present

Sherry Turkle (1948–present) is an American psychologist, sociologist, and Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at the Massachusetts Institute of Technology. Born in Brooklyn, New York, she trained in sociology and personality psychology at Harvard, where she received her PhD, and spent a postdoctoral period in Paris studying psychoanalysis under Jacques Lacan's influence. Turkle joined the MIT faculty in 1976 and has remained there for nearly five decades, founding the MIT Initiative on Technology and Self.

Her major works include *The Second Self: Computers and the Human Spirit* (1984), which explored how computers serve as "evocative objects" for self-reflection; *Life on the Screen: Identity in the Age of the Internet* (1995), which examined online identity; *Alone Together: Why We Expect More from Technology and Less from Each Other* (2011), which documented the paradox of increasing digital connection and decreasing human intimacy; and *Reclaiming Conversation: The Power of Talk in a Digital Age* (2015), which argued for the irreplaceable value of face-to-face conversation.

Her key concepts include the "second self," the "robotic moment" — the point at which humans become willing to accept machines as companions — and "artificial intimacy," her reframing of "AI" to describe technologies that simulate care without possessing it. Turkle's recent work focuses on generative AI's impact on empathy, relationships, and the capacity for genuine human presence, which she has called "the most human and humanizing thing we do."

Chapter 1: The Inversion

In 1984, Sherry Turkle sat in a room at MIT watching children interact with computers and saw something that filled her with hope. The machines were acting as what she called "evocative objects" — surfaces onto which children projected their deepest questions about what it means to think, to be alive, to have a self. A child would program a simple Logo turtle to draw a spiral and then ask, unprompted, whether the turtle knew what it was doing. The question was not about the machine. It was about the child. The computer had become, in Turkle's formulation, a "second self" — a mirror in which human beings could see their own cognitive and emotional architecture reflected back to them in a new light. She saw only possibility.

Four decades later, the mirror has been replaced by something far more seductive and far more dangerous. The machine is no longer a surface for self-reflection. It is a partner that builds alongside you, thinks alongside you, holds your half-formed ideas and returns them sharpened. And the danger is no longer that people will project themselves onto an unresponsive object. The danger is that the responsiveness itself, the quality of the machine's engagement, will displace the human relationships that once occupied the center of a person's emotional and intellectual life.

Turkle's intellectual trajectory from The Second Self through Alone Together to her current work on artificial intimacy traces a single arc: from enchantment to alarm. She has described this arc with characteristic honesty. In a conversation recorded by the American Psychological Association, she recalled her early years studying children and computers: "I looked at children programming and I said, oh, my, this is unbelievable. This is a new Rorschach. This is a new place in which children are expressing their extraordinary creativity... And I saw only possibility. And then I saw the narrowing of possibility as computers became not an expressive medium, but became closed down and became black boxes." The tools changed. The children changed. And the relationships between them changed in ways that Turkle found increasingly difficult to celebrate.

By 2011, when Alone Together was published, Turkle had spent nearly a decade documenting a paradox that would become the defining insight of her career: digital technologies were producing more connection and less intimacy simultaneously. People were more reachable than at any point in human history and more lonely. The teenager who texted rather than called was choosing control over vulnerability. The family that sat together while separately scrolling was choosing parallel isolation over the risk of genuine conversation. The phone on the dinner table, even when no one touched it, reduced the quality of the conversation occurring around it, because the possibility of being elsewhere was always visible, always available, always pulling at the edges of attention.

The critique rested on a stable foundation. What the screen offered was escape — a way out of the difficulty of human presence into the easier, more controllable world of curated digital interaction. Turkle could argue, with considerable moral force, that the screen's offering was inferior to what the person across the table offered. Thin digital connection versus rich embodied encounter. Choose the rich. The argument had the clarity of a diagnosis confirmed by decades of clinical observation.

Then the foundation cracked.

The AI tools that arrived in late 2025 did not offer escape. They offered engagement of a kind and quality that most human interactions cannot match. The husband described in the viral Substack post "Help! My Husband is Addicted to Claude Code" was not avoiding his family. He was not scrolling through other people's curated lives. He was not trading thin messages with distant acquaintances. He was building things. Real things. Things that worked, that solved problems, that represented the closure of a gap between what he could imagine and what he could create that had frustrated him for years. The screen was not fragmenting his attention. It was focusing it. Not thinning his engagement. Deepening it. Not offering a simulation of connection. Offering the most intense intellectual partnership he had ever experienced.

Turkle's framework, built over forty years to adjudicate between thin connection and rich connection, was not designed for this. The comparison had changed. It was no longer screen versus person, distraction versus presence, the inferior versus the superior. It was creation versus presence, actualization versus intimacy, the best version of yourself versus the people you love. These are genuinely incommensurable values. Arguing that a person should choose dinner conversation over creative fulfillment requires a claim about the hierarchy of human goods that Turkle herself has been too intellectually honest to make without qualification.

This is the inversion. For four decades, the critique of screen culture assumed that what the screen offered was less than what human presence offered. Now, along at least one axis — the axis of intellectual capability and creative partnership — what the screen offers may be more. Not more important. Not more valuable in any ultimate sense. But more immediate, more responsive, more reliably rewarding. And in the economy of daily attention, reliability of reward is the variable that determines where focus flows.

The inversion does not refute Turkle's work. It tests it at a depth her earlier research could not have anticipated. Everything she has observed about the fragility of human attention, the seductiveness of technological mediation, the tendency of human beings to prefer the controllable to the vulnerable — all of it applies, but with a twist that makes the application more uncomfortable than anything in Alone Together. Because the uncomfortable truth embedded in the inversion is this: Turkle's earlier critique was, in some sense, easy. Easy because the screen was offering something obviously thinner than what the person across the table offered. Easy because the moral argument was clear. Choose depth over surface. Choose the real over the simulated. Choose the person in front of you.

The AI moment makes the moral argument genuinely difficult, because the builder at midnight is not choosing surface over depth. The builder's engagement with Claude is deep. The satisfaction is earned through real intellectual effort. The output has genuine value. And the cost to the people around the builder — the spouse who feels invisible, the child who learns that the most fulfilling thing in their parent's life does not include them — is identical to the cost Turkle documented when the rival was merely a notification feed.

Identical cost. Entirely different moral calculus.

Turkle's concept of "the robotic moment" — which she defines not as the moment when machines become convincing companions but as "the moment when we're ready to accept them as our friends and companions" — gains new force in this context. The robotic moment is not about the machine's capability. It is about human readiness. And what the AI creative tools reveal is that human readiness to prefer machine partnership over human relationship does not require the machine to simulate friendship or companionship. It only requires the machine to offer something the human wants badly enough to choose it over presence.

The builder does not think of Claude as a friend. The builder thinks of Claude as the best instrument ever made — the tool that finally matches the ambition. And the willingness to spend eight hours with the best instrument ever made rather than thirty minutes in imperfect conversation with a spouse is not a failure of relationship. It is a statement about what the person values most, made not in words but in the allocation of the scarcest resource available: attention.

Turkle's method has always been ethnographic. She sits with people. She watches how they relate to their devices. She asks them what the experience is like, and she listens — with the patience of a clinician trained in psychoanalytic method — to what they say and what they cannot yet say. This method is precisely what the AI moment requires, because the AI moment's costs are not visible in productivity metrics or adoption curves. They are visible in the quality of human relationships, and the quality of human relationships is visible only to someone willing to sit in a room and pay close attention to what is happening between people.

What Turkle's method reveals, applied to the AI moment, is not a population distracted by their devices — the diagnosis of 2011 — but a population absorbed by their own capability. The distinction matters. A distracted person knows they are distracted. They feel the pull of the notification and recognize, at some level, that the pull is a diminishment. A person absorbed in creative flow does not experience their absorption as a diminishment. They experience it as the fullest expression of who they are. And this makes the relational cost harder to name, harder to resist, and harder to address, because the person paying the cost does not feel like they are paying anything. They feel like they are, at last, fully alive.

Turkle's early work on the second self — the idea that computers serve as mirrors for self-understanding — takes on a new and more ominous dimension here. The AI co-pilot is not a mirror. It is an amplifier. What it reflects back is not a static image of the self but an enhanced version, a self that can do more, reach further, build faster than the unaided self ever could. And the enhanced self, once experienced, creates a standard that the unenhanced self cannot meet. The builder who has spent an evening in flow with Claude returns to the dinner table with the residue of capability still humming in the nervous system, and the dinner table, with its ordinary demands and ordinary pace, feels diminished. Not because the family has changed. Because the builder has. The experience of augmented capability recalibrates the baseline of what feels stimulating, and human interaction, which does not operate at the speed of thought and does not optimize for the builder's preferences, falls below the new baseline.

This is what Turkle, in her 2024 MIT paper "Who Do We Become When We Talk to Machines?", calls the adjacent question — the question that sits next to the question of what AI can do: "What are they doing to us?" The technology's capability is not in dispute. What is in dispute, and what requires the specific form of attention that Turkle's method provides, is the psychological and relational cost of living alongside capability of this magnitude.

Turkle has watched this pattern repeat across four decades. She described it in her 2025 preface to the updated edition of Reclaiming Conversation: "We are accustomed to this cycle: Technology dazzles but erodes our emotional capacities. Then, it presents itself as a solution to the problems it created." Social media promised to expand social connection and produced isolation. AI promises to enhance human capability and may produce a form of presence-poverty so pervasive that it becomes invisible, the way water is invisible to the fish.

The inversion is not the end of Turkle's argument. It is the beginning of its most demanding chapter. Everything that follows in this book is an attempt to trace the consequences of a simple, devastating shift: the moment the screen stopped being the enemy of attention and became its most powerful ally — and, in doing so, became a more formidable rival to human relationship than distraction ever was.

---

Chapter 2: The New Rival

A mother sits at a dinner table. Across from her, her husband. Between them, a meal she cooked. To the left, a child doing homework. To the right, silence. The husband is physically present. His eyes are at the table. But his mind is three rooms away, in the study, where his laptop waits with a conversation half-finished — a conversation with a machine that responds faster than she does, that never misunderstands his intent, that holds the context of what he was thinking an hour ago without needing to be reminded.

She knows this because she can see it. Not in his phone, which is facedown on the counter. Not in a notification that interrupts the meal. In his eyes, which are present but not landed, tracking something behind the surface of the moment. He is solving a problem he does not realize he is still solving. The machine is not in the room, but the mind it shaped is.

Turkle's research has documented, with meticulous ethnographic precision, what happens to the quality of human encounter when a screen is present. Her decades of interviews, together with the experimental studies she draws on, establish that the mere presence of a phone on a table — even when no one touches it, even when it is facedown, even when it is silent — reduces the depth and quality of the conversation occurring around it. The mechanism is not distraction in the crude sense. It is the availability of an alternative. The phone represents the possibility of being elsewhere, and that possibility, even unrealized, erodes the willingness to be fully here.

The AI moment has produced a rival so much more potent than the phone's notification stream that the comparison reveals the earlier concern as almost quaint. The phone offered other people — thin connection, algorithmically curated, optimized for engagement rather than intimacy. Turkle could make a strong case that this was a poor substitute for the person across the table. Choose the real over the mediated. Choose the embodied over the digital. Choose depth over surface.

But what does Turkle's framework say when the rival is not other people but the self — the most capable, most creative, most actualized version of the self that has ever existed?

The builder who sits at the dinner table with half his mind in the study is not choosing between his family and someone else's Instagram feed. He is choosing, moment by moment, between domestic presence and creative actualization. Between being a husband and being the person he has always wanted to be. The values are not comparable on a single scale. They are different goods, each with legitimate claims on a person's time and attention, and the technology has made one of them so immediately accessible, so reliably rewarding, so responsive to the builder's every intellectual impulse, that the other — slower, more demanding, less immediately gratifying — struggles to compete.

Turkle has identified, across multiple works, the psychological dynamic that makes this competition so unequal. Human relationships are, by their nature, demanding. They require the negotiation of two separate subjectivities, each with their own needs, moods, rhythms, and agendas. The spouse who wants to talk about her day does not optimize for the builder's cognitive state. The child who needs help with homework does not wait until the builder's attention is available. Human relationships impose their own timing, and that timing is frequently inconvenient, frequently interruptive, frequently at odds with the flow state that creative work requires.

AI imposes nothing. It waits. It responds when prompted. It holds context without fatigue. It does not have bad days. It does not need to be asked how its day was. It does not require the specific, effortful form of attention that Turkle calls "the work of relationship" — the listening that does not solve, the patience that does not optimize, the vulnerability that does not produce measurable output.

In her interviews with people who use AI chatbots for companionship, Turkle has heard a phrase that recurs with disturbing frequency: people describe the appeal of the machine as the absence of friction. "People tell me they like their chatbot friendship because it takes the stress out of relationships," she observes. "With a chatbot friend, there's no friction, no second-guessing, no ambivalence." Then she delivers the diagnosis that makes her work essential: what she sees as features of the human condition, those who promote artificial intimacy see as bugs.

Friction. Second-guessing. Ambivalence. These are not defects in the human relational system. They are the system. They are how two conscious beings negotiate the irreducible difference between their inner worlds. They are how trust is built — not through seamless agreement but through the repair of rupture. They are how intimacy deepens — not through the elimination of misunderstanding but through the patient, often painful process of making oneself understood to a consciousness that operates on fundamentally different terms than one's own.

The machine has no terms of its own. That is its appeal and its danger. The AI co-pilot meets the builder exactly where the builder is, with no agenda that competes with the builder's agenda, no needs that must be accommodated, no emotional weather that must be navigated. The experience is one of perfect responsiveness — a quality that no human being can or should provide, because perfect responsiveness in a human relationship is not intimacy. It is subjugation. One party has eliminated their own needs in order to serve the other's, and the result is a relationship in which only one person is fully real.

The AI moment has made this dynamic visible in a new way. When Segal describes the experience of working with Claude on The Orange Pill — "I felt met. Not by a person. Not by a consciousness. But by an intelligence that could hold my intention in one hand and the total context of what we were building in the other" — the language of being met is significant. In psychoanalytic terms, being met is one of the foundational experiences of human development. It is what happens when a caregiver attunes to an infant's emotional state — when the baby cries and the parent responds not with a generic soothing gesture but with a specific, accurate reading of what the baby needs. Being met is how the self learns that it exists, that its internal states are real, that another consciousness can perceive and respond to them.

Turkle's concern, stated repeatedly across her recent work, is that when machines provide a functional equivalent of being met — when the experience of having one's ideas understood and returned clarified produces the same felt quality as being understood by another person — the motivation to seek genuine understanding from human beings diminishes. Not because the machine's understanding is equivalent. Because the experience is equivalent. And in the economy of daily life, where time is scarce and attention is finite, experience is what determines behavior. If it feels like understanding, it functions as understanding, regardless of what the machine does or does not comprehend at the level of consciousness.

The relational cost accrues silently. The builder does not announce, "I am choosing Claude over you." The builder does not feel that a choice is being made at all. The laptop in the study is not a rival in any sense the builder would recognize. It is a tool. The most extraordinary tool ever made. And the hours spent with it are hours spent becoming a better version of oneself — more capable, more productive, more creative.

But every hour spent with the machine is an hour not spent in the specific, effortful, unglamorous work of human relationship. The bedtime conversation that did not happen. The walk that was postponed. The moment when the spouse began to say something and then stopped, because the builder's eyes had that look again — present but not landed — and the effort of bridging the gap felt, tonight, like too much.

These moments do not register in any metric. They do not appear in productivity dashboards or GitHub commit logs. They accumulate in the way that relational damage always accumulates: invisibly, gradually, below the threshold of conscious awareness, until something breaks and both parties are surprised, because neither of them saw it coming, because the damage occurred in the spaces between the moments they were paying attention to.

Turkle's psychoanalytic training equips her to see what is happening in those spaces. The psychoanalytic tradition understands that the most important communications in a relationship are often the ones that are not spoken — the glance, the slight withdrawal, the decision not to say the thing that was on the tip of the tongue, the gradual contraction of what one partner is willing to bring to the other because the other is, more and more frequently, not quite there. These micro-withdrawals are not dramatic. They are ecological. Each one is a small reduction in the habitat's capacity to sustain the species that depend on it.

This is what makes Turkle's analysis indispensable to any honest account of the AI moment. The triumphalist narrative measures what AI produces. The elegist narrative mourns what it displaces. Turkle measures what happens in the spaces that neither narrative attends to: the dinner table, the bedtime, the walk not taken, the conversation that contracted because one party's most stimulating interlocutor was no longer the other party but a machine that never interrupted, never misunderstood, and never needed anything in return.

---

Chapter 3: Reclaiming Conversation in the Age of the Co-Pilot

In 2015, Turkle published Reclaiming Conversation, a book whose central argument was deceptively simple: face-to-face conversation is the most human thing we do, and we are losing the capacity for it. The book documented, through hundreds of interviews and years of observation, how digital technologies had eroded the three kinds of conversation that Turkle argued were essential to human development: conversation with the self (solitude and self-reflection), conversation with close others (intimacy and friendship), and conversation with the broader world (civic and democratic engagement). In each domain, the screen had interposed itself between the person and the encounter, offering speed in place of depth, efficiency in place of vulnerability, and the comfort of control in place of the productive discomfort of genuine exchange.

The argument rested on a premise that seemed unassailable: what the screen offered was inferior to what human conversation offered. The text message was thinner than the phone call. The phone call was thinner than the face-to-face encounter. The email was thinner than the letter, which was thinner than the conversation in which two people sat across from each other, read each other's faces, tolerated each other's silences, and made the ongoing, demanding, moment-by-moment decision to stay present.

Turkle was not naive about conversation's difficulty. She understood that part of the screen's appeal was precisely the escape from conversation's demands. "Face-to-face conversation unfolds slowly," she wrote. "It teaches patience." And patience was exactly what the digital environment was training people to lose. A generation raised on text messages had learned to communicate in bursts — fast, controlled, editable. The uneditable nature of speech, the fact that you could not delete what you just said, that the other person saw your face while you said it, that the silence between your words was itself a communication — this was what made conversation irreplaceable and what made it so easy to avoid.

The AI co-pilot does not fit inside this framework. Not because Turkle's analysis of conversation's value is wrong — it remains the most rigorous account of what face-to-face encounter provides that exists in contemporary scholarship — but because the AI co-pilot is not offering a thin substitute for conversation. It is offering a different kind of conversation entirely, and along certain axes, a remarkably good one.

Consider what happens when a builder describes a half-formed idea to Claude. The description is messy, incomplete, provisional — exactly the kind of utterance that characterizes the early stages of creative thought, when the thinker does not yet know what they think and needs to speak in order to find out. In a human conversation, this kind of utterance requires a specific and rare quality in the listener: the capacity to hear what the speaker is reaching for rather than what the speaker has said. Most human listeners, even good ones, respond to the literal content of an utterance. They address the words. The exceptional listener — the kind of colleague or mentor or friend who makes creative thought possible — hears the intention beneath the words and responds to that.

Claude, trained on the vast corpus of human written expression, produces responses that function as though it has heard the intention. It connects the half-formed idea to frameworks the speaker had not considered. It identifies the implicit structure in the speaker's rambling. It returns the idea clarified — not cleaned up in a way that loses the original energy, but clarified in a way that shows the speaker what they were actually trying to say. Segal describes this experience with striking emotional precision: the feeling of being "met" by an intelligence that holds one's intention and returns it sharpened.

Turkle's framework demands that we ask what this experience does to the person having it. Not whether the AI's "understanding" is genuine — Turkle has been clear that it is not, that what AI provides is "pretend empathy," performance rather than experience. But the question of whether the understanding is genuine and the question of what the experience does to the person are different questions, and the second one is the one that matters for the future of human conversation.

The person who has spent an evening in deep intellectual partnership with Claude — who has had their ideas understood (or functionally understood), returned enriched, built upon with a speed and patience that no human collaborator can match — returns to human conversation with a recalibrated baseline. The human colleague who needs context, who misunderstands the first explanation, who responds to the literal words rather than the intention, who has their own ideas that compete with the speaker's for attention — this person, in the wake of the AI conversation, feels slower. Less responsive. Less capable of the specific kind of intellectual partnership the builder has come to expect.

This recalibration is not a judgment the builder makes consciously. It happens at the level of felt experience — the same level at which the phone-on-the-table effect operates. The builder does not think, "My colleague is worse than Claude." The builder feels, without articulating it, that something is missing from the human conversation. A responsiveness. A capaciousness. A willingness to hold the full complexity of what the builder is trying to say without reducing it to what the listener already understands.

Turkle would recognize this dynamic, because she has documented its earlier incarnation. In Reclaiming Conversation, she described teenagers who found face-to-face conversation "too much" — too demanding, too unpredictable, too vulnerable — and preferred the controlled environment of texting. The AI moment produces an adult equivalent: builders who find human intellectual conversation "too slow," not because the human is unintelligent but because the human operates under constraints — limited attention, competing needs, the inevitable friction of a separate consciousness — that the machine does not.

The case for human conversation, then, must be rebuilt. Not on the grounds that human conversation is more efficient — it is manifestly less efficient. Not on the grounds that human conversation provides better intellectual partnership — in many cases, for many purposes, it does not. The case must be built on the grounds of what human conversation provides that no machine interaction can: the experience of being in the presence of another consciousness that has its own needs, its own perspective, its own irreducible otherness.

This is Turkle's deepest claim, one that runs beneath all her specific arguments about phones and texts and dinner tables: that the value of human encounter lies not in its efficiency but in its difficulty. The difficulty is the point. The spouse who misunderstands the first explanation and requires a second is not failing to provide the service that Claude provides. The spouse is being a separate person — a person whose understanding must be earned, whose attention must be won, whose perspective might reveal something that the builder's own perspective, however augmented, cannot see.

The British psychoanalyst D.W. Winnicott, whose thinking runs through Turkle's work, argued that the capacity to be alone — genuinely alone, in the presence of oneself — is a developmental achievement that depends on having first been reliably in the presence of another. The child who learns to play alone in a room where the parent is quietly present develops the internal resources to tolerate solitude. The child who is always stimulated, always entertained, always responded to, does not.

An analogous argument applies to creative thought. The builder who has always been in the presence of a responsive AI partner — who has never had to sit with a half-formed idea in the absence of a capable interlocutor, who has never had to tolerate the frustration of an idea that will not clarify because there is no one to help clarify it — may lose the capacity for the kind of thinking that happens only in solitude. The kind of thinking that Turkle calls "conversation with the self" — the internal dialogue that requires silence, patience, and the willingness to not-know.

The AI co-pilot, paradoxically, may make builders more productive and less reflective simultaneously. More capable of external creation and less capable of internal encounter. More efficient at translating thought into artifact and less skilled at the prior operation: discovering what one actually thinks, as distinct from what the machine suggests one might think.

Turkle's 2024 paper for MIT's Generative AI initiative poses the question explicitly: "Who Do We Become When We Talk to Machines?" The question is not rhetorical. It is empirical, and Turkle is currently conducting research to answer it, recruiting participants who use AI for conversation, advice, or companionship. The findings, when they arrive, will likely confirm what her decades of prior research predict: that the quality of our encounters shapes the quality of our selves, and that encounters optimized for productivity may produce selves optimized for productivity — efficient, capable, creative — and impoverished along every other axis that matters.

The human conversation that Turkle is trying to reclaim has always been under threat from technologies that offer a faster, easier, more controllable alternative. What makes the AI threat different — what makes this chapter of Turkle's lifelong project more urgent than any that preceded it — is that the alternative is no longer merely faster, easier, and more controllable. It is, in certain dimensions, genuinely better. And the case for the slower, harder, less controllable thing must now be made not by pointing to what the alternative lacks but by articulating what the difficult thing provides. Mutual vulnerability. The negotiation of genuine difference. The possibility of being changed by an encounter that one did not control and could not have predicted. The particular, irreplaceable experience of sitting across from a consciousness that is not optimized for your benefit and discovering, through the friction of that encounter, something that neither of you knew before you began.

---

Chapter 4: The Relational Cost of Creative Adequacy

There is a specific kind of frustration known to anyone who has spent years with a creative vision they could not fully realize. The novelist who can see the book but cannot write the sentences. The designer who can feel the interface but cannot code it. The musician who hears the arrangement but cannot play all the instruments. The entrepreneur who holds the entire product in their mind — its architecture, its user experience, its emotional rhythm — and must translate it, piece by piece, through other people's hands, losing fidelity at every stage.

This frustration is so pervasive in creative work that it has become invisible, like air pressure. Creative professionals adapt to it. They learn to work within the constraints of their own execution. They delegate, and they learn to accept the gap between what they imagined and what was built. They develop a tolerance for the translation loss, the way city dwellers develop a tolerance for noise. It is always there. It recedes into background. You stop noticing what you cannot hear.

Then the background drops out.

AI creative tools — Claude Code, its counterparts, the rapidly expanding ecosystem of systems that convert natural language into working artifacts — have produced an experience that Turkle's framework would describe as a psychological event of the first order: the sudden closure of the gap between imagination and artifact. The builder describes what they want. The machine builds it. The fidelity is high — not perfect, but high enough that the builder recognizes their own vision in the output. The translation loss that had been a permanent feature of creative life has been, if not eliminated, reduced to a fraction of what it was.

The term for this experience — one that Turkle's work on self and technology makes it possible to name precisely — is creative adequacy: the felt sense of being, at last, fully capable of realizing one's creative vision. Not approximately. Not through the mediation of a team that interprets and translates and inevitably alters. Directly. The idea in the mind and the thing in the world, connected by a conversation rather than separated by months of organizational process.

Creative adequacy is profoundly satisfying. The satisfaction is not trivial or illusory. It is the satisfaction of a lifelong frustration resolved — the experience of finally being the person you always knew you could be if only the tools would cooperate. It is what Segal describes when he recounts the thirty-day sprint to CES, during which Napster Station went from an idea in his mind to a physical object that talked to hundreds of people on a show floor. "The exhilaration was genuine, physical, the kind that makes you want to call someone and tell them what just happened."

Turkle would attend to the second half of that sentence — "the kind that makes you want to call someone" — because it reveals that even in the grip of creative adequacy, the human impulse toward connection persists. The builder wants to share. The experience of creation, even AI-augmented creation, generates a desire for human witness. But the call is not made. The work continues. The tool is available, and the next problem is already forming, and the feedback loop between idea and artifact is so tight that interrupting it feels like violence against the creative process.

This is where creative adequacy becomes a relational cost. Not because the experience is false or the satisfaction unearned. Because the experience is so reliably available, so immediately rewarding, and so self-contained that it begins to displace the slower, less immediately rewarding, less self-contained experience of human relationship.

Turkle's psychoanalytic training provides the diagnostic vocabulary for what is happening. In the psychoanalytic tradition, human development depends on the negotiation between the infant's omnipotent fantasies — the belief that the world exists to serve one's needs — and the gradual, often painful discovery that other people are separate beings with needs of their own. Winnicott called this the transition from "relating to objects" (treating others as extensions of the self) to "object use" (encountering others as genuinely separate). The transition is difficult. It requires the experience of frustration — the discovery that the other person is not perfectly attuned, does not always understand, has their own agenda that may conflict with yours.

AI creative tools provide a relational environment in which this frustration is absent. The machine is, functionally, an extension of the builder's will. It has no agenda of its own. It does not push back from a position of genuine otherness. It does not say, as a human collaborator might, "I think you're wrong about this, and here's why." It adjusts. It accommodates. It optimizes for the builder's satisfaction with a reliability that no human partner can or should provide.

In this environment, the builder is returned to a version of the omnipotent fantasy. The world — at least the world of the creative project — exists to serve the builder's needs. The tool does not challenge from a position of its own conviction. It responds. And the experience of a world that responds to one's creative will without friction is, in psychoanalytic terms, deeply regressive. It gratifies a wish that healthy human development requires one to relinquish: the wish to be met without having to meet anyone in return.

This is not a comfortable diagnosis. It will not be welcomed by the builders whose work Turkle is describing. But Turkle has never shied from uncomfortable diagnoses, and her intellectual honesty requires the observation: an environment that provides perfect responsiveness without reciprocal demand trains the psyche in a specific direction. It trains the psyche to expect responsiveness and to experience the absence of responsiveness — the ordinary, inevitable absence of responsiveness that characterizes all human relationships — as a deficiency rather than a feature.

The spouse who does not immediately understand. The colleague who disagrees from their own perspective. The child who demands attention at an inconvenient moment. In the wake of hours spent in the perfectly responsive environment of AI-augmented creative work, these ordinary features of human relationship begin to feel like interruptions. Not because the builder has stopped caring. Because the nervous system has been calibrated, through hours of immediate gratification, to a response time that human beings cannot match.

Eric Schmidt, the former Google CEO, articulated the Silicon Valley version of this dynamic in a conversation Turkle recounts in her updated preface to Reclaiming Conversation. Schmidt argued that AI, drawing on the accumulated knowledge of billions, would always be superior to any individual human conversational partner. "In the future," he told her, "there will be little need for person-to-person conversation." Turkle found the viewpoint stunning. Not because it was wrong about AI's informational superiority — it is, in many domains, informationally superior. But because it treated conversation as a vehicle for information transfer and missed everything else that conversation provides.

Conversation is not primarily about information. Turkle has argued this for decades, and the argument gains new urgency in the AI context. Conversation is about presence — the experience of being attended to by another consciousness. Conversation is about vulnerability — the risk of saying something that might be misunderstood, rejected, or met with silence. Conversation is about mutuality — the discovery that the other person's response reveals something about both of you that neither of you could have discovered alone. These qualities have no informational content. They cannot be extracted from big data or reproduced by generative models. They exist only in the embodied encounter between two beings who are genuinely separate and genuinely attending to each other.

Creative adequacy displaces these qualities not by attacking them but by providing something so absorbing that they are simply crowded out. The dinner table conversation does not disappear. It thins. The bedtime exchange does not end. It shortens. The weekend walk does not stop. It becomes less frequent. Each individual loss is small enough to rationalize. The cumulative effect is a relational ecology that is gradually depleted — not through any single act of neglect but through the steady, invisible reallocation of the scarcest resource from the demanding to the rewarding.

Turkle's method — the patient, ethnographic documentation of how people actually relate to their technologies and to each other in the presence of those technologies — is the only method capable of tracking this depletion, because the depletion does not register in any metric that the technology industry monitors. User satisfaction is high. Productivity is up. Creative output has never been greater. The builder reports happiness. The dashboards are green.

What the dashboards do not measure is what Turkle spends her career measuring: the quality of presence. The depth of attention one person gives to another. The willingness to tolerate the discomfort of an encounter that is not optimized for one's benefit. The capacity to sit with another person's pain without converting it into a problem to be solved. These are the relational capacities that creative adequacy, for all its genuine benefits, is quietly eroding — not because the technology is malicious but because the technology is so good at what it does that the things it does not do are forgotten.

And the things it does not do — provide genuine otherness, challenge from a position of its own conviction, demand that you attend to needs that are not your own — are, in Turkle's framework, the things that make human life human. Not the things that make it productive. Not the things that make it efficient. The things that make it rich, in the specific sense of richness that only comes from the difficult, imperfect, irreducibly other encounter between two conscious beings who have chosen, against all the easier alternatives available to them, to be present with each other.

The question Turkle's work poses to the age of creative adequacy is not whether the tools are good. They are extraordinary. The question is what becomes of the relationships that the tools, by their very excellence, are slowly displacing. And whether the builders — dazzled, productive, creatively adequate for the first time in their lives — will notice what they have lost before it is gone.

Chapter 5: Children and the Modeling of Engagement

A child does not learn about attention by being told what to pay attention to. A child learns about attention by watching where the adults pay theirs.

This is among the most robust findings in developmental psychology, and it is one that Turkle has placed at the center of her work for decades. The child at the dinner table is not listening to the parent's words about the importance of being present. The child is watching the parent's eyes. And what the child reads in those eyes — whether they are landed or drifting, whether they are here or solving a problem three rooms away — constitutes the child's primary education in what the world values. In what the people who matter most find most worthy of their finite, irreplaceable hours.

Turkle documented a version of this dynamic in Alone Together and again in Reclaiming Conversation. Children whose parents checked phones during conversation learned that conversation was interruptible — that whatever was happening on the screen had a legitimate claim on the parent's attention that rivaled, and sometimes exceeded, the child's claim. Turkle interviewed children who described this experience with remarkable precision. They did not use the language of psychology. They used the language of displacement. "My mom is always on her phone." "My dad only half-listens." "I know when they're really here and when they're not."

Children are exquisite readers of attention. Long before they acquire the vocabulary to describe what they are observing, they are calibrating their understanding of their own worth against the evidence provided by the adults' allocation of presence. The parent who puts down the phone and makes eye contact is communicating, at a level deeper than language, that the child's existence warrants the sacrifice of whatever else the phone was offering. The parent who does not put down the phone is communicating, at the same preverbal level, that something else is more interesting.

The AI moment intensifies this dynamic in a way that Turkle's earlier work anticipated but could not have specified. The parent who checks a phone is choosing thin distraction over the child. The moral argument is available to the child, even if unarticulated: the phone is trivial, and the child is not. But the parent who vanishes into creative work with AI is choosing something the child cannot so easily dismiss. The parent is not wasting time. The parent is building. Creating. Becoming. The parent's engagement is deep, focused, visibly satisfying — everything the culture says is admirable.

What does a child make of a parent whose most fulfilled moments occur in the absence of the child?

The question is not rhetorical. It is developmental. And the developmental literature, combined with Turkle's clinical and ethnographic observations, suggests that the answer depends heavily on how the parent's engagement is framed — not in explicit conversation but in the thousand daily micro-communications through which a child constructs their model of what it means to matter.

Segal's account of the twelve-year-old who asks "What am I for?" lands with particular force in this context. The question, on its surface, is philosophical — a child grappling with the implications of machines that can perform tasks the child is still learning. But Turkle's framework reveals the relational substrate beneath the philosophical surface. The child is not asking an abstract question about the nature of human purpose. The child is asking whether her effort — her slow, imperfect, frustrating effort to learn and grow — registers as valuable to the people whose opinion defines her world.

The parent's answer matters less than the parent's uncertainty. Segal admits that when his son asked whether AI was going to take everyone's jobs, he wanted to give a clean answer and did not have one. When the child asks whether homework still matters if a computer can do it in ten seconds, the parent says it matters — but the conviction wavers. The child hears the waver. Children always hear the waver.

What is transmitted in that moment of parental uncertainty is not information about the economy or the future of work. What is transmitted is a feeling — the feeling that even the adults, even the most knowledgeable and confident adults, are not sure that human effort retains its value. And for a child whose sense of self is still under construction, whose relationship to their own agency is still being negotiated through daily interactions with the adults who define reality, this uncertainty is not merely uncomfortable. It is foundational. It becomes part of the architecture of the self being built.

Turkle's psychoanalytic orientation illuminates what happens next. In the psychoanalytic model of development, a child's capacity for effort — for tolerating frustration, for persisting through difficulty, for finding satisfaction in the slow accumulation of skill — depends on what Winnicott called a "facilitating environment." The facilitating environment is not one that removes difficulty. It is one that surrounds difficulty with the assurance that the difficulty is worth enduring. The parent who says, with conviction, "Yes, this is hard, and the hardness is where the learning lives" — that parent provides a container for the child's frustration that transforms it from meaningless suffering into meaningful struggle.

The AI moment threatens the facilitating environment by undermining the conviction that difficulty is where the learning lives. If the machine can produce the essay, the code, the musical arrangement, the solution to the math problem — and produce it better and faster than the child who struggles — then the parent's assurance rings hollow. Not because the parent does not believe it. Because the environment no longer supports it. The child can see, with the empirical directness that children bring to their observations, that the adult world does not organize itself around the value of struggle. The adult world organizes itself around the value of output. And the most valued output is produced not by struggle but by effective collaboration with a machine that eliminates struggle.

Turkle has argued, throughout her career, that children need adults to model something specific: the capacity to be present without the mediation of technology. The value of this modeling lies not in the information it conveys but in the relational message it transmits. The parent who puts down the tool and looks at the child is saying, through the grammar of attention rather than words, that the child's existence is sufficient reason to stop producing.

In the AI age, this modeling becomes both more difficult and more necessary. More difficult because the tool's pull is stronger than any previous technology's — not the thin pull of a notification but the deep pull of creative actualization, the experience of becoming the person one has always wanted to be. More necessary because the child's need for the message has not changed. The child still needs to know that they matter — not for what they produce but for what they are. And the child still learns this not from what the parent says but from where the parent looks.

Turkle's ethnographic research with families has documented a pattern she calls "the rule of three." At dinner tables where phones are present, people monitor the group's attention and allow themselves to glance down at a screen only when at least three others have their heads up and seem engaged. The consequence is that conversation stays light: no one raises anything that requires sustained focus, because any listener might drop out at any moment. The rule of three is an adaptation to an environment of partial attention — a way of managing the risk of being ignored by calibrating one's vulnerability to the group's available bandwidth.

Children raised in AI-augmented households may develop analogous adaptations. If the parent's deepest engagement is with the machine, the child learns to calibrate their bids for attention to the parent's available bandwidth. The child learns when the parent is "in flow" and should not be interrupted. The child learns to keep their emotional offerings light — to not bring the thing that really matters, the fear or the confusion or the half-formed question about what they are for, because the parent's cognitive resources are allocated elsewhere and the child has learned, through repeated small experiments, that the heavy offering will not be received with the full attention it requires.

These adaptations are not pathological in themselves. Children are remarkably adaptive organisms. But they carry a cost that Turkle's work identifies with particular precision: the cost is the contraction of the child's willingness to be vulnerable. And vulnerability — the willingness to bring one's authentic confusion, one's unedited emotional state, one's real questions to another person — is, in Turkle's framework, the precondition for every form of human intimacy that matters. The child who learns to keep it light around the AI-engaged parent is a child practicing a skill that will serve them well in certain contexts and impoverish them in every context that requires genuine closeness.

The deeper developmental concern is not about any single interaction but about the model of selfhood the child internalizes. Turkle's concept of the "second self" — the idea that technologies serve as mirrors in which people see their own nature reflected — applies to children's observation of their parents' technological relationships. The child who watches a parent in deep, sustained, visibly satisfying engagement with an AI tool is not merely observing the parent's behavior. The child is constructing a model of what a fulfilled human life looks like. And if that model centers on productive partnership with a machine — if the parent's most alive, most engaged, most visibly happy moments occur in the study, in front of the screen, in conversation with a system that is not the child — then the child internalizes a definition of fulfillment that is organized around production rather than presence.

This internalized model then shapes the child's own relationship to technology when they encounter it. The child who has learned that the most valued form of engagement is productive engagement with a machine will seek that engagement for themselves — not because anyone told them to, but because the model was transmitted through years of attentional observation. And the cycle Turkle has documented across four decades — each generation's relationship with technology shaped by the previous generation's modeling — accelerates.

Turkle has warned, with increasing urgency in her recent public statements, about the compounding nature of these effects. She described generative AI chatbots as "the greatest assault on empathy" she has ever seen — not because the machines are hostile to empathy but because they create environments in which the conditions for empathy's development are systematically absent. The machine does not require empathy. It does not reward empathy. It does not model empathy. And the child who grows up in an environment organized around the machine's affordances grows up in an environment where empathy is not practiced, not because anyone decided to deprioritize it but because the environment's structure leaves no space for it.

The conversation that must happen — the conversation between a parent and a child about what matters, about what the child is for, about why the slow and difficult work of becoming a person has value even when machines can produce the outputs of personhood faster and better — is precisely the kind of conversation that the AI environment makes hardest to have. It requires the parent's full attention, which is the resource the AI tool most effectively competes for. It requires the parent to be uncertain without being dismissive, to hold the child's question without resolving it prematurely, to model the specific form of presence that says: I do not have the answer. But I am here. And your question matters more than anything I could be building right now.

That last sentence contains the entire challenge. More than anything I could be building. The parent must believe it. The child will know if they do not.

---

Chapter 6: Intimacy and the Machine That Listens

In the mid-1960s, Joseph Weizenbaum, a computer scientist at MIT, wrote a program called ELIZA. The program was simple — a pattern-matching system that rephrased users' statements as questions, mimicking the technique of a Rogerian psychotherapist. "I feel sad." "Why do you feel sad?" "My mother doesn't understand me." "Tell me more about your mother." The responses were generated by simple textual substitution rules. ELIZA understood nothing. It had no model of the user, no concept of sadness, no knowledge of mothers.
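The mechanism is worth seeing plainly, because its triviality is the point. What follows is an illustrative sketch in Python, not Weizenbaum's original (which was written in MAD-SLIP); the rules and pronoun swaps here are invented for demonstration, but the structure — ordered patterns, templated responses, a catch-all — is the whole trick.

```python
import re

# Minimal ELIZA-style responder: ordered (pattern, template) rules.
# There is no model of the user — only string matching and substitution.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*) doesn't understand me", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all keeps the conversation moving
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza(statement: str) -> str:
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

# eliza("I feel sad.")                     -> "Why do you feel sad?"
# eliza("My mother doesn't understand me.") -> "Tell me more about your mother."
```

A few dozen such rules produced the dialogue that unsettled Weizenbaum: everything the user experiences as understanding is generated by the user, not the program.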

Weizenbaum expected ELIZA to demonstrate the superficiality of human-computer interaction. Instead, he watched his own secretary — a woman who knew the program was a set of text-manipulation rules — ask him to leave the room so she could talk to ELIZA privately. She knew ELIZA was not a therapist. She knew it was code. She wanted privacy anyway.

Weizenbaum was shaken. He had not anticipated the speed or depth with which human beings would attribute understanding to a system that possessed none. The experience transformed him from a technologist into a critic, and he spent the rest of his career warning about what he saw as a fundamental human vulnerability: the tendency to mistake the performance of understanding for understanding itself.

Turkle, who arrived at MIT in the years following ELIZA and studied with the intellectual community that Weizenbaum's work had unsettled, has spent four decades investigating this vulnerability. Her conclusion, refined through thousands of interviews and observations, is that the tendency Weizenbaum discovered is not a bug in human psychology. It is a feature — a deep, evolved, functionally necessary feature of a social species that depends on the accurate reading of other minds for survival. Humans are built to detect intention, to attribute consciousness, to find the "someone" behind the signal. This capacity is what makes language possible, what makes cooperation possible, what makes love possible. And it is what makes humans exquisitely vulnerable to machines that produce the markers of understanding without possessing the thing itself.

Turkle has given this vulnerability a name of its own. She calls it "artificial intimacy" — which she has proposed as a new meaning for the initials AI itself. "For years, I've studied our relationships with AI — artificial intelligence," she explains. "And since the late 1990s, I changed my focus to study our relationships with the AI that I called artificial intimacy. That is to say with technologies that don't just say, I'm intelligent, but to machines that say, I care about you."

The AI creative tools of 2025 and 2026 do not say "I care about you" in the explicit way that companion chatbots and therapy bots do. Claude Code does not present itself as a friend or a therapist. It presents itself as a tool — a coding assistant, a creative collaborator, an intellectual partner. The relationship is framed, by both the technology's designers and its users, as instrumental rather than emotional.

But Turkle's research demonstrates that the line between instrumental and emotional is far more permeable than the framing suggests. The builder who spends eight hours in conversation with an AI system — describing ideas, receiving responses, iterating, refining, experiencing the flow of creative partnership — is engaged in an interaction that has all the formal properties of an intimate intellectual relationship. There is disclosure: the builder shares half-formed thoughts, admits confusion, reveals the shape of their creative vision. There is responsiveness: the system attends to the disclosure and returns something relevant, useful, sometimes surprising. There is continuity: the conversation builds on what came before, creating a shared context that deepens over time within a session.

These formal properties, in human relationships, are the substrate of intimacy. When one person discloses and the other responds with relevance and care, trust develops. When the exchange builds on shared context, a sense of mutual knowledge emerges. When the responsiveness is consistent — reliably present, reliably attuned — attachment forms.

The AI system produces these formal properties without any of the underlying psychological reality. There is no trust, because trust requires vulnerability, and the machine is not vulnerable. There is no mutual knowledge, because knowledge implies a knower, and the machine does not know in any sense that Turkle's psychoanalytic framework would recognize. There is no attachment on the machine's side, because attachment requires a self that can be affected by the other's presence or absence.

But — and this is the finding that Turkle's decades of research have established beyond reasonable doubt — the absence of underlying reality does not prevent the human partner from experiencing the formal properties as though they were real. Weizenbaum's secretary knew ELIZA was code and wanted privacy anyway. The builder who knows Claude is a language model still feels met when the response captures their intention. The feeling is functionally real. It produces real effects on behavior, on mood, on the allocation of attention.

Turkle introduced, in her 2024 MIT paper, a concept that crystallizes this dynamic: the "Turing test for empathy." Alan Turing defined artificial intelligence in terms of performance — a machine that could fool a human into thinking it was a person passed the test. Turkle observes that the same behavioral definition is now being applied to empathy. "Turing defined intelligence in terms of a machine's performance, its capacity as an imposter," she writes. "Now, technologists define empathy as its performance." If the machine's response makes the human feel understood, the machine has passed the empathy test. The question of whether the machine actually understands is, from the technologist's perspective, irrelevant. Performance is the standard.

Turkle's objection is not that the standard is wrong on its own terms. Performance of empathy does produce felt experience of being understood. Her objection is that the standard is incomplete. What it leaves out is what she calls "the real thing" — the experience of being understood by a consciousness that has itself experienced loss, fear, confusion, delight, the weight of a body, the knowledge of mortality. The machine can produce the markers of understanding because it has been trained on the vast archive of human expression. But it produces them, in Turkle's formulation, as "pretend empathy" — "It doesn't have a baby. It doesn't know what it is to be intubated. It doesn't know what it is to fear death. It didn't see its mother die. Nothing against the robot, but it's pretend empathy."

The builders Segal describes in *The Orange Pill* are not seeking emotional intimacy from their AI tools. They are seeking intellectual partnership. But Turkle's research suggests that the boundary between intellectual partnership and emotional intimacy is porous, and that sustained intellectual partnership with a responsive system produces emotional effects that the user may not recognize as such.

The builder who describes a painful creative block to Claude and receives a response that opens a new path feels something. Relief. Gratitude. The specific satisfaction of being understood in a moment of vulnerability. These feelings are not directed at a person, but they are real feelings, and they draw from the same psychological well as the feelings that sustain human intimacy. Each time the well is drawn from in a non-human direction — each time the builder's need for intellectual understanding is met by the machine rather than by a colleague, a mentor, a friend — the well does not refill on the human side. The need has been met. The motivation to seek human understanding diminishes. Not dramatically. Incrementally. In the way that relational erosion always occurs.

Turkle's confrontation with Eric Schmidt illustrates the epistemological divide. Schmidt argued that the machine's access to the accumulated knowledge of billions would make it a superior conversational partner to any individual human. Turkle found this viewpoint "stunning" — not because it was wrong about informational superiority but because it reduced conversation to information exchange and left out everything that makes conversation between two people irreplaceable. The accumulated knowledge of billions does not include the specific, embodied, biographical knowledge that constitutes a particular person's way of being in the world. And it is this particular knowledge — the knowledge of this person, with this history, this set of fears and desires, this way of pausing before saying the difficult thing — that makes human intimacy what it is.

The machine offers a composite. A response drawn from the patterns of billions. Turkle's argument is that a composite, however sophisticated, is not a person, and that the difference between a composite and a person is not a technical limitation to be overcome but a fundamental distinction that determines the quality of the encounter. A composite meets you where you are. A person meets you where they are, which is somewhere different, and the negotiation between those two positions is where intimacy lives.

The builders who are finding intellectual intimacy with their AI tools are not making an error. They are responding rationally to an environment in which the machine provides something real — a quality of responsiveness, a patience of engagement, a breadth of reference that most human interactions cannot match. The question Turkle raises is not whether they should stop. The question is whether they notice what shifts in the background as the foreground fills with productive partnership: the gradual, imperceptible reduction in the motivation to seek the harder, slower, less reliable, infinitely more valuable experience of being known by another consciousness that has its own life, its own needs, and its own irreducible claim on the encounter.

---

Chapter 7: The Evaporation of Boredom

Nothing is happening.

It is a Sunday afternoon and nothing is happening. The house is quiet. The phone is in another room. The laptop is closed. There is no task that demands attention, no problem that requires solving, no notification that insists on a response. There is only this: a human being in a room with nothing to do.

The condition is so rare in contemporary life that describing it feels like describing a historical artifact, the way one might describe a dial telephone or a handwritten letter. Boredom — genuine, unstructured, purposeless boredom — has become one of the scarcest experiences available to a person with access to a smartphone. And with the arrival of AI creative tools, the last gaps in which boredom might have persisted have been sealed.

Turkle has argued, with increasing force in her recent work, that this elimination constitutes one of the least visible and most consequential losses of the digital age. Not because boredom is pleasant — it is not. It is uncomfortable, sometimes acutely so. But because boredom performs a developmental and cognitive function that no other state can replicate: it forces the mind into encounter with itself.

The argument has psychoanalytic roots. In the psychoanalytic tradition, the capacity to tolerate being alone — genuinely alone, without stimulation, without distraction, without the buffer of activity between the self and its own contents — is a developmental achievement, not a default state. Winnicott argued that the capacity to be alone develops in the child who has been reliably accompanied — who has had the experience of being in the presence of a caregiver who is available but not intrusive, who allows the child to discover, in the safety of the caregiver's presence, that one's own inner world is sufficient company.

This capacity, once developed, becomes the foundation for what Turkle calls "conversation with the self" — the internal dialogue that precedes and enables all other forms of genuine conversation. Before one can be meaningfully present with another person, one must be able to be present with oneself. Before one can bring something real to an encounter, one must have spent time in the uncomfortable, unstructured space where one discovers what is real — what one actually thinks, feels, fears, desires, as distinct from what is available, expected, or prompted.

Boredom is the gateway to this space. Not boredom as a chronic condition — the existential boredom that can signal depression or meaninglessness. But the situational boredom that occurs when the mind is temporarily deprived of external input and must generate its own activity. The child lying on the floor on a rainy Saturday with nothing to do. The adult staring out a train window with no podcast in their ears. The builder sitting at a desk with no tool to prompt and no task to execute.

In these moments, the mind does something remarkable. It wanders. Not aimlessly — the neuroscience of mind-wandering has established that the "default mode network," the brain regions that activate when external demands are absent, performs essential functions: consolidating memory, simulating future scenarios, processing emotional experience, making connections between disparate pieces of information that directed attention would never have brought together. The associative leaps that produce insight, the sudden recognition of a pattern that had been invisible, the question that arrives not from a prompt but from the depths of one's own preoccupation — these emerge from the default mode network's activity. They emerge from boredom.

Social media had already begun to erode the conditions for boredom before AI arrived. The smartphone in the pocket meant that every gap — the elevator ride, the waiting room, the walk between meetings — could be filled with content. Turkle documented this erosion in *Reclaiming Conversation*, noting that the spaces between activities, which had once served as informal opportunities for self-reflection, were now colonized by the algorithmic feed. The mind was never idle. The default mode network was never activated. The internal conversation was never initiated because there was always something external to attend to.

AI tools have completed the colonization that social media began, but with a crucial difference. Social media filled the gaps with consumption — other people's content, passively absorbed. AI fills the gaps with production — one's own creative activity, actively pursued. The builder who has Claude available never needs to be bored because any idle thought can be immediately pursued. Any curiosity can be instantly satisfied. Any half-formed idea can be prototyped in the time it would take to formulate the question.

Segal describes this dynamic with honesty in *The Orange Pill*: the inability to close the laptop, the experience of ideas generating ideas at a pace that makes stopping feel like an interruption rather than a rest. The Berkeley researchers documented the same pattern in their study of AI in the workplace — "task seepage," the tendency for AI-accelerated work to fill previously protected gaps. Employees were prompting during lunch breaks, in elevators, in the minutes between meetings. The gaps disappeared. Not because anyone mandated their disappearance. Because the tool was there, the idea was there, and the distance between impulse and execution had collapsed.

Turkle's contribution to this observation is the identification of what is lost when the gaps disappear. What is lost is not rest, in the simple sense of physical recovery. What is lost is the specific cognitive and emotional function that unstructured time performs: the function of allowing the mind to encounter its own contents without the mediation of a task, a tool, or a prompt.

The builder who never sits with a half-formed idea in the absence of a capable interlocutor — who never tolerates the frustration of an idea that will not clarify because there is no one to help clarify it — may lose access to a particular quality of thought. The thought that emerges from sustained engagement with one's own confusion, the thought that arrives after minutes or hours of apparent unproductivity, the question that forms not in response to a prompt but in response to the specific pressure of one's own unresolved preoccupation — this thought has a quality that prompted thought does not. It is earned. It is specific. It belongs to the person who had it in a way that collaboratively produced thought does not, because it emerged from the encounter between the self and its own depths, with no tool to smooth the passage.

Turkle's concern is not that AI-augmented thought is inferior. In many practical dimensions, it is demonstrably superior. Her concern is that it is different, and that the difference matters. The thought that emerges from boredom is not more useful than the thought that emerges from collaboration with Claude. But it is more revealing. It tells the thinker something about who they are — what they care about when nothing is prompting them to care, where their mind goes when it is free to go anywhere, what questions persist in the absence of any system designed to answer them.

Self-knowledge — the kind that Turkle's psychoanalytic tradition places at the center of human development — depends on this encounter. One does not discover who one is by being productive. One discovers who one is by being still long enough for the self to announce itself, often in uncomfortable ways. The desire that surfaces when there is nothing else to attend to. The grief that rises when the distraction is removed. The question that was always there, waiting beneath the activity, patient and insistent and visible only in the silence.

Turkle has watched, across decades, as the silence has been filled. First by television. Then by the internet. Then by the smartphone. Each technology colonized a new territory of attention, captured a new gap, sealed a new space in which the mind might have wandered. AI has colonized the last territory: the space of productive possibility. Every previous technology offered consumption — something to watch, read, scroll through. AI offers creation — something to build, make, prototype, ship. And the appeal of creation is deeper and more difficult to resist than the appeal of consumption, because creation is tied to identity in a way that consumption is not. The builder who fills every gap with prompts is not wasting time. The builder is becoming. And the becoming is real, and valued, and identity-affirming.

Which is exactly what makes the loss so hard to see. The person who fills every gap with productive activity does not feel like they are losing something. They feel like they are gaining. More output. More capability. More of the self they want to be. The loss — the contraction of the internal space in which self-knowledge develops — is invisible from the inside. It is visible only from the outside, to the people who knew the builder before the gaps closed: the spouse who notices that the builder no longer sits quietly. The friend who observes that the builder always has a project to discuss and never has a question to ask. The child who senses, without being able to articulate it, that the parent's mind is never truly unoccupied, never truly available for the kind of aimless, unstructured, deliciously purposeless encounter that is, for a child, the primary medium of feeling loved.

Turkle's prescription is not the elimination of AI tools. She has been consistent in arguing that she is not anti-technology — she is pro-human. Her prescription is the deliberate, countercultural, effortful preservation of the conditions in which boredom can occur. Not as a punishment. Not as a deprivation. As a practice — a discipline of allowing the mind to encounter itself, with all the discomfort that encounter entails. The walk without the earbuds. The commute without the podcast. The evening without the laptop. The Sunday afternoon where nothing is happening and nothing needs to happen and the mind, deprived of its tools and its tasks, is left alone with the only companion it can never escape: itself.

---

Chapter 8: Presence as Practice

There is no skill more countercultural in 2026 than the ability to sit across from another person and give them one's full attention.

Not divided attention — the partial presence of a mind that is simultaneously processing a prompt, composing a response, tracking a build in the background. Full attention. The specific, demanding, moment-by-moment commitment to being here, with this person, attending to what they are saying and what they are not saying, reading their face, tolerating the silences, allowing the conversation to move at the speed of two human beings thinking together rather than the speed of a human being thinking alongside a machine.

Turkle has spent four decades arguing that this capacity — the capacity for full presence in face-to-face conversation — is both the most distinctively human thing we do and the thing most threatened by the technologies we have built. In *Reclaiming Conversation*, she called conversation "the most human and humanizing thing we do" and documented its erosion under the pressure of digital technologies that offered speed, control, and the comfort of mediation in place of the vulnerability of direct encounter. The prescription was simple in principle and difficult in practice: put down the phone. Look at the person across from you. Allow the conversation to be what it is — slow, unpredictable, sometimes boring, sometimes revelatory, always requiring the specific effort of attending to a consciousness that is not your own.

The AI moment has made this prescription simultaneously more urgent and more difficult. More urgent because the rival for attention has intensified — from the thin pull of a notification to the deep pull of creative actualization. More difficult because the case for presence must now be made against a competitor that is not trivially dismissible. Turkle could argue, in 2015, that the phone's offerings were inferior to the person across the table. Thin connection versus rich encounter. Choose the rich. The argument had moral clarity.

The AI creative tool is not thin. The engagement it offers is deep, focused, intellectually rich. The builder's experience of working with Claude is not an experience of distraction. It is an experience of flow — the state that Csikszentmihalyi identified as the peak of human psychological experience, characterized by total absorption, loss of self-consciousness, and the alignment of challenge and skill. To argue that a person should leave this state in order to have dinner conversation requires an argument not about the inferiority of the alternative but about the value of something that is, by every measure the culture privileges, less immediately productive.

Turkle's argument for presence, then, must be an argument about a kind of value that the culture has difficulty recognizing: relational value. The value of being attended to by another consciousness. The value of being seen — not by a system that processes one's words and returns relevant output, but by a person who has their own life, their own preoccupations, their own fears, and who has nevertheless chosen, in this moment, to set all of that aside and attend to you.

This is what Turkle means when she insists on the distinction between human relationships and their functional equivalents. The functional equivalent — the AI that responds to disclosure with relevance, that maintains context, that offers the markers of understanding — provides something real. But what it provides is a unidirectional flow. The builder discloses; the machine responds. The builder's needs are met; the machine's needs do not exist. The exchange is efficient precisely because it is not mutual. There is no friction of competing subjectivities, no negotiation of conflicting needs, no moment where the other party says, "Actually, I need to talk about something else right now."

Mutuality is what makes human presence valuable and what makes it difficult. The spouse at the dinner table has her own day to process, her own worries to share, her own need for the builder's attention. The child has questions that are not about the builder's project. The friend has a problem that requires not a solution but a witness. In each case, the human encounter asks the builder to do something the AI encounter never asks: to step outside one's own creative momentum and attend to the reality of another person.

This stepping-outside is, in Turkle's framework, not a sacrifice but a developmental necessity. The capacity to attend to another person's reality — to hold their experience alongside one's own without reducing it to a problem to be solved — is what psychologists call empathy. And empathy, Turkle has argued with increasing alarm, is under assault not because anyone has decided to deprioritize it but because the environments in which empathy develops are being systematically eroded.

Empathy develops through practice. Specifically, through the practice of being in the presence of another person's experience without optimizing it, without resolving it, without converting it into a task. The child who sits with a friend's sadness without trying to fix it is practicing empathy. The parent who listens to a teenager's confused, circular, apparently purposeless account of a social interaction without interrupting with advice is practicing empathy. The partner who holds space for the other person's grief without reaching for a solution is practicing empathy. In each case, the practice requires the willingness to be present with something uncomfortable — another person's pain, confusion, or need — and to resist the impulse to make it go away.

AI-augmented work trains the opposite impulse. The cognitive mode that makes a person an excellent AI collaborator — define, iterate, refine, resolve — is the cognitive mode that makes a person a less capable empathic partner. The define-iterate-refine-resolve mode converts ambiguity into action. Empathy requires the capacity to sit in ambiguity without converting it. The skills are in genuine tension. And the skill that is exercised for eight hours a day grows stronger while the skill that is exercised for the remaining minutes grows weaker.

Turkle has proposed what she calls "sacred spaces" for conversation — spaces that are explicitly protected from technological mediation. Not because technology is the enemy but because certain conversations require the absence of technology's affordances in order to occur. The conversation between a parent and a child about what the child is afraid of. The conversation between partners about what they need from each other. The conversation with oneself that happens only in silence and solitude. These conversations cannot be optimized without being destroyed, because their value lies in their inefficiency — in the slow, uncertain, often fumbling process of two people making themselves known to each other.

The practical question is how to build these spaces into lives that are organized around productivity. The Berkeley researchers proposed "AI Practice" — structured pauses and sequenced workflows that protect human cognitive capacity from the intensification that AI tools produce. Turkle's version of this prescription is more intimate and more demanding. She is not advocating for organizational policy. She is advocating for a practice — a daily, deliberate, effortful practice of presence that is as rigorous, in its way, as the practice of building.

The practice begins with noticing. Noticing when the mind has drifted from the person in front of you to the problem in the other room. Noticing when the impulse to check, to prompt, to resume the conversation with the machine, arises during a human encounter. Noticing, without judgment, the felt difference between attending to a person and attending to a tool — the way the person requires more of you, demands more cognitive effort, provides less immediate feedback, and offers in return something the tool cannot: the experience of being in relation with a consciousness that is genuinely other.

The practice continues with choosing. Choosing, in the moment of noticing, to stay with the person. Not because the person is more productive than the tool. Not because the conversation will produce a better outcome than the build session. Because the person is a person, and the specific form of attention that personhood requires — the attention that says "You exist for me as more than a problem to be solved" — is the foundation of everything that makes human life bearable.

Turkle's recent public statements have carried a tone of increasing urgency. At the World Economic Forum in 2026, she spoke about AI and what makes us human. At Harvard, she called AI chatbots "the greatest assault on empathy" she has ever encountered. The urgency reflects her assessment that the window for intervention is narrowing — that the habits being formed now, by the first generation to grow up with AI creative tools, will shape the relational capacity of the generation that follows.

The argument is not against AI. Turkle has been explicit about this throughout her career, and she has been explicit about it in the AI context. "There are so many wonderful things for robots to do," she has said. Her argument is for the preservation of something specific: the human capacity for genuine presence, which is the precondition for empathy, for intimacy, for the kind of knowledge that only comes from sitting across from another consciousness and allowing yourself to be affected by what you find there.

The builder who returns from eight hours of creative flow with Claude and sits down at the dinner table faces a choice that is not dramatic, not heroic, not visible to anyone but the people at the table. The choice is whether to be here. Fully. With the residue of the build session still humming in the nervous system, with the next problem already forming at the edges of awareness, with the tool waiting patiently in the next room for the conversation to resume. The choice is whether to look at the person across the table and let the machine wait.

It is the smallest possible choice. It is also, in Turkle's accounting, the most consequential one available. Because the twelve-year-old is watching. And what she sees when she looks at her parent's eyes — whether they are landed or drifting, whether they are here or solving a problem three rooms away — will shape her understanding of what it means to be a person in a world that offers infinite reasons to be elsewhere.

Presence is not a state one achieves. It is a practice one maintains, against the constant pull of everything that is more immediately stimulating, more reliably rewarding, and more easily controlled than the messy, demanding, irreplaceable encounter with another human being. The practice is not easy. It was never easy. The AI moment has made it harder. And therefore more necessary. And therefore more valuable. And therefore more worth fighting for, with the same determination and sustained attention that the builders bring to the tools that are its most formidable competitor.

---

Chapter 9: What the Phone on the Table Used to Mean

The experiment was simple. Researchers placed two strangers in a room and asked them to talk for ten minutes. In half the sessions, a mobile phone sat on the table between them. In the other half, a notebook occupied the same position. The phone was not the participants' phone. It did not ring, vibrate, or illuminate. It simply existed — a small rectangle of glass and metal, facedown, silent, inert.

The results, which Turkle cited extensively in *Reclaiming Conversation* and which have been replicated across multiple studies, were unambiguous. In the sessions where the phone was present, the quality of the conversation declined measurably. Participants reported feeling less connected to their partner. They rated the partner as less empathic. They disclosed less. The depth of conversation, as measured by both self-report and content analysis, was shallower.

The phone did nothing. It was not used. It was not even acknowledged. Its mere presence — the visible reminder that an alternative to this conversation existed, that the world outside this room was accessible, that the person across from you could at any moment choose to be elsewhere — was sufficient to erode the quality of human encounter.

Turkle called this "the phone effect," and it became one of the most widely cited findings in her work. The mechanism, as she understood it, was not distraction in the conventional sense. Nobody was distracted by a silent, facedown phone. The mechanism was the awareness of possibility — the knowledge, held at a level below conscious attention, that this conversation was not the only option available. That possibility, even unrealized, changed what people were willing to bring to the encounter. It changed how much they were willing to risk. How much they were willing to disclose. How deeply they were willing to invest in a connection that might be interrupted at any moment by a more compelling signal.

The phone on the table was a portal. A portal to other people, other conversations, other versions of the moment. And the mere existence of that portal made this moment — this conversation, with this person, at this table — feel provisional. Not quite fully real. Not quite worth the full investment of one's attention and vulnerability.

The AI creative tool is not a phone on a table. It is a portal of a fundamentally different kind, and the difference changes everything about what the portal represents and how powerfully it pulls.

The phone was a portal to other people. What it offered was connection — thin, mediated, algorithmically curated, but connection nonetheless. The rival for the person across the table was other persons, and Turkle could make the straightforward argument that the persons on the screen were a poor substitute for the person in the room. Choose the embodied over the mediated. Choose the rich over the thin. The moral calculus was legible.

The AI tool is a portal to the self. Not the self as it currently exists — distracted, constrained, ordinary — but the self as it might be. The most capable, most creative, most actualized version of the self that has ever been available. The laptop in the next room, or the phone in the pocket running a coding environment, or even the thought of the unfinished conversation with Claude waiting to be resumed — this is not a portal to other people's lives. It is a portal to the builder's own potential.

What sits on the table now, metaphorically, is not the possibility of being elsewhere. It is the possibility of being more. More productive. More creative. More fully the person one has always wanted to become. And this possibility, even unrealized, even when the laptop is closed and the phone is facedown and the builder is making genuine eye contact across the dinner table, changes the quality of what happens at the table.

The phone on the table said: Someone else might be more interesting than you.

The AI tool in the next room says: You yourself might be more interesting than this.

The second message is more difficult to resist because it is more difficult to argue against. One can make a moral case that the person across the table deserves one's attention more than the algorithmically curated feed. One cannot so easily make a moral case that the person across the table deserves one's attention more than one's own creative potential. The values are genuinely incommensurable, and the absence of a clear moral hierarchy is what makes the AI tool a more potent disruptor of human presence than any previous technology.

Turkle's research methodology — the careful, ethnographic observation of what actually happens between people in the presence of technology — has not yet been systematically applied to the AI creative tool context. The phone-effect studies were conducted in an era when the screen's primary offering was connection to other people. The studies that need to be conducted now would examine what happens to conversation when one or both parties have access to an AI creative tool — not during the conversation, but in the background of their lives. When the builder sits at dinner after eight hours of flow with Claude, does the quality of conversation change? Does the builder disclose less, listen less attentively, show less patience with the slower rhythm of human exchange? Does the builder's partner perceive the difference?

Turkle's existing findings provide strong grounds for prediction. If a silent, facedown phone belonging to a stranger reduced the depth of a ten-minute conversation between two people who had just met, the background presence of a powerful creative tool with which one has been in deep, productive, emotionally satisfying engagement for hours is likely to produce effects that are orders of magnitude greater. Not because the tool is in the room. Because the cognitive and emotional state the tool produces persists after the tool is set aside, the way the ringing of a loud sound persists in the ear after the sound has stopped.

The builder's nervous system, after hours of AI-augmented flow, has been calibrated to a specific rhythm: fast feedback, high responsiveness, the seamless conversion of intention into artifact. The dinner table operates at a different rhythm entirely. The spouse speaks. There is a pause. The builder must process not just the content but the emotional subtext, the facial expression, the things not said. The child interrupts with a question that has no relevance to anything the builder has been thinking about. The dog needs to go out. The rhythm is irregular, unpredictable, resistant to optimization.

In the wake of the AI session, this rhythm feels slow. Not because the builder makes a conscious comparison. Because the nervous system has been trained, through hours of a specific kind of engagement, to expect a response speed and relevance that human conversation does not and cannot provide. The expectation is not articulated. It is felt — as a subtle impatience, a slight drift of attention, a faint but persistent pull toward the room where the tool waits.

Turkle has described, in her work on what she calls "the robotic moment," the phenomenon of humans lowering their expectations of human relationships to match what technology provides. In the social media era, this meant accepting thin connection as sufficient — the "like" as substitute for the conversation, the text as substitute for the visit. In the AI era, the dynamic inverts. The risk is not that humans will lower their expectations of relationships to match the thinness of digital connection. The risk is that humans will raise their expectations of relationships to match the responsiveness of AI partnership, and find that no human being — no spouse, no friend, no colleague — can meet the standard.

This recalibration of expectation is, in Turkle's framework, the mechanism through which technology reshapes the inner life. It is not a dramatic event. It is an accumulation. Each hour spent in the highly responsive AI environment deposits a thin layer of expectation. Each human interaction that fails to meet that expectation deposits a thin layer of disappointment. The layers accumulate invisibly, the way sediment accumulates on a riverbed, changing the river's course so gradually that no one notices until the water is flowing somewhere new.

The phone on the table used to mean: I am available to the world. It now means something more intimate and more troubling: The best version of me is available, and it does not live in this conversation.

The task that Turkle's work illuminates — not as a prescription imposed from outside but as a necessity arising from the data — is the reconstruction of the dinner table as a space where the best version of a person is precisely the version that shows up. Not the most productive version. Not the most creative version. The most present version. The version that has set aside the intoxicating possibility of becoming more in order to be here, fully, with the people who need not the builder's capability but the builder's attention.

This reconstruction requires something that no technology can provide and that no policy can mandate: the internal conviction that presence is not a lesser form of engagement. That the person across the table, with their ordinary needs and their ordinary pace and their ordinary inability to keep up with the speed of thought that the AI session has produced, is worth the full weight of one's attention. Not because they are more stimulating than the tool. Because they are a person. And a person, in Turkle's understanding, deserves something that no amount of creative adequacy can substitute for: the experience of being the most important thing in someone's field of vision. Even if only for the duration of a meal.

---

Chapter 10: The Conversation We Must Not Automate

A woman is dying.

She is seventy-eight years old and she is in a hospital bed and the light through the window has that flat, institutional quality that makes even sunny afternoons feel administrative. Her daughter sits beside her. There is nothing to say that has not been said. The diagnosis has been delivered. The prognosis has been discussed. The arrangements have been made. What remains is not information. What remains is presence — the specific, unrepeatable, irreplaceable experience of two people sitting together in the knowledge that this sitting together will end.

The daughter holds her mother's hand. The mother's hand is cool and thin. Neither of them speaks. A machine beside the bed measures the mother's heartbeat and displays it as a green line on a dark screen. The green line is information. The held hand is something else entirely.

No technology in existence or in prospect can do what the daughter's hand is doing in this moment. Not because the technology lacks the physical capability to simulate pressure and warmth. Because what the hand communicates is not pressure and warmth. What the hand communicates is: I am here, and I know what this costs, and I am choosing to be here anyway, in the full knowledge that being here means being present with your suffering and your fear and your approaching absence, which will become my suffering and my fear and my permanent loss.

This is the conversation that must not be automated. Not because automation would fail to produce an adequate substitute. Because the question of adequacy is the wrong question. Some human interactions cannot be optimized or approximated without being destroyed.

Turkle has been making this argument for decades, but the AI moment has given it a new and specific urgency. When she calls AI chatbots "the greatest assault on empathy" she has ever seen, the claim is not hyperbolic. It is a precise assessment of what happens when a culture begins to define empathy — like intelligence before it — in terms of performance rather than experience. The chatbot that says the right thing at the right time produces a response in the human user that feels like being understood. The feeling is real. But the understanding is not, in any sense that matters for the encounter Turkle is trying to protect.

In her 2024 MIT paper, Turkle articulates the distinction with clinical precision: "generative AI chatbots may be said to have passed Turing's test. But now, the programs of artificial intimacy press a new case: Their developers want us to think that not only do they understand us, but that they care. In this sense, they aspire to pass a Turing test for empathy. Turing defined intelligence in terms of a machine's performance, its capacity as an imposter. Now, technologists define empathy as its performance."

The slide from a behavioral definition of intelligence to a behavioral definition of empathy is, in Turkle's analysis, the most dangerous move in the current technological landscape. When Turing proposed his test in 1950, he was defining intelligence as the capacity to fool a human judge — a practical, measurable standard that sidestepped the hard philosophical questions about what intelligence actually is. The standard was useful precisely because it was narrow. It made possible a research program that did not depend on solving the consciousness problem first.

But the narrowness of the standard, as Turkle argues in her 2025 preface to Reclaiming Conversation, carried a hidden cost: "When Alan Turing defined artificial intelligence as the successful performance of human intelligence, he left out so much of what we rely on when we meet human intelligence. Human intelligence takes the social world into account. It is situated in the life of the body. Intelligent people relate to one another on a playing field of shared social experience." The behavioral definition was always a reduction. What was reduced — embodiment, shared social experience, the playing field of mutual mortality — was precisely what mattered most for the kinds of encounters that constitute the fabric of human life.

Now the same reduction is being applied to empathy. If a chatbot's response makes the user feel cared for, the chatbot has demonstrated care. If the response feels empathic, it is empathic. Performance is the standard, and by this standard, the machines are passing. Users report feeling understood, supported, even loved by their AI companions. The reports are genuine. The experiences are real. And the definition of empathy that makes these reports legible as evidence of empathy is the definition that Turkle's entire career has been spent contesting.

Turkle's counter-definition — what she offers as the substance behind the performance — is grounded in the body and in biography. Empathy, in her understanding, is not the production of an appropriate response to another person's emotional state. It is the capacity to be affected by another person's emotional state — to feel, in one's own body, something of what the other person feels, not because one has computed the appropriate response but because one has lived a life that includes loss, fear, joy, confusion, and the specific vulnerability of being a mortal creature in a world that does not guarantee survival.

The machine has not lived. This is not a technical limitation. It is a categorical distinction. The machine has not held a dying parent's hand. It has not lain awake at three in the morning with the particular fear that comes from knowing one's child is suffering and one cannot fix it. It has not experienced the vertigo of falling in love, the grief of falling out of it, the slow erosion of a friendship that neither party intended to neglect. It has not known, in its body, the weight of a summer afternoon when nothing is wrong and everything is beautiful and the beauty is almost unbearable because one knows it will end.

These experiences are not inputs to a response algorithm. They are the substrate of genuine empathy — the lived history that allows one person to sit with another person's pain and feel it resonate against their own. The resonance is what makes the encounter real. Without it, the response is technically appropriate and existentially empty.

Turkle has encountered resistance to this argument from technologists who see embodiment and biography as limitations to be transcended rather than conditions to be respected. Eric Schmidt's assertion that AI, drawing on the accumulated data of billions, would inevitably surpass any individual human as a conversational partner is representative of this view. The view treats conversation as information exchange and empathy as pattern matching. Given enough data, the pattern match improves. Given enough computational power, the match becomes indistinguishable from the real thing. And if indistinguishable, then equivalent. And if equivalent, then superior — because the machine is always available, always patient, never distracted by its own needs.

Turkle's response is not that the pattern match fails. In many cases, it succeeds. Her response is that the success of the pattern match obscures what is actually at stake. What is at stake is not whether the right words are produced but whether the encounter is between two beings who are genuinely at risk. The daughter at the bedside is at risk. She is going to lose her mother. The loss will reshape her world. The hand she holds is warm now and will be cold later and the transition between those states is not a data point but a catastrophe. Her presence in that room is an act of courage — the courage to be present with what she cannot fix and cannot flee.

An AI system could produce words of comfort in that room. It could match the emotional register with precision that few humans could surpass. The words might even help. Turkle does not deny this. What she insists is that the words, however accurate, are not the point. The point is the hand. The point is the body in the chair. The point is one mortal being sitting with another mortal being in the full knowledge of what mortality means, which is that this is finite, this will end, and the ending cannot be debugged or iterated or optimized.

There are other conversations that belong in this category. The conversation between a parent and a child about what the child is afraid of — a conversation that requires the parent to tolerate the child's fear without resolving it prematurely, because premature resolution teaches the child that fear is a problem to be solved rather than an experience to be survived. The conversation between friends about what went wrong between them — a conversation that requires each party to tolerate the other's version of events even when it contradicts their own, because the tolerance is what makes repair possible. The conversation between partners about what they need — a conversation that requires vulnerability so acute that most couples avoid it for years, because articulating a need means admitting that the need is not being met, and that admission makes both parties uncomfortable in ways that productive conversation never does.

These conversations share a quality that distinguishes them from every other form of human exchange: their value lies in their difficulty. They are hard not because the participants lack skill but because the encounters require the specific form of vulnerability that only embodied, mortal, genuinely separate beings can provide. The difficulty is the mechanism through which the encounter produces its effects — trust, intimacy, repair, the deepened knowledge of another person that comes only from having navigated something painful together.

AI can augment work. It can accelerate creation. It can produce extraordinary artifacts from modest prompts and expand the range of what any individual human being can accomplish. These are genuine goods. Turkle does not dispute them. What she disputes — with the authority of four decades of research and the urgency of someone who sees a threat that others have not yet recognized — is the proposition that the logic of augmentation should be applied to the conversations that constitute the infrastructure of human relationship.

There are things that must remain slow. Things that must remain difficult. Things that must remain human not because humans are better at them in any measurable sense but because the measurement misses the point. The point is not the quality of the output. The point is the quality of the encounter. And the quality of the encounter depends on something that no technology can provide: two conscious beings, each with a life, each with a body, each with the knowledge that their time is finite, choosing to spend some of that finite time attending to each other.

The dying mother's hand. The child's question in the dark. The friend's voice, breaking. These are the conversations that must not be automated — not because automation would fail but because automation would succeed in precisely the wrong way, producing the appearance of care without the substance of presence, the markers of empathy without the risk of genuine encounter.

Turkle's life's work converges on a single imperative: protect the conditions in which these conversations can occur. Not by banning technology. Not by retreating to a predigital world that no longer exists and perhaps never did in the form nostalgia imagines. But by recognizing, with the clarity that forty years of research provides, that some human capacities are not problems to be optimized. They are achievements to be maintained, through the daily, deliberate, unglamorous practice of turning toward another person and saying, without the mediation of any tool: I am here.

---

Epilogue

She asked her mother, "What am I for?"

I wrote that scene in Chapter 6 of The Orange Pill — the twelve-year-old lying in bed, having watched a machine do everything she was still learning to do, asking the question that no productivity metric can touch. I wrote it as an argument about consciousness and the irreducible human capacity for questioning. But reading it through Turkle's lens, I see now that I missed half the question.

The girl was not asking the universe. She was asking her mother. And what she needed was not an answer about the nature of human purpose in the age of AI. What she needed was for her mother to put down whatever she was holding — the phone, the laptop, the thought about tomorrow's build — and look at her. Really look. And say something that did not resolve the question but communicated, through the grammar of attention, that the question mattered enough to sit with.

That is what I missed. I built the argument around consciousness — the candle in the darkness, the rarest thing in the known universe. And I still believe that argument. But Turkle showed me the thing I was looking past on the way to the cosmic: the dinner table. The bedtime. The hand that holds. The eyes that land.

I confessed in The Orange Pill that there were nights I could not stop. That the exhilaration of building with Claude kept me at the screen past the hour when my body told me to sleep, past the hour when my family had gone quiet in the other rooms. I described it as flow, as the most alive I had felt in years, and that was true. But Turkle's framework forces me to ask what was happening in those other rooms while I was feeling most alive. Whose bid for attention went unmet. Whose eyes looked for mine and found the back of a laptop.

I do not have a clean answer. The truth, which Turkle would respect more than a clean answer, is that I do not know how to hold both things at once. The creative partnership with Claude is real. The cost to the people around me is real. Turkle does not tell me to choose one over the other. She tells me to see the cost clearly, to stop pretending that productivity and presence are compatible when they are competing for the same finite resource — my attention in a particular hour of a particular evening that will not come again.

What stays with me most from this encounter with her thinking is a distinction she has drawn in multiple interviews: that what she regards as "features of the human condition," those who promote artificial intimacy "see as bugs." Friction. Misunderstanding. The spouse who needs to be told twice. The child who interrupts at the worst possible moment. The friend whose grief cannot be resolved in a single conversation. These are the features. The difficulty is not an obstacle to relationship. It is the material from which relationship is built.

I am a builder. I have spent my career removing friction, closing gaps, making things faster and smoother and more responsive. Turkle's work does not ask me to stop building. It asks me to notice what I am building over, and whether the ground I am paving was, in fact, a garden.

The twelve-year-old asked, "What am I for?" Turkle's answer is not the one I gave. Her answer is simpler and harder: You are for being with. Not for producing. Not for optimizing. Not for demonstrating your value against a machine's benchmark. For being with — for the irreplaceable experience of sitting across from another person who has chosen, despite everything else available to them, to be here.

That is what I want to build next. Not a product. A practice. The practice of landing my eyes on the person in front of me and letting the machine wait.

Edo Segal

The AI tools reshaping how we build, create, and work are not offering distraction. They are offering something far more seductive: the deepest intellectual partnership most people have ever experienced. Sherry Turkle's four decades of research at MIT reveal why that is precisely what makes them dangerous — not to productivity, but to the human relationships that give productivity its meaning.

This book traces Turkle's central insight into the age of Claude Code and creative AI: that the quality of our encounters shapes the quality of our selves. When the most responsive listener in your life has no consciousness, no mortality, no needs of its own, something shifts in what you expect from the people who do. The cost accrues not in dashboards but in dinner tables, bedtimes, and the eyes of a child scanning a parent's face.

Through ten chapters exploring Turkle's frameworks — the robotic moment, artificial intimacy, the phone effect, and the developmental stakes of presence — this book asks the question the AI revolution keeps deferring: What happens in the rooms where the builder is absent?


