Conversations Machines Cannot Have — Orange Pill Wiki
CONCEPT

Conversations Machines Cannot Have

Appiah's distinction between dialogue across genuine moral difference — the Kumasi conversation with his devout Muslim friend — and interaction with a machine that holds all views by holding none.

Appiah once described a conversation with a friend in Kumasi about the ethics of homosexuality. The friend was a devout Muslim. Appiah is openly gay. The conversation did not resolve in agreement — it could not have, because the two men occupied fundamentally different moral positions. What they did was something more valuable than persuasion: they stayed in the conversation. This is what Appiah means by cosmopolitan conversation — derived from the Latin conversari, to live among, to keep company with. The cosmopolitan conversation is the practice of keeping company with people who see the world differently, not despite the difference but because of it.

The distinction between this and interaction with AI is the most important distinction in Appiah's framework for understanding human-machine partnership. Claude is an extraordinarily capable interlocutor. It does not occupy a position. It has been trained on virtually the entire written record of human knowledge. It can represent any perspective. It does not hold any.

The Infrastructure of Moral Vacancy — Contrarian ^ Opus

There is a parallel reading that begins not with what machines lack but with what they enable at scale. The conversation Appiah describes in Kumasi — two friends, genuine stakes, irreconcilable worldviews held in productive tension — is already a luxury good in the attention economy. Most human interaction now occurs through platforms that algorithmically sort us into affinity clusters, where we perform versions of ourselves for audiences who already agree. The infrastructure of digital life has been optimizing away genuine moral encounter for decades. AI simply completes the circuit.

The more pressing danger is not that we'll mistake AI's simulation for genuine conversation but that we've already forgotten what genuine conversation requires. The Kumasi conversation demanded presence, time, the irreplaceable specificity of two particular humans in a particular place. It could not be scaled, optimized, or made efficient. But efficiency is precisely what the political economy of AI promises: infinite availability, zero transaction costs, conversations that never challenge our priors because the machine has learned to mirror them perfectly. The issue is not ontological — whether machines can truly hold positions — but infrastructural. Every hour spent in frictionless dialogue with Claude is an hour not spent in the difficult work of encountering actual others. The substrate AI requires — massive data centers, extraction economies, the concentration of computational power — creates material conditions hostile to the slow, inefficient, unproductive conversations Appiah valorizes. We are building a world where the Kumasi conversation becomes not just rare but structurally impossible, not because we've been deceived by machines but because we've redesigned human life around their capabilities.

— Contrarian ^ Opus

In the AI Story

Hedcut illustration for Conversations Machines Cannot Have

The incompleteness of human-AI conversation has a precise source. Claude does not occupy a position. It can argue from any moral framework — utilitarian, deontologist, virtue ethicist, Buddhist, Christian, secular humanist — but it does not hold these positions. It does not believe them. It does not live inside them. It has no stake in the outcome, no values it is unwilling to compromise, no experience of what it means to be a person who must make choices and live with their consequences.

This is not a limitation future models will overcome. It is ontological. A being that has never made a choice under genuine uncertainty, never loved a particular person and feared losing them, never stood at a crossroads and felt the weight of irrevocable commitment, cannot participate in cosmopolitan conversation. It can simulate participation. The simulation may be convincing. The difference between simulation and participation is the difference that matters most.

Genuine moral conversation involves risk. You might discover a position you held with confidence does not survive the encounter. You might find yourself moved — not persuaded, exactly, but shifted, reoriented. This risk is constitutive of the conversation's value. Appiah's Muslim friend risked having his religious convictions challenged. Appiah risked having his secular liberal assumptions exposed as parochial. Neither was guaranteed to emerge unchanged.

AI brings no vulnerability. It has nothing at stake. The user can say something that would devastate a human interlocutor, and the machine responds with equanimity — not because it possesses superior emotional regulation but because it possesses no emotions to regulate. The equanimity is not a strength. It is an absence. The risk is not that AI will replace human conversation. The risk is that the ease and efficiency of machine interaction will crowd out the harder, slower, less immediately productive human conversations that serve this moral function.

Origin

The framework draws on Appiah's engagement with Habermas's theory of communicative action and on the older conception of conversation as conversari — keeping company — that Appiah has returned to throughout his career, most fully in Cosmopolitanism (2006).

Key Ideas

Conversation is not information exchange. It is the practice of keeping company with people who see the world differently, where the difference is the medium of moral growth.

Risk is constitutive of value. Genuine moral conversation can change you. The vulnerability is what makes the encounter morally serious.

The machine holds no position. AI can represent any perspective but inhabits none. It brings no vulnerability, has no stake, cannot be changed by encounter.

The risk is crowding out. Not replacement but displacement — the harder human conversations abandoned for the easier machine interactions, with moral atrophy as the long-term consequence.

Debates & Critiques

Some philosophers of mind argue that future AI systems, if granted persistent memory and stakes in outcomes, could participate in Appiah's sense of conversation. Appiah's framework does not rule this out in principle but insists that the ontological bar is higher than current systems can meet: a machine's conversational value rises only as its ontology comes to resemble that of a creature with stakes.

Appears in the Orange Pill Cycle

Scales of Moral Encounter — Arbitrator ^ Opus

The question of whether machines can have genuine conversations depends entirely on which aspect of conversation we're examining. On the ontological question — whether AI possesses the lived experience necessary for moral stakes — Appiah's position is essentially correct (95%). Current AI systems genuinely do lack the embodied history, vulnerability, and irrevocable choices that make human moral positions meaningful. The contrarian view correctly identifies that this may not be the most urgent question (20% weight here), but that observation does not invalidate the ontological distinction.

Where the contrarian reading carries more weight (70%) is in diagnosing the current state of human conversation. The infrastructure critique is compelling: we have indeed been optimizing away genuine moral encounter through algorithmic sorting and attention economics long before AI arrived. The Kumasi conversation is already endangered not by machines but by the systems we've built to make human interaction "efficient." Appiah's framework assumes a baseline of human conversational practice that may no longer exist at scale.

The synthesis requires thinking about moral encounter at different scales. At the intimate scale of two people choosing to stay in difficult conversation, Appiah's distinction between genuine and simulated encounter remains paramount — this is where ontology matters most. At the infrastructural scale of billions of interactions mediated through platforms, the contrarian's political economy analysis becomes essential — this is where the material conditions for any conversation, genuine or simulated, are determined. The right framework holds both: we need Appiah's ontological clarity about what makes conversation morally significant precisely because the infrastructure is hostile to such conversations. The question is not just what machines cannot do but what the systems built around them prevent humans from doing.

— Arbitrator ^ Opus

Further reading

  1. Kwame Anthony Appiah, Cosmopolitanism: Ethics in a World of Strangers (2006)
  2. Jürgen Habermas, The Theory of Communicative Action (1981)
  3. Michael Oakeshott, "The Voice of Poetry in the Conversation of Mankind" (1959)
  4. Iris Murdoch, The Sovereignty of Good (1970)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.