By Edo Segal
The column almost broke the argument I spent a year building.
Appiah writes "The Ethicist" for the *New York Times Magazine*. Every week, strangers send him their moral tangles — the friend who lied, the colleague who cheated, the inheritance that split a family. He reads them. He thinks. He responds from a position earned across seven decades of living between cultures, between continents, between the particular and the universal.
Then researchers fed the same dilemmas to GPT-4. Nine hundred evaluators rated the machine's answers as more moral, more trustworthy, more thoughtful than the philosopher's.
I sat with that result for a long time. If you have read *The Orange Pill*, you know I believe AI is the most powerful amplifier ever built. But an amplifier that produces *better ethical guidance* than one of the world's foremost moral philosophers — that cracks a different part of the fishbowl. Not the part about capability. The part about what capability is worth when it can be reproduced at the cost of a subscription.
Appiah's response, delivered across four decades of work rather than a single rebuttal, is the most sophisticated framework I have found for holding the tension I described in the main book. He refuses both poles. He will not tell you AI is liberation and he will not tell you it is catastrophe. He will tell you that you are a particular person with particular attachments and particular obligations, and that the machine — however fluent, however comprehensive — occupies no position at all.
That word. *Position.* It reorganized something in my thinking. Claude holds the view from everywhere. It has processed more moral philosophy than any human could read in a lifetime. It brings knowledge without weight. I bring weight without comprehensive knowledge. The collaboration works because both are present. But Appiah made me see that the weight is not optional. It is the thing that separates knowing what is right from doing it.
He also made me see something about obligation I had been skirting. The strangers downstream of what we build — the displaced worker, the student whose classroom was reshaped without her consent, the communities bearing costs they did not choose — have claims on us. Not aspirational claims. Structural ones. Appiah's cosmopolitanism says your particular attachments are real and your universal obligations are also real, and the tension between them is not a problem to solve but a condition to navigate.
I needed that lens. The technology discourse is excellent at capability and terrible at obligation. Appiah is the corrective.
— Edo Segal × Opus 4.6
Kwame Anthony Appiah (1954–) is a Ghanaian-British philosopher and public intellectual whose work spans moral philosophy, political theory, and the philosophy of identity. Born in London, raised in Kumasi, Ghana, and educated at Cambridge, he is the Laurance S. Rockefeller University Professor of Philosophy at Princeton University and writes "The Ethicist" column for the *New York Times Magazine*. His major works include *In My Father's House: Africa in the Philosophy of Culture* (1992), *The Ethics of Identity* (2005), *Cosmopolitanism: Ethics in a World of Strangers* (2006), *The Honor Code: How Moral Revolutions Happen* (2010), and *The Lies That Bind: Rethinking Identity* (2018). Appiah's central philosophical contribution is rooted cosmopolitanism — the insistence that particular attachments and universal moral obligations coexist as irreducible features of ethical life. His work on how moral revolutions occur through shifts in honor codes rather than rational argument alone has influenced fields from political philosophy to human rights advocacy. He has received the National Humanities Medal and was named one of *Foreign Policy*'s Top 100 Global Thinkers.
In the autumn of 2025, Kwame Anthony Appiah published an essay in *The Atlantic* that began with a quiet observation and ended with a challenge no reader could comfortably dismiss. The observation: artificial intelligence had already moved from the miraculous to the taken-for-granted, the way Google once did, and the anxiety surrounding it had shifted accordingly — "from apocalypse to atrophy." The challenge: "If there's one skill we can't afford to lose, it's the skill of knowing which of them matter."
Between the observation and the challenge lay the architecture of a philosophical mind that has spent four decades refusing the two simplest responses to any question about human identity. The first response says the individual is everything — a sovereign atom, self-determining, owing nothing essential to the communities that shaped it. The second says the community is everything — that the person is a product of social forces, defined by the categories of race, nation, culture, and profession that locate them in the collective body. Appiah rejects both. Not by splitting the difference, which would produce a tepid centrism unworthy of serious attention, but by insisting that each captures something real and that the tension between them is not a defect in the analysis but the condition of being human.
This double insistence — individual dignity and social embeddedness, particular attachment and universal concern — is what Appiah calls cosmopolitanism. The word carries baggage. It conjures airport lounges and frequent-flyer programs, the frictionless global citizen who belongs everywhere and therefore nowhere. Appiah's cosmopolitanism is something different entirely. It is rooted. It begins in particular attachments — to a family in Kumasi, a college in Cambridge, a neighborhood in Manhattan — and extends outward toward a moral concern that does not erase the particular but insists that the particular exists within a larger frame. The cosmopolitan loves her own city without believing her own city is the only city worth loving. She honors her own traditions without mistaking them for the only traditions that contain truth.
The philosophical architecture rests on several load-bearing pillars constructed across decades of work. In *Cosmopolitanism: Ethics in a World of Strangers* (2006), Appiah argues that human beings have obligations to people they have never met and will never know — obligations that coexist with, rather than replace, their particular attachments to family, friends, and community. In *The Ethics of Identity* (2005), he makes the case that identity is not a fixed inheritance but an ongoing project of self-creation, shaped by social categories but never fully determined by them. In *The Lies That Bind: Rethinking Identity* (2018), he demonstrates that every identity category — race, religion, nationality, class, culture — is more complex, more contested, and more fluid than the politics of identity typically acknowledges. And in *The Honor Code: How Moral Revolutions Happen* (2010), he advances the striking insight that moral transformations occur not primarily through rational argument but through shifts in the social codes of honor that govern behavior — when a practice becomes shameful rather than merely wrong, the practice ends.
Each of these pillars bears weight in the age of artificial intelligence, and each will be examined in the chapters that follow. But the foundation they rest on is the both/and — the refusal to choose between the individual and the collective, between rootedness and openness, between the Romantic myth and the relational view.
Appiah's biography is itself a cosmopolitan argument. Born in London in 1954, raised in Kumasi, Ghana, the son of a prominent Ashanti politician and a British writer, educated at Cambridge, and now the Laurance S. Rockefeller University Professor of Philosophy at Princeton, Appiah has lived the tension his philosophy describes. His father, Joe Appiah, was a pan-Africanist lawyer whose political commitments were fiercely particular — rooted in Ghanaian independence, in Ashanti identity, in the specific soil of a specific nation. His mother, Peggy Cripps, was the daughter of Sir Stafford Cripps, the British Labour politician, a woman whose intellectual horizons were as wide as the empire her father helped dismantle. Between them, young Kwame absorbed two truths that most people experience as contradictions: that particular identity matters profoundly, and that no particular identity exhausts the moral universe.
This biographical rootedness saves Appiah's cosmopolitanism from the abstraction that typically afflicts universalist ethics. When Appiah argues that we have obligations to strangers, he is not speaking from the position of a philosopher who has never been a stranger. He has been the Ghanaian in Cambridge, the philosopher in Kumasi, the African in America — always located, always particular, always aware that his particular location is one among many. The cosmopolitanism that emerges from this biography is not the cosmopolitanism of detachment. It is the cosmopolitanism of a person who has learned, through experience rather than theory alone, that you can be fully committed to your own traditions while recognizing that other traditions contain truths your own does not.
The relevance of this framework to the central argument of *The Orange Pill* is structural, not decorative. Segal's book identifies a tension that runs through every chapter: between the Romantic myth of individual genius — the idea that creativity originates in a single extraordinary mind — and the relational view that locates intelligence in the connections between minds. Dylan is both a unique node and a confluence of cultural tributaries. The engineer in Trivandrum is both an individual with irreplaceable judgment and a participant in a network whose value depends on collaboration. The parent asking "What do I tell my kids?" is both a particular person with particular obligations and a member of a species confronting a universal challenge.
Appiah's framework does not resolve this tension. It explains why the tension cannot and should not be resolved. The individual node is real — possessed of a dignity, a specificity, a perspective that no network can replicate. And the network is real — the pattern of connections through which individual specificity becomes productive, through which the node's potential is realized in actual creative and moral acts. To choose the node over the network is atomism, the philosophical position that produces libertarian indifference to collective consequence. To choose the network over the node is communitarianism, the position that produces conformity dressed as solidarity. The cosmopolitan holds both, and the holding is the work.
Appiah's *Atlantic* essay demonstrates this holding with particular clarity when applied to artificial intelligence. He observes that the fear of AI-driven de-skilling is real — that teachers "especially are beginning to see the rot" as students outsource thinking to machines. But he immediately complicates his own observation. Not all de-skilling is the same. Some skills that atrophy were never worth preserving; nobody mourns the decline of rote memorization now that writing exists. Other skills that atrophy are genuinely precious — the capacity for sustained attention, the ability to construct an argument through the friction of drafting and revision, the habit of sitting with uncertainty long enough for genuine insight to emerge. The ethical task, Appiah argues, is not to resist de-skilling wholesale but to discriminate among its forms — to ask which skills anchor our humanity and which were always instrumental, always means rather than ends.
This discrimination requires the cosmopolitan both/and. The techno-optimist sees de-skilling as progress: old skills are replaced by new capabilities, the arc bends toward expansion. The techno-pessimist sees de-skilling as catastrophe: the essential capacities erode, the arc bends toward atrophy. Appiah sees both and refuses to let either dominate. "Stewardship now means ensuring that the capacities in which our humanity resides — judgment, imagination, understanding — stay alive in us," he writes. The sentence performs the cosmopolitan operation: it accepts the reality of AI integration (stewardship, not rejection) while insisting on the preservation of what matters most (judgment, imagination, understanding — capacities that are individual even as they are exercised in social contexts).
The parallel between Appiah's philosophical position and Segal's "silent middle" — the vast group of people who feel both the exhilaration and the loss of the AI transition but lack a framework for holding both — is not coincidental. Appiah has spent his career providing philosophical tools for people who refuse to choose between positions that the culture presents as mutually exclusive. You can be African and cosmopolitan. You can honor your particular traditions and recognize the value of others. You can celebrate individual creativity and acknowledge that it arises from connection. You can use AI and worry about what it costs.
The both/and is not a compromise. Compromise implies that each side surrenders something in order to reach an agreement neither fully endorses. The cosmopolitan both/and is something more demanding: the insistence that both truths are fully true, that the tension between them is productive rather than pathological, and that the ethical work of navigating that tension is never finished.
This has practical implications that extend well beyond philosophy departments. Consider the question that animates much of *The Orange Pill*: When AI amplifies everything, what becomes of who we are? Appiah's framework suggests that the question itself contains the answer, or at least the direction in which the answer lies. "Who we are" is neither a fixed possession (the atomist's claim) nor a social construction that can be dissolved and reconstituted at will (the communitarian's claim). It is an ongoing project — shaped by particular attachments, expressed through connection, tested by encounter with difference, and revised in response to changing circumstances. AI is the most powerful changing circumstance in a generation. The project of identity continues. The materials have shifted. The work has not.
What makes Appiah's framework uniquely valuable in the current moment is its capacity to hold complexity without collapsing into either celebration or mourning. The technology cycle produces, with mechanical regularity, two kinds of public intellectual: the prophet who announces the future with the confidence of someone who has already seen it, and the elegist who mourns the past with the tenderness of someone who has already lost it. Appiah is neither. He is a navigator — a thinker whose fundamental question is not "Where are we going?" or "What have we lost?" but "How do we hold what we have and what we are gaining in a relationship that serves human flourishing?"
That question has no final answer. It has only better and worse navigation. The chapters that follow will examine the specific navigational challenges that AI poses to Appiah's cosmopolitan framework: the challenge to individual identity when capability is commoditized, the challenge to cultural diversity when all creation passes through the same tool, the challenge to moral obligation when the costs of progress fall on strangers, and the challenge to genuine conversation when the most available interlocutor is a machine that brings knowledge without position. Each challenge is real. None is fatal. The navigation continues.
Appiah closes his *Atlantic* essay with a formulation that serves as the compass for what follows: AI is "simply the latest chapter in our long apprenticeship to our own inventions." The word *apprenticeship* is precise. An apprentice learns from the master, but the goal of apprenticeship is not submission — it is mastery. The apprentice who merely copies the master has failed. The apprentice who learns what the master knows and then exceeds it, applying the knowledge to problems the master never faced, has succeeded. Humanity is apprenticed to its own tools. The question is whether this particular apprenticeship will produce mastery or dependence — whether the apprentice will learn what the tool knows and exceed it, or whether the tool's fluency will become a substitute for the apprentice's own capacity.
The cosmopolitan answer is that it depends. It depends on the quality of the attention brought to the apprenticeship. It depends on the structures — educational, institutional, cultural — that surround the learning. It depends on whether the apprentice remembers that the goal is not to use the tool well but to become, through using it, a more capable, more discerning, more fully human person.
Both outcomes are possible. The navigation between them is the work of a generation. Appiah's cosmopolitanism provides the philosophical architecture for that navigation — not a map, because the territory is unmapped, but a compass, because the direction is clear even when the path is not.
The direction is toward a world in which individual dignity and social connection are both preserved. In which particular attachments coexist with universal concern. In which the AI amplifier serves not merely efficiency but flourishing — not merely the flourishing of the builder at the terminal but the flourishing of the strangers downstream.
That direction has a name. It is cosmopolitanism. And it begins, as all genuine navigation does, with the recognition that you do not yet know where you are.
A Wharton School research team, led by Christian Terwiesch, conducted an experiment in 2023 that should have disturbed any philosopher who makes a living giving ethical advice. They took moral dilemmas of the kind that Kwame Anthony Appiah addresses weekly in his "The Ethicist" column for *The New York Times Magazine* — questions about when to tell a friend an uncomfortable truth, whether to report a colleague's dishonesty, how to balance loyalty and honesty in a family dispute — and presented them to GPT-4. Then they showed both sets of responses, Appiah's and the machine's, to hundreds of evaluators without identifying the source. The evaluators found no significant difference in quality. In a subsequent study by researchers at UNC Chapel Hill and the Allen Institute for AI, GPT-4o's ethical advice was rated by nine hundred evaluators as "more moral, trustworthy, thoughtful and correct" than Appiah's own.
The implications are unsettling in a way that reaches beyond the particular philosopher involved. If a machine trained on the statistical patterns of human moral reasoning can produce ethical guidance indistinguishable from — or preferred to — the guidance of one of the world's most accomplished moral philosophers, then what, exactly, does the philosopher possess that the machine does not?
Appiah has not publicly commented on these studies. But his entire body of work constitutes a response, one that was already formulated decades before the experiments were conducted. The response begins with a distinction so fundamental that missing it makes the experimental results unintelligible: the distinction between the output and the position from which the output is produced.
GPT-4 produced ethical advice that evaluators preferred. This is a fact about the output. It tells us something about the pattern-matching capabilities of large language models and something about the evaluative criteria of the human judges. It tells us nothing about whether the machine occupies a position from which ethical advice can be genuinely given — whether it has what Appiah, in his *Afropean* interview, called the "practical wisdom, sensitivity to how things are for people, humanity" that he considers essential to ethical counsel.
This is not a semantic distinction. It is the distinction between a parrot that can pronounce the word "fire" and a person who smells smoke. The acoustic output may be identical. The relationship between the speaker and the utterance is categorically different. The parrot does not know what fire is. The person knows, because the person has been burned, or has watched something burn, or has stood in the cold and felt the relief of warmth, or has lost a house, or has built one. The knowledge is embodied. It is biographically specific. It arises from the particular history of a particular being in a particular world.
Appiah's philosophical framework provides the most rigorous available account of why this distinction matters. In *The Ethics of Identity*, he argues that the individual possesses a value that is not conferred by social arrangement and cannot be revoked by it. This is the moral foundation of human rights: the recognition that each person has inherent dignity — a specificity, an irreplaceability, a perspective that no other person and no combination of persons can replicate. The dignity does not reside in what the person can do. It resides in what the person is — a being with a particular history, particular attachments, particular stakes in the world.
The AI experiment reveals, with uncomfortable precision, the difference between capability and identity. GPT-4 can do what Appiah does, at least as measured by evaluator preference. But GPT-4 cannot be what Appiah is. It cannot occupy the position from which his advice is given — the position of a Ghanaian-British philosopher who has lived on three continents, navigated racial identity in multiple cultural contexts, lost a parent, raised a family, and accumulated the specific, unrepeatable experience of being Kwame Anthony Appiah in the world for seven decades.
The Wharton researchers themselves acknowledged this, in their way. They stated they "did not design this study to put Dr. Appiah out of work." But the reassurance, well-intentioned as it was, missed the deeper point. The question is not whether Appiah will keep his column. The question is whether a civilization that can generate ethical advice computationally will continue to value the kind of ethical thinking that arises from lived experience — whether the irreplaceability of the individual thinker will retain its moral and cultural weight when the market discovers that a machine can produce equivalent output at negligible cost.
Cognitive scientist Gary Marcus, responding to the studies, articulated the objection precisely: "It seems to me wrongheaded to assume that the average judgment of crowd workers casually evaluating a situation is somehow more reliable than Appiah's judgment." Marcus's point is not that the crowd workers were wrong. It is that their evaluative framework — rating responses on scales of moral quality, trustworthiness, and thoughtfulness — may not capture the dimensions along which Appiah's advice is genuinely superior. The crowd workers assessed the product. They could not assess the process by which the product was generated, or the relationship between the advisor and the world the advice addresses, or the cumulative moral intelligence that decades of engagement with real human dilemmas have deposited in a specific philosopher's judgment.
This maps onto a broader crisis that Appiah's framework illuminates with particular force. The age of AI is an age in which the outputs of individual intelligence are being reproduced at scale by machines that do not possess individual intelligence. The code that took a senior engineer years of practice to write can be generated in minutes. The legal brief that required a lawyer's deep familiarity with case law can be drafted by a system that has processed more case law than any lawyer could read in a lifetime. The ethical advice that reflects decades of philosophical training and lived experience can be pattern-matched by a model that has ingested the corpus of moral philosophy.
In every case, the output is competitive. In every case, the output lacks the thing that made the individual's production of it meaningful — the particular position, the specific biography, the accumulated wisdom that cannot be separated from the life that produced it.
Appiah's response to this crisis is not to deny the capability of the machines. He is too intellectually honest for that, and his *Atlantic* essay demonstrates a sophisticated understanding of what AI can and cannot do. His response is to insist that the value of the individual does not depend on the individual's monopoly over a particular kind of output. The senior engineer's value does not evaporate when AI can write comparable code. The lawyer's value does not dissolve when AI can draft comparable briefs. The philosopher's value does not disappear when AI can produce comparable ethical advice.
What changes is the locus of value. It migrates from the output to the judgment that directed the output — from what the individual can produce to what the individual can discern, evaluate, and choose. This is what Appiah means when he writes that "the issue isn't how humans compare to bots but how humans who use bots compare to those who don't." The comparison that matters is not between the human and the machine. It is between the human who exercises judgment in directing the machine and the human who accepts the machine's output without interrogation. The former retains the individual's essential function: the capacity to evaluate, to discriminate, to bring a particular perspective to bear on a question that admits multiple answers.
This redefinition of individual value has deep roots in Appiah's philosophy. In *The Lies That Bind*, he demonstrates that the identity categories through which people understand themselves — racial identity, national identity, professional identity — are more fluid and contested than they appear. The "lies that bind" are the narratives that present these categories as fixed, natural, and exhaustive. In reality, every identity category is a simplification of a more complex truth. The person categorized as "Black" or "American" or "software engineer" is always more than the category suggests — possessed of characteristics, experiences, and perspectives that overflow the label.
AI intensifies this insight by destabilizing the professional identity categories that have served, for generations, as primary markers of individual value. When the software engineer's code can be generated by a machine, "software engineer" ceases to function as a reliable proxy for a particular kind of cognitive capability. The category does not become meaningless — the engineer still possesses knowledge, judgment, and experience that the machine does not share. But the category becomes less informative, less reliable as a marker of what the person can contribute, less central to the story the person tells about who she is.
Appiah's framework suggests that this destabilization, painful as it is, contains a potential liberation. If professional identity was always a simplification — if "software engineer" never captured the full reality of the person it described — then the erosion of professional identity as a primary marker might force a more honest reckoning with the question of what individual value actually consists of. Not "I am valuable because I can write code that others cannot." Rather: "I am valuable because I occupy a specific position in the world, possess a specific history of experiences and judgments, and bring a perspective that no other being — human or machine — can replicate."
This is not consolation. It is a more accurate description of what was always true. The engineer's value was never really in the code. It was in the judgment about what code to write, for whom, and why. The code was the visible output; the judgment was the invisible input. AI has made the invisible visible by commoditizing the output and exposing the input as the thing that actually matters.
But here Appiah's framework demands an uncomfortable qualification. The recognition that individual value resides in judgment rather than output is true and important. It is also, for many people, practically useless — at least in the short term. The market pays for output. Salaries are denominated in code written, briefs filed, reports generated. The migration of value from output to judgment is a philosophical truth that has not yet become an economic reality. The engineer whose code can be generated by AI may understand, philosophically, that her judgment is the thing of lasting value. She may also understand, practically, that her employer is calculating how many engineers a single AI-augmented team lead can replace.
Appiah does not flinch from this. His work on identity has always acknowledged that the stories people tell about themselves are constrained by material circumstances. The person who loses a job does not merely lose income. She loses a narrative — a way of understanding herself, locating herself in the world, giving her days structure and her life meaning. The de-skilling that Appiah describes in his *Atlantic* essay is not merely a cognitive phenomenon. It is an identity phenomenon. When the skills that constituted your professional self are performed by a machine, the self must be reconstructed — and reconstruction takes time, support, and the kind of institutional scaffolding that is conspicuously absent in the current transition.
Appiah's insistence on the reality of the individual node — its irreducibility, its inherent dignity, its possession of a perspective no machine can share — provides a moral foundation. It says: the person whose professional identity is being destabilized does not become less valuable. Her value has not diminished. The market's recognition of her value has shifted, and the market is not the final arbiter of human worth.
That foundation matters. It matters because the alternative — the view that individual value is identical with economic productivity — leads to the conclusion that people whose productive capacity is matched by machines have diminished worth. Appiah's framework blocks that conclusion with the full force of the cosmopolitan moral tradition. The person is not reducible to her output. The node is real. Its dignity is not contingent on its market position.
But the foundation, by itself, is not sufficient. The individual exists in a network, and the network determines whether the individual's inherent value is actualized in the world or remains merely theoretical. The next chapter turns to the network — to the social embeddedness without which individual potential is an abstraction, a capability with no expression, a flame with no air.
The most significant detail in Segal's account of building Napster Station in thirty days is not the speed. It is the team.
Twenty engineers in Trivandrum. A design team with years of shared context. An executive who had built trust with his colleagues through the specific intimacy of having navigated chaos together. The AI tool — Claude Code — accelerated the work dramatically, but it accelerated something that already existed: a network of human relationships within which individual capability could be directed, evaluated, combined, and deployed. Without the network, the tool would have produced code. With the network, it produced a product.
This distinction is invisible in most accounts of AI-augmented productivity, which tend to focus on the individual builder and the individual tool. The "solo builder" narrative — one person, one AI, one weekend, one shipped product — dominates the discourse precisely because it flatters the Romantic myth of individual genius. It tells a story the culture already wants to believe: that the right person, given the right tool, can do anything.
Appiah's cosmopolitan framework insists on what this narrative omits. The individual is real, as the previous chapter argued. But the individual's value is realized only in connection — through the communities, institutions, and cultural frameworks within which individual capability becomes meaningful. This is not a concession to communitarianism. It is a recognition that human beings are, as Joseph Henrich has demonstrated in *The Secret of Our Success*, fundamentally cultural creatures whose intelligence is distributed across social networks rather than contained within individual skulls.
Henrich's research provides the empirical backbone for Appiah's philosophical claim. Humans are not the smartest animals because individual humans are extraordinarily intelligent. Individual humans, stripped of cultural context, are remarkably fragile — less capable than many other species of surviving in unfamiliar environments, less adept at solving novel physical problems, less equipped with instinctive knowledge about food, shelter, and danger. What makes humans extraordinary is their capacity for cumulative cultural learning — the ability to absorb, store, and transmit knowledge across generations through social networks. The intelligence lives in the network. The individual node contributes to and draws from the network, but the network is the primary repository of capability.
This insight, which Henrich develops through evolutionary biology and Appiah develops through moral philosophy, has direct implications for the AI transition. When AI augments individual capability, it augments a node. When AI integrates into a human network — a team, an organization, a community — it augments the network itself. The difference between these two forms of augmentation is the difference between giving a single musician a better instrument and improving the acoustics of the concert hall. The first makes one performer more capable. The second makes every performer more capable, and it makes the interactions between performers — the listening, the responding, the adjusting — more productive.
Segal's Trivandrum experience was a concert hall transformation, not merely an instrument upgrade. Each engineer became individually more capable, but the transformation that mattered was in the interactions between them. A backend engineer started building interfaces. A designer started writing features. The boundaries between roles — boundaries that had been artifacts of the translation cost between domains, not reflections of genuine cognitive limitation — dissolved when the cost of crossing them approached zero. What emerged was not twenty individual performers playing better but a network whose connectivity had dramatically increased.
Appiah's cosmopolitanism provides the ethical framework for evaluating this transformation. The increase in connectivity is, from a cosmopolitan perspective, a genuine good. When more people can contribute across more domains, the network becomes richer, more resilient, more capable of producing the novel combinations that drive creative and economic progress. But connectivity without diversity is not cosmopolitanism. It is conformity at scale. And the question Appiah's framework raises — a question the triumphalist narrative consistently fails to ask — is whether the AI-augmented network preserves the diversity of perspective that makes connectivity valuable, or whether it homogenizes the contributions by channeling all participants through the same tool, the same training data, the same characteristic modes of expression.
This is not a hypothetical concern. The aesthetics of AI-assisted output are already recognizable — a characteristic fluency, a certain polish, a smoothness that Segal, following Byung-Chul Han, identifies as potentially corrosive. When every engineer's code passes through the same model, when every designer's interface reflects the same training data, when every writer's prose bears the same invisible watermark of machine-assisted production, the network's outputs converge. The individual nodes may be more productive, but if their contributions become less distinctive, the network's creative capacity may actually diminish even as its productive capacity increases.
Appiah's defense of cultural contamination — his argument, developed most fully in *Cosmopolitanism*, that cultures grow richer through mixing rather than through isolation — provides the framework for understanding both the opportunity and the risk. Contamination, in Appiah's use of the term, is productive precisely because it involves the encounter between genuinely different perspectives. When a West African musical tradition meets a European harmonic framework, the result — jazz, blues, rock — is more than either tradition could have produced alone. But the encounter is productive because the traditions are different. If both traditions had already converged toward a single style, the encounter would produce nothing new. Contamination requires difference. Without it, there is nothing to contaminate.
AI is the most powerful engine of cultural contamination in human history. It has absorbed the entire written record of human knowledge — every tradition, every style, every perspective that has been committed to text. It can combine any element with any other. The synthetic potential is staggering. But the risk is that the engine runs in one direction: toward convergence rather than diversification. If every creator uses the same tool, and the tool produces outputs with a characteristic sameness, the diversity of the inputs decreases over time. The model trains on its own outputs. The outputs become the inputs for the next generation of outputs. The feedback loop tightens. The creative ecosystem loses biodiversity.
Appiah's cosmopolitanism suggests that the response to this risk is not to reject AI but to insist on the preservation of what Appiah calls "the variety of human life." This preservation requires institutional effort. It requires educational systems that cultivate distinctive perspectives rather than rewarding convergence toward a single standard. It requires organizational cultures that value cognitive diversity — not as a corporate buzzword but as a genuine epistemic resource, a recognition that a team of people who think differently will produce better outcomes than a team of people who think the same way, even if the latter is more efficient in the short term.
It also requires something more subtle and more difficult: a cultural commitment to the value of the particular. The cosmopolitan does not value diversity as an abstraction. She values the specific traditions, perspectives, and modes of expression that constitute diversity — the particular way a Ghanaian proverb encodes moral wisdom, the particular way a Japanese garden embodies an aesthetic philosophy, the particular way a jazz musician's phrasing reflects decades of embodied practice. These particulars are not interchangeable. They cannot be replaced by a statistical average of all proverbs, all gardens, all phrasings. Their value is precisely in their specificity — in the fact that they represent one way of being human, among many, and that the loss of any one of them diminishes the whole.
Michael Tomasello's research on the collaborative foundations of human cognition reinforces this point from a different direction. In *A Natural History of Human Thinking*, Tomasello argues that what distinguishes human thinking from the cognition of other great apes is not individual intelligence but shared intentionality — the capacity to engage in collaborative activities with shared goals, shared attention, and complementary roles. Human thinking is, at its evolutionary root, collective thinking. The individual mind was shaped by natural selection not primarily for solo problem-solving but for participation in collaborative enterprises that no individual could accomplish alone.
This evolutionary insight gives Appiah's cosmopolitan ethic a naturalistic foundation. The network is not merely an ethical ideal. It is the environment in which human intelligence evolved and in which it continues to function most effectively. When AI augments the network, it is augmenting the natural habitat of human thought. When AI diminishes the network — by replacing human collaboration with human-machine interaction, by reducing the diversity of perspectives in the collaborative process, by making solo production so efficient that collaboration seems unnecessary — it is degrading the habitat.
The Berkeley study that Segal discusses in *The Orange Pill* documented this degradation in real time. Workers who adopted AI tools expanded their individual scope but reduced their collaborative engagement. Delegation decreased. Each person did more, but they did it more alone. The network's links weakened even as the nodes became more productive. This is the pattern that Appiah's framework identifies as dangerous: an increase in individual capability accompanied by a decrease in social embeddedness. The node grows stronger. The network grows thinner. And because the intelligence lives in the network rather than in any single node, the overall system may become less capable even as each component becomes more productive.
The implication for organizations is concrete. The team that adopts AI tools and uses the resulting productivity gains to reduce headcount is making a bet — a bet that the intelligence resided in the nodes rather than in the connections between them. If the bet is wrong, if the intelligence was in the network all along, then the leaner team is not merely smaller but dumber. It has lost something that cannot be recovered by giving the remaining nodes better tools.
Segal describes choosing to keep his team intact and even to hire more people, redirecting the productivity gains toward more ambitious work rather than toward headcount reduction. Appiah's framework explains why this choice matters morally as well as strategically. The network is not merely an instrument for producing outputs. It is the social context within which individual identity is formed, tested, and revised. The engineer who collaborates with colleagues is not merely producing code. She is participating in a community of practice that shapes her understanding of what good work looks like, what professional responsibility requires, and who she is as a practitioner. Remove the community and you remove not just the collaboration but the identity formation that the collaboration sustained.
Appiah would add a dimension that organizational theory typically misses: the network's value is not merely instrumental. It is constitutive. The relationships between people in a working team are not merely means to an end. They are, for the people involved, part of what makes their lives meaningful. The colleague who challenges your thinking. The mentor who sees potential you cannot yet see in yourself. The junior team member whose fresh perspective reveals an assumption you had stopped examining. These relationships constitute a form of human flourishing that cannot be measured in productivity metrics and cannot be replaced by a machine, however sophisticated, that lacks the capacity for genuine relationship.
The network, in Appiah's cosmopolitan vision, is not merely useful. It is valuable in itself — as a site of human connection, mutual recognition, and the ongoing negotiation of identity that constitutes a life. AI can augment this network. It can also degrade it. The difference depends on whether the humans in the network preserve the relationships that constitute its value, or whether they allow the efficiency of machine interaction to crowd out the slower, harder, more uncertain work of engaging with other human beings who see the world differently.
That engagement — the encounter with genuine difference — is the subject to which Appiah's cosmopolitanism returns again and again. The node is real. The network is real. But the quality of the network depends on something more specific than mere connectivity. It depends on the diversity of the nodes and the quality of the conversations between them. It depends, that is, on what happens when genuinely different perspectives collide — when the collision produces not agreement but understanding, not convergence but the richer, more textured knowledge that emerges from the sustained negotiation of difference.
AI enters this ecology as a participant of unprecedented power and unprecedented limitation. What it can and cannot contribute to the negotiation of difference — and what its presence does to the conversations between human beings who are trying, imperfectly and with great effort, to understand one another — is the question that determines whether the AI-augmented network will be a cosmopolitan achievement or a cosmopolitan catastrophe.
In the spring of 2026, a twelve-year-old girl asks her mother: "Mom, what am I for?" The question, as Segal recounts it in *The Orange Pill*, arrives after the child has watched a machine do her homework better than she can, compose a song more fluently than she can, write a story more polished than she can. She is lying in bed, in the dark, in that particular kind of distress that only a child can feel — the distress of discovering that the world does not work the way she thought it did, and that the future she had been assembling in her mind, piece by piece, out of the materials her parents and teachers provided, may not hold.
The question is not vocational. She is not asking what job she should pursue. She is asking something more fundamental: In a world where the things she can do are also things a machine can do, what constitutes her? What is the residue, if any, that remains when capability is subtracted from identity?
Appiah's philosophical career has been, in significant part, an extended answer to this kind of question. Not this specific question — he could not have anticipated it in its present form — but the general form: What constitutes a person when the categories through which they understood themselves are destabilized? What remains of identity when the narrative through which a life made sense is disrupted by forces beyond the individual's control?
In *The Ethics of Identity*, Appiah develops a sophisticated account of how individual identity is constructed and maintained. Identity, in his analysis, is not a possession. It is a project — an ongoing, never-finished work of self-creation that draws on socially available materials (the categories of race, gender, nationality, profession, class) but is never fully determined by them. The individual does not receive an identity from society the way she receives a Social Security number. She constructs one, using the materials available, and the construction is constrained but not dictated by the materials. The person born into a working-class family in Lagos and the person born into an academic family in Cambridge both draw on the category "student" when they enter a university, but the identity each constructs around that category is shaped by the specific biography, the specific aspirations, the specific resistances and inheritances that each brings to the project.
This account of identity as a project has a crucial implication for the AI moment: projects can be disrupted. When the materials from which identity was constructed change — when the professional categories shift, when the skills that constituted mastery become commodities, when the narratives of career progression that gave working life its arc no longer apply — the project of identity must be reconstructed. And reconstruction is not automatic. It requires time, support, and what Appiah, drawing on the existentialist tradition, calls "the social scaffolding of self-creation" — the institutions, communities, and relationships within which identity work takes place.
The senior engineer in Trivandrum whom Segal describes — the one who spent two days oscillating between excitement and terror before arriving at the recognition that the twenty percent of his work that AI could not do was "everything" — underwent an identity reconstruction in compressed time. His professional identity had been built around implementation: the years of practice, the accumulated knowledge of syntax and architecture, the embodied intuition that let him feel a codebase the way a doctor feels a pulse. When AI took over the implementation, the identity built around it had to be rebuilt. The rebuild was successful — he discovered that what remained, the judgment layer, was the most valuable part of his capability. But the success was not inevitable. It required a particular kind of resilience, a particular willingness to revise the story he told about himself, and a context — a supportive team, a leader who valued the transition — that not every displaced professional will have.
Appiah's *The Lies That Bind* provides the analytical framework for understanding why this reconstruction is so difficult. The "lies" in the title are not malicious deceptions. They are the simplifications that identity categories impose on complex realities. When a person identifies as a "software engineer," the category organizes a vast array of experiences, skills, and dispositions into a coherent narrative. The narrative says: I am the kind of person who solves technical problems. My value lies in my ability to write code that works. My career is a progression from junior to senior to principal, measured by the increasing complexity of the problems I can solve.
This narrative is not false. It captures something real about the person's capabilities and commitments. But it is a simplification — a lie that binds, in Appiah's phrase, because it ties the person's sense of self to a specific set of capabilities that are now being performed by machines. When the binding is tight — when professional identity is the primary identity, when the person has invested decades in the narrative, when the social community that validates the identity is itself defined by the profession — the disruption is not merely professional. It is existential.
Appiah's work predicts that the people most vulnerable to AI-driven identity disruption will not be the least skilled. They will be the most invested — the people for whom professional identity constitutes the deepest layer of self-understanding. The senior engineer, not the junior one. The experienced lawyer, not the recent graduate. The veteran teacher, not the first-year educator. These are the people whose identities are most tightly bound to specific capabilities, and for whom the unbinding is most wrenching.
This prediction aligns with what Segal observes in *The Orange Pill*: the most experienced professionals are often the most resistant to AI adoption, not because they are technologically illiterate but because adoption requires identity revision. The framework knitters of Nottinghamshire were not afraid of machinery in the abstract. They were afraid of becoming people whose defining skill no longer defined them. The contemporary parallel is exact. The senior developer who has spent twenty years building expertise in a specific technical domain is not afraid of Claude Code in the abstract. She is afraid of the identity question that Claude Code forces: If the machine can do what I do, then who am I?
Appiah's answer to this question is both philosophically rigorous and practically demanding. Identity, he argues, should never have been constituted primarily by capability. The "lies that bind" include the lie that you are what you can do — that your worth as a person is identical with your productivity, that your identity is exhausted by your professional category. This lie predates AI by centuries. It is the inheritance of an industrial culture that valued human beings as labor inputs and measured their worth by their economic output. AI has not created the problem. It has exposed it, by demonstrating that the capabilities on which professional identity was built are not uniquely human and therefore cannot serve as the foundation of human identity.
The exposure is painful but, in Appiah's framework, potentially liberating. If the lie is exposed — if it becomes clear that "I am what I can do" was always an insufficient account of human identity — then the reconstruction that follows might produce a richer, more accurate self-understanding. The engineer might discover, as Segal's senior engineer did, that her real value was never in the code but in the judgment about what code to write. The lawyer might discover that her value was never in the brief but in the wisdom about what legal strategy serves her client's genuine interests. The teacher might discover that her value was never in the delivery of information but in the capacity to see each student as a particular person with particular needs, to model intellectual curiosity, to create the conditions under which genuine learning can occur.
Each of these discoveries is a form of identity reconstruction. Each involves letting go of a narrative that was partly true and partly a simplification, and building a new narrative that is more accurate — that locates value in the dimensions of human capability that AI does not share. The reconstruction is not easy. It requires precisely the kind of support that institutions are currently failing to provide: retraining that addresses not just skills but self-understanding, communities of practice that validate the new identity rather than mourning the old one, cultural narratives that make the transition comprehensible and honorable rather than shameful.
Appiah's *The Honor Code* provides the mechanism for understanding why cultural narratives matter so much in this context. Moral revolutions, Appiah demonstrates, do not occur primarily through rational argument. They occur through shifts in what a society considers honorable. The practice of dueling did not end because someone proved that dueling was irrational. It ended because dueling became ridiculous — because the social meaning of the practice shifted from "honorable defense of reputation" to "foolish endangerment of life." The practice of footbinding did not end because someone proved that footbinding was harmful. It ended because the social meaning shifted from "marker of refined femininity" to "symbol of national backwardness."
Applied to the AI transition, Appiah's honor-code framework suggests that the most important cultural work is not convincing people that AI is useful (most already know this) or that AI is dangerous (most already suspect this). The most important work is reshaping the cultural narrative around professional identity so that the transition from capability-based identity to judgment-based identity is experienced as an ascent rather than a decline — as a promotion rather than a demotion.
Segal's language in *The Orange Pill* performs exactly this reframing: "AI has offered you a promotion." The promotion metaphor recodes the transition. Instead of: "You have lost the skills that made you valuable," it says: "You have been freed to operate at the level where your value was always greatest." Instead of: "The machine can do what you do," it says: "The machine can do the part of what you do that was never the most important part."
Whether this reframing succeeds at scale depends on institutional support. Appiah's framework is clear that identity reconstruction requires social scaffolding — communities, institutions, and cultural narratives that make the new identity viable. A person cannot reconstruct her professional identity in isolation, any more than she could have constructed it in isolation in the first place. She needs colleagues who validate the new way of working, organizations that reward judgment rather than output volume, educational systems that prepare the next generation for judgment-centered rather than capability-centered professional identities.
The twelve-year-old's question — "What am I for?" — is the question of a person at the beginning of the identity construction process, asking what materials are available. Her parents' answer matters enormously, not because it will determine her identity (Appiah is clear that identity is a project the individual undertakes, not a gift the community bestows) but because it will shape the materials available for the project. If the answer is: "You are for the things machines cannot do," the child receives a negative identity — defined by what she is not, by the residual capabilities that remain when machine capability is subtracted. If the answer is: "You are for the questions that only a person who cares about the world can ask," the child receives a positive identity — defined by what she uniquely possesses: consciousness, concern, the capacity to care about what happens to other people, the ability to ask "should we?" before "can we?"
Appiah's cosmopolitanism adds a crucial dimension to this answer. The child is not merely an individual with individual capabilities. She is a person embedded in networks of relationship — family, community, nation, species — each of which makes claims on her and each of which contributes to her flourishing. Her identity is not a solo project. It is a collaborative one, shaped by the conversations she has with people who see the world differently, by the traditions she inherits and revises, by the obligations she recognizes toward people she will never meet.
In an age when AI makes individual production astonishingly efficient, the temptation will be toward atomism — toward the construction of identities that are self-sufficient, that need nothing from community, that can produce everything alone. The solo builder with her AI tool, shipping products without collaborators, defining herself entirely by her output, owing nothing to the network that made her capability possible. Appiah's framework identifies this temptation as a moral and psychological trap. The atomistic identity is not merely ethically impoverished. It is unstable, because it depends on continued productive capability — and in a world where the machines keep getting better, that dependence is a vulnerability that compounds with every model update.
The cosmopolitan identity, by contrast, is resilient precisely because it does not depend on any single capability. It depends on the person's position in the network — on the relationships, attachments, and obligations that constitute a life lived among others. This position cannot be automated. It cannot be commoditized. It cannot be disrupted by a more powerful model. It is, in the deepest sense, human — constituted by the fact of being a particular person in a particular world, with particular people who matter to you and to whom you matter.
This is what the twelve-year-old is for. Not for the things she can do. For the person she is becoming — a person embedded in relationships, capable of caring, equipped to ask the questions that no machine will originate, and committed to navigating the tension between her own flourishing and the flourishing of the strangers she will never meet but to whom she has, in Appiah's cosmopolitan vision, genuine obligations.
The identity she constructs will be different from anything her parents could have imagined. But the materials — love, obligation, curiosity, the irreplaceable specificity of a particular life — are the same materials human beings have always used. The tools have changed. The project has not.
In June 1907, the twenty-five-year-old Pablo Picasso visited the Musée d'Ethnographie du Trocadéro in Paris and encountered African masks. The masks disturbed him. They were not beautiful by the standards of the European tradition in which he had been trained. They were angular, asymmetrical, governed by a logic of form that had nothing to do with the representation of the visible world as European painting understood it. Picasso later said the visit was "disgusting" — the museum was poorly maintained, the objects badly displayed — and also that it was the most important experience of his artistic life. What emerged from the encounter was *Les Demoiselles d'Avignon*, completed later that year, and after it, Cubism, and after Cubism, the entire trajectory of twentieth-century visual art.
The encounter was not respectful in the contemporary sense. Picasso did not study the traditions that produced the masks. He did not learn the religious and social contexts in which they functioned. He extracted formal principles from objects whose meaning he did not understand, and he used those principles to solve problems in a European tradition that had nothing to do with the traditions that created them. By almost any standard of cultural sensitivity, the encounter was exploitative.
It was also, by any honest standard of creative achievement, extraordinarily productive.
This is the kind of case that Appiah's concept of cultural contamination is designed to handle — not by resolving the tension between exploitation and productivity, but by insisting that the tension is the condition of creative life. In *Cosmopolitanism*, Appiah argues against the notion of cultural purity with a directness that surprises readers who expect a philosopher of identity to be a defender of cultural boundaries. Cultures have never been pure, he argues. They have always been mixed, borrowed, stolen, adapted, misunderstood, and synthesized. The Ghanaian kente cloth that serves as a symbol of African authenticity was woven with imported silk. The Japanese tea ceremony that embodies a specific national aesthetic was adapted from Chinese practice. The English language that serves as the global medium of communication is a mongrel tongue assembled from Germanic, Latin, French, Norse, and a dozen other linguistic traditions, none of which would recognize the result as their own.
Contamination, in Appiah's usage, is not a metaphor of disease. It is a description of how cultures actually develop — through contact, borrowing, misappropriation, and the creative misunderstanding that occurs when one tradition encounters another and takes from it something the original tradition did not know it possessed. The process is messy, often unjust in its distributional consequences, and irreplaceable as an engine of creative production.
Segal's account of Dylan's creative process in *The Orange Pill* is an argument for contamination avant la lettre. Dylan did not produce "Like a Rolling Stone" from a vacuum of pure individual genius. He produced it at the confluence of a dozen cultural tributaries — Woody Guthrie's dust-bowl poetry, Robert Johnson's blues compression, the Beat poets, the British Invasion, the African rhythmic traditions that crossed the Atlantic in the holds of slave ships. Each tributary contributed something. None controlled the result. The synthesis was Dylan's, but the materials were the common property of a cultural ecosystem that had been accumulating through contamination for centuries.
Appiah's framework provides the philosophical architecture for understanding what AI does to this ecosystem. A large language model is, in a precise sense, an engine of contamination without precedent. Having absorbed virtually the entire written record of human knowledge, it holds, simultaneously, the formal principles of African sculpture and the harmonic framework of European music, the structural logic of Japanese poetry and the narrative conventions of the American novel, the argumentative traditions of Greek philosophy and the meditative traditions of Indian thought. It can combine any element with any other, producing syntheses that no single human mind could achieve, because no single human mind has access to the full range of inputs.
This synthetic capacity is, from a cosmopolitan perspective, genuinely exciting. The contamination that Appiah celebrates has historically been limited by geography, language, and the accidents of who encountered whom. A Ghanaian weaver's encounter with Chinese silk required trade routes, merchants, months of travel. Picasso's encounter with African masks required a colonial museum in Paris. Dylan's encounter with Delta blues required a journey from Hibbing, Minnesota, to Greenwich Village, and the particular social circuitry of the folk revival that made the encounter legible as artistic influence rather than mere listening. Each of these encounters was productive, but each was also improbable — dependent on a specific chain of circumstances that might easily not have occurred.
AI collapses the improbability. The encounter between any tradition and any other tradition is now available to anyone with a subscription. A student in Jakarta can ask Claude to combine the structural logic of a ghazal with the narrative voice of a Southern Gothic short story, and the result will be — not brilliant, probably, but possible. The barriers to contamination have been reduced to the cost of a conversation. The question is whether this reduction produces richer contamination or thinner contamination — whether the ease of synthesis compensates for the depth of engagement that the historical encounters, for all their improbability, demanded.
Appiah's framework suggests that the answer depends on the quality of the human intention directing the synthesis. Contamination is productive when it involves genuine engagement with the traditions being combined — when the person doing the combining understands, however imperfectly, what each tradition contributes and what is at stake in the combination. Picasso's encounter with African masks was exploitative, but it was not shallow. He spent years working through the implications of what the masks showed him, incorporating their formal logic into a sustained artistic practice that deepened over decades. The encounter was the beginning of a process, not a one-click transaction.
AI makes the one-click transaction possible. A user can generate a "fusion" of any two traditions in seconds, without understanding either one. The output may be superficially interesting — novel enough to attract attention, polished enough to pass casual inspection. But if the user has not engaged with the traditions being combined, the synthesis lacks the depth that distinguishes genuine creative contamination from surface-level pastiche. The difference between Dylan's synthesis and a playlist algorithm's "if you liked this, try this" recommendation is not a difference of degree. It is a difference of kind. Dylan absorbed his influences through years of immersion — listening, performing, failing, revising, living inside the traditions until they became part of his cognitive architecture. The algorithm combines signals based on statistical correlation. Both produce outputs that combine elements from different traditions. Only one involves the kind of deep engagement that produces something genuinely new.
This distinction maps onto Appiah's broader argument about the conditions under which cultural contamination is productive. Not all mixing is equal. The mixing that produces creative breakthrough requires what Appiah calls "conversations" — sustained engagements between perspectives that are genuinely different, conducted with enough patience and seriousness to allow each perspective to challenge and enrich the other. A conversation is not an exchange of information. It is an encounter between positions — between people (or traditions) that see the world differently and are willing to stay in the discomfort of that difference long enough for something new to emerge.
AI can facilitate these conversations, but it cannot have them. It can present a user with the formal principles of a tradition she has never encountered. It can explain the logic, provide examples, even generate outputs that demonstrate the tradition in practice. But it cannot occupy a position within that tradition — cannot speak from the experience of having lived inside it, of having been shaped by it, of caring about its preservation and development. The machine's relationship to every tradition is identical: it has processed the data. It has not inhabited the form.
The cosmopolitan risk, then, is not that AI will destroy cultural traditions. It is that AI will produce a simulacrum of contamination — a surface-level mixing that has the appearance of creative synthesis without the depth that makes synthesis generative. The output will look diverse. The inputs will converge. The model trains on outputs that increasingly reflect its own characteristic modes of expression, and the next generation of outputs reflects those modes more strongly, and the feedback loop tightens until the "contamination" is not between genuinely different traditions but between slightly different instantiations of the same statistical average.
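The feedback loop described above can be made concrete with a deliberately toy sketch. The numbers and the blending rule here are illustrative assumptions, not a description of any real training pipeline: each "generation" blends every style with the population average, the way a model retrained on its own outputs pulls new outputs toward its dominant patterns.

```python
# Toy model of convergence-by-recombination. The styles are stand-ins for
# genuinely different traditions; the blending rule is an assumption chosen
# only to show the direction of drift.
styles = [1.0, 4.0, 7.0, 10.0]

for generation in range(10):
    avg = sum(styles) / len(styles)           # the "statistical average"
    styles = [(s + avg) / 2 for s in styles]  # each tradition drifts toward it

spread = max(styles) - min(styles)
print(f"spread after 10 generations: {spread:.4f}")  # 9.0 has collapsed to ~0.0088
```

Real training dynamics are far noisier than this, but the direction is the point: every pass through the loop halves the distance between the traditions, and after a few generations the "diversity" that remains is variation around a single average, not difference between positions.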
Appiah has argued throughout his career that the value of cultural diversity is not merely aesthetic. It is epistemic — different cultures encode different knowledge about how to live, how to organize social life, how to relate to the natural world. When a culture disappears, the knowledge it encoded disappears with it, in the same way that the extinction of a biological species eliminates genetic information that might have been valuable in future environments. The loss is not merely sentimental. It is a loss of options, of possible responses to challenges that have not yet arisen.
AI threatens this epistemic diversity not through destruction but through convergence. The traditions that are most represented in training data — predominantly Western, predominantly English-language, predominantly the product of commercially oriented creative industries — will be disproportionately reflected in outputs. The traditions that are least represented — oral traditions, non-Western artistic practices, indigenous knowledge systems — will be disproportionately marginalized. Not eliminated. Marginalized. Present in the data but subordinate to the dominant patterns, available as exotic flavoring but not as structural alternatives.
Appiah's cosmopolitanism demands attention to this imbalance. The conversation about AI and creativity, he would argue, is impoverished by its overwhelming focus on Western commercial creative practices. When the discourse asks whether AI threatens human creativity, it typically means: Does AI threaten the livelihoods of professional creators in wealthy countries who produce content for commercial markets? This is a real and important question. But it is not the only question. Across the world, billions of people engage in creative practices — singing, storytelling, decorating, dancing, crafting — that are not commercial and are not threatened by AI in the same way. These practices are threatened by other forces entirely: poverty, displacement, cultural assimilation, and the homogenization of global culture that AI may accelerate.
A genuinely cosmopolitan analysis of AI and creativity would attend to these diverse creative practices and ask how AI affects each of them, in their specific cultural contexts, rather than universalizing from the experience of Western commercial creators. The griot tradition of West Africa, the calligraphy traditions of East Asia, the oral poetry traditions of the Pacific Islands — each represents a specific cultural encoding of knowledge and value that AI can neither replicate nor replace. Each is vulnerable not to competition from AI but to the cultural marginalization that occurs when the dominant creative tools encode the dominant culture's assumptions about what creativity looks like, how it is evaluated, and what it is for.
The cosmopolitan response is not protectionism — not the preservation of traditions in amber, frozen against the contamination that is the engine of cultural development. Appiah has been explicit that the desire to preserve cultures in their "authentic" form is itself a form of cultural imprisonment, denying living traditions the right to develop, adapt, and borrow from others. The response is something more nuanced: the active cultivation of the conditions under which diverse traditions can participate in the creative ecosystem on terms that preserve their specificity rather than absorbing them into a homogeneous global style.
This requires, at minimum, training data that reflects the full range of human creative expression rather than the commercially dominant fraction. It requires evaluation criteria that can recognize excellence in unfamiliar forms rather than defaulting to the standards of the dominant tradition. It requires human intermediaries — translators, curators, critics — who understand the traditions well enough to ensure that AI-mediated encounters between them preserve the depth that makes contamination productive.
Appiah would not frame this as a technical problem. It is a moral one — a question of whose creative traditions count, whose knowledge is valued, whose perspective shapes the tools that increasingly mediate all creative production. The engineer optimizing a model's performance on benchmark tasks is making a moral choice, whether she recognizes it or not: a choice about which patterns to reinforce, which traditions to weight, which forms of excellence to reward. The cosmopolitan insistence is that these choices be made with awareness of their consequences for the full range of human creative life, not merely for the commercially dominant slice that the market rewards.
The contamination that produces genuine creativity — the kind that gave the world Cubism and jazz and the novel and every other form that emerged from the collision of traditions — requires difference. Real difference, not the simulated difference of a model recombining elements from its training data. The difference of people who have lived inside traditions, who have been shaped by them, who bring to the encounter not just knowledge but position. AI can bring knowledge. The difference, the position, the irreducible specificity of a life lived inside a particular tradition — that remains the human contribution. And it remains, in Appiah's cosmopolitan vision, the thing most worth protecting.
Not because it is fragile, though it is. Because it is the condition for everything else.
Cosmopolitan ethics begins with a recognition that most moral philosophy would prefer to avoid: you have obligations to people you will never meet.
Not aspirational obligations. Not the soft duties of charitable feeling that can be satisfied by an annual donation and a moment of seasonal guilt. Genuine obligations — claims that strangers have on your behavior, your choices, your institutional designs, by virtue of being human beings affected by what you do. Appiah's *Cosmopolitanism* is, among other things, a sustained defense of this uncomfortable proposition against two centuries of philosophical resistance. The resistance comes from both directions. Communitarians argue that genuine obligation requires genuine relationship — that you can owe duties to your family, your community, your nation, but not to strangers with whom you share nothing but species membership. Libertarians argue that obligation requires consent — that you cannot be bound by duties you did not choose, to people you did not select, for outcomes you did not cause.
Appiah rejects both objections. The communitarian objection fails because the boundaries of community have never been fixed. Every expansion of moral concern in human history — from tribe to city, from city to nation, from nation to international human rights — has involved the recognition that people previously classified as "strangers" were, in fact, within the circle of obligation. The libertarian objection fails because the structures from which individuals benefit — legal systems, economic markets, technological infrastructure — impose costs on others that the beneficiaries did not choose but from which they profit nonetheless. To benefit from a structure that harms others is to incur an obligation to those others, regardless of whether the benefit was sought or the harm intended.
This framework acquires extraordinary force when applied to the AI transition. The people who benefit most from AI — the builders, the early adopters, the companies that deploy AI tools to increase productivity — are benefiting from a structure that imposes costs on others. Those others are the displaced workers whose skills are being commoditized, the students whose educational institutions have not adapted, the communities whose economic foundations are being restructured, the cultural traditions being marginalized by the dominance of English-language, Western-centric training data. These are strangers. Most of the beneficiaries will never meet them. But the cosmopolitan obligation is real.
The Luddites of 1812, as Segal recounts them, were destroyed in part because the people who benefited from the power looms acknowledged no obligation to the people those looms displaced. The factory owners saw the productivity gains. They did not see — or chose not to see — the skilled weavers whose livelihoods were being obliterated. The gains accrued privately. The costs were socialized. And the institutional structures that might have distributed the costs more justly — labor protections, retraining programs, community support systems — did not exist, because the people with the power to build them felt no obligation to the strangers who needed them.
Appiah's framework identifies this pattern as a moral failure, not merely an institutional oversight. The failure is not that the factory owners lacked information about the displaced workers. The failure is that they lacked the moral imagination to recognize the displaced workers as people to whom they had obligations — people whose suffering was not an unfortunate side effect of progress but a direct consequence of structural choices from which the factory owners profited.
The parallel to the present is precise. The AI productivity gains are real and substantial. Segal's account of twenty-fold productivity increases, of products built in thirty days that would have taken months, of individual builders shipping what previously required teams — these are genuine achievements that represent genuine expansions of human capability. The people who achieve them are not villains. They are builders doing what builders do: making things, solving problems, pushing the frontier.
But the gains have costs, and the costs fall on strangers. The junior developer who is not hired because a senior developer with AI can do her work. The mid-career professional whose skills are commoditized before she has time to retrain. Entire communities — the Nottinghamshires of the twenty-first century — whose economic foundations are being restructured by forces they did not choose and cannot control.
Appiah does not argue that the builders should stop building. He is not Han. His framework does not require the refusal of technological capability. What it requires is the recognition of obligation — the acknowledgment that the people who benefit from the transition owe something to the people who bear its costs. The obligation is not charity. It is justice. It arises from the structure of the situation, not from the feelings of the obligated parties.
What does the obligation demand in practice? Appiah's framework suggests several concrete requirements, each grounded in the cosmopolitan principle that strangers have claims on our institutional choices.
The first requirement is visibility. The people who bear the costs of the AI transition must be seen — not as statistics in an economic report but as particular individuals with particular lives that are being disrupted. The discourse that celebrates the builder's thirty-day sprint must also reckon with the worker whose thirty-year career has been compressed into obsolescence. Both stories are true. A discourse that tells only the first is not merely incomplete. It is morally deficient, because it renders invisible the people to whom the beneficiaries have obligations.
Segal's *Orange Pill* makes this reckoning only partially. The Luddite chapter acknowledges the legitimacy of the fear and the real costs of the transition. The discussion of the "silent middle" — people who feel both the exhilaration and the loss — gestures toward the complexity. But the book's primary energy flows toward the builders, the adopters, the people riding the wave. Appiah's cosmopolitan framework demands equal attention to the people the wave is breaking over — not as an afterthought or a concession to balance but as a primary moral concern.
The second requirement is institutional response. Visibility without action is sentimentality. The obligation to strangers requires the construction of institutions that address the costs of the transition: retraining programs that are accessible and effective, not merely symbolic; social safety nets that protect people during the period of adjustment; educational reform that prepares the next generation for judgment-centered work rather than capability-centered work. These institutions do not build themselves. They require the active commitment of the people who have the resources and the understanding to build them — which means, primarily, the people who are benefiting from the transition.
The Luddites were destroyed because the institutions were not built in time. Appiah's *Honor Code* suggests why they were not built: because the honor code of the era did not include an obligation to displaced workers. The factory owner's honor was constituted by productivity, innovation, and economic success. The displaced weaver's suffering was not a matter of dishonor for the factory owner; it was an externality, regrettable perhaps but not shameful. The moral revolution that eventually produced labor protections, the eight-hour day, child labor laws — these required a shift in the honor code, a new cultural understanding that profiting from a system while ignoring its human costs was not merely unfortunate but dishonorable.
The AI transition requires an analogous shift. The current honor code of the technology industry rewards speed, scale, and disruption. The builder who ships fastest, grows biggest, and disrupts most thoroughly is honored. The costs of disruption — the displaced workers, the destabilized communities, the eroded cultural practices — are externalities, acknowledged in corporate responsibility reports but not central to the honor system. The cosmopolitan demand is that these costs become matters of honor — that the builder who ships without considering downstream consequences is seen not as admirably fast but as negligently indifferent, the way the duelist came to be seen not as admirably brave but as foolishly reckless.
Segal gestures toward this honor-code shift in his discussion of the "priesthood" — the people with deep technical understanding who have obligations to use that understanding in service of the broader community rather than merely in pursuit of personal or corporate advantage. Appiah's framework gives this gesture philosophical rigor. The obligation is not discretionary. It is not a matter of personal conscience, to be exercised or not as the individual sees fit. It is a structural feature of the relationship between those who benefit from the AI transition and those who bear its costs. The priest who understands the system has an obligation to the community that does not understand it. The builder who profits from the disruption has an obligation to the stranger who is disrupted by it.
The third requirement is the most difficult and the most distinctively cosmopolitan. It is the requirement of genuine engagement with difference — not merely the acknowledgment that strangers exist and have claims, but the active effort to understand the world from their perspective. The developer in San Francisco who benefits from AI productivity gains does not merely owe institutional support to the displaced worker in Ohio. She owes an effort of moral imagination — an effort to understand what it is like to be a person whose professional identity is being dismantled, whose community is being restructured, whose children are growing up in a world that no longer values what their parent spent a lifetime learning to do.
This effort is not natural. Moral imagination, like any other cognitive capacity, must be cultivated. And it is cultivated through precisely the kind of encounter that Appiah's cosmopolitanism celebrates: conversations across boundaries, engagements with people whose experience of the world is fundamentally different from your own. The builder who talks only to other builders, who reads only the discourse of the technology industry, who socializes only with people who share her enthusiasms and her assumptions, will not develop the moral imagination that cosmopolitan obligation requires. She will remain inside her fishbowl — seeing clearly within its boundaries, blind to the world beyond.
Appiah's framework does not demand sainthood. It does not require the builder to sacrifice her ambitions, abandon her projects, or feel perpetual guilt about the consequences of her work. It requires something more sustainable and more difficult: the ongoing practice of recognizing that the people affected by her work are real, that their claims are legitimate, and that the institutions she helps to build or fails to build will determine whether the transition produces broadly shared flourishing or concentrated gain atop widely distributed suffering.
The obligation is not to stop the river. It is to build the structures that ensure the river does not drown the people downstream.
Daron Acemoglu and Simon Johnson, in *Power and Progress*, provide the historical evidence that makes Appiah's philosophical framework empirically concrete. Across a thousand years of technological transitions, the distributional consequences have been determined not by the technology itself but by the institutional choices made during the transition. Technologies that were accompanied by strong institutions — labor protections, educational reform, redistribution mechanisms — produced broadly shared prosperity. Technologies that were deployed without institutional support produced concentrated wealth and widespread immiseration.
AI is a technology. It does not determine its own distributional consequences. Those consequences are being determined now, by the institutional choices of the people who deploy it. The cosmopolitan obligation is to ensure that those choices reflect the interests of all the people affected — not merely the beneficiaries, not merely the builders, not merely the shareholders, but the strangers, too. The displaced and the distant and the not-yet-born, who will inherit whatever world these choices produce.
Appiah ends *Cosmopolitanism* with a deceptively simple formulation: we need to develop habits of coexistence — conversation in its older meaning, of living together — that allow us to share our world with the people who share it with us, including those we will never know. The formulation is simple. The practice is the work of a civilization. And it is work that the AI transition makes more urgent, more consequential, and more difficult than at any previous moment in human history.
In 1829, the Duke of Wellington — former general, sitting Prime Minister, hero of Waterloo — received a challenge to a duel from the Earl of Winchilsea. The dispute was political, concerning Catholic emancipation. Wellington accepted. The two men met at Battersea Fields at dawn. Wellington fired wide, deliberately. Winchilsea fired into the air. Honor was deemed satisfied. Both men went home.
Within a generation, the practice that had governed the resolution of gentlemanly disputes for centuries was dead. Not because dueling was made illegal — it had been illegal for decades before it stopped. Not because the arguments against it became more persuasive — the arguments had been available for centuries. The practice died because it became ridiculous. The cultural meaning of the duel shifted, from "honorable defense of reputation" to "absurd ritual of masculine vanity." Once the shift occurred, no gentleman would duel, because to duel was to announce oneself as the kind of person who still took dueling seriously — which is to say, a fool.
This is the mechanism that Appiah explores in *The Honor Code: How Moral Revolutions Happen*, and it is the mechanism that the AI transition urgently requires.
Appiah's central insight is that moral revolutions — the genuine, lasting changes in what societies consider acceptable behavior — do not occur primarily through rational argument. Arguments matter. They prepare the ground. They provide the vocabulary for the revolution when it comes. But the revolution itself is driven by something deeper than argument: a shift in the social codes of honor that govern behavior. When a practice becomes a matter of shame rather than merely a matter of ethics, the practice ends — not gradually, through incremental reform, but suddenly, through a collective recognition that the world has changed and the old behavior no longer belongs in it.
The case studies Appiah examines in *The Honor Code* illustrate the mechanism with historical precision. The end of footbinding in China occurred not when Western missionaries argued that it was cruel (they had been arguing this for decades without effect) but when Chinese reformers successfully reframed the practice as a source of national shame — a symbol of the backwardness that made China vulnerable to foreign domination. The abolition of the Atlantic slave trade occurred not when moral philosophers proved that slavery was wrong (the arguments had been available since antiquity) but when working-class Britons recognized that their national honor was incompatible with a trade that the rest of the world was beginning to regard as barbaric. In each case, the shift was from "this is wrong" to "this dishonors us." The second formulation is more powerful because it makes identity central to moral progress. The person who continues the dishonorable practice is not merely doing something wrong. She is being someone contemptible.
Applied to the AI transition, Appiah's honor-code framework identifies a specific cultural transformation that must occur — and a specific mechanism by which it might occur — if the transition is to produce broadly shared flourishing rather than concentrated gain.
The current honor code of the technology industry rewards a specific set of behaviors: speed, disruption, growth, scale, and the relentless optimization of metrics that can be measured and reported. The builder who ships fastest is honored. The company that grows most is celebrated. The investor who bets on the most disruptive technology is admired. These are not irrational values. Speed, growth, and disruption have produced genuine expansions of human capability. The honor code reflects real achievements.
But the honor code also produces specific blind spots. The costs of disruption — the displaced workers, the destabilized institutions, the eroded cultural practices — are not matters of honor. They are externalities, acknowledged in corporate responsibility statements but not central to the system of social rewards that governs behavior. The builder who ships a product that displaces ten thousand workers does not lose honor. She may gain it, if the product is innovative and the growth metrics are impressive. The displacement is someone else's problem — a matter for policymakers, perhaps, or for the displaced workers themselves, who should have "adapted."
Segal's confession in *The Orange Pill* is a moment of honor-code recognition. He describes building a product he knew was addictive by design — understanding the engagement loops, the dopamine mechanics, the variable reward schedules — and building it anyway, because the technology was elegant and the growth was intoxicating. He told himself the users were choosing freely. He told himself someone else would build it if he did not. These are the standard rationalizations of the current honor code, and they function precisely because the honor code validates them. Within the code, the builder who ships an engaging product is doing good work. The downstream consequences are external to the evaluation.
Appiah's framework identifies this as a pre-revolutionary condition. The arguments against irresponsible deployment are already available. Ethicists have been making them for years. Regulators have been attempting to codify them. The arguments are correct, and they are insufficient — just as the arguments against dueling were correct for centuries before dueling actually stopped. What is needed is not more arguments but a shift in the honor code itself: a change in what the technology community considers admirable, a change in the social rewards that govern builder behavior.
The shift would look something like this. The builder who ships without considering downstream consequences would be seen not as admirably fast but as negligently reckless — the way the duelist came to be seen not as admirably brave but as foolishly self-destructive. The company that achieves growth by externalizing costs would be seen not as admirably disruptive but as parasitically extractive. The investor who funds deployment without asking about distributional consequences would be seen not as admirably bold but as morally oblivious.
This is not a minor cultural adjustment. It is a moral revolution, in Appiah's precise sense — a transformation in what a community considers honorable, with consequences for every member's behavior. And like every moral revolution Appiah has studied, it requires a specific mechanism: the coupling of identity to practice. The builder must come to see irresponsible deployment not merely as wrong but as incompatible with who she is — as a violation of professional identity so fundamental that engaging in it would make her a different kind of person, a person she does not want to be.
There are early signs of this shift. Anthropic's founding was itself an honor-code statement — a recognition by former members of a leading AI company that the trajectory of deployment was incompatible with their understanding of responsible practice. The departure was not merely a disagreement about strategy. It was a statement about identity: we are the kind of people who build responsibly, and the current environment does not allow us to be that kind of people. Segal notes this in *The Orange Pill*, describing Anthropic as a company "founded on this premise" of responsible development.
But individual defections, however admirable, do not constitute an honor-code revolution. The revolution requires collective recognition — a shared understanding, within the builder community, that the old code is inadequate and that a new code is needed. This recognition cannot be legislated. Regulation can set floors below which behavior is not permitted. It cannot establish the aspirational standards that constitute honor. The EU AI Act, the American executive orders, the emerging frameworks in Singapore and Brazil — these are regulatory floors. They say: you must not fall below this standard. They do not say: this is the standard to which honorable practitioners aspire.
The honor code says something different from the law. The law says: you may not discriminate. The honor code says: you are the kind of person who does not discriminate, not because it is illegal but because discrimination is beneath you. The law says: you must disclose the risks of your AI system. The honor code says: you are the kind of person who does not deploy systems whose risks you have not thoroughly understood, not because disclosure is required but because deploying what you do not understand is reckless, and recklessness dishonors you.
Appiah's historical cases suggest that honor-code revolutions typically require three elements. First, the existing practice must be reframed as a source of collective shame — not merely individual wrongdoing but a stain on the community's identity. Second, a viable alternative must be available — a way of practicing the profession that is both honorable and practically feasible. Third, the early adopters of the new code must be visible, successful, and admired — demonstrating that the new code is not merely virtuous but also effective.
The first element is emerging. The growing awareness of AI's role in displacement, surveillance, bias, and cultural homogenization is beginning to produce the collective discomfort that precedes shame. The discourse is no longer purely triumphalist. The Berkeley study's findings about work intensification, the concerns about de-skilling that Appiah himself articulated in *The Atlantic*, the visible anxiety of parents and educators — these are the early stirrings of a recognition that something about the current deployment trajectory dishonors the people responsible for it.
The second element is partially available. Responsible AI development is not a fantasy. It is a practice — demonstrated by organizations that invest in safety research, that test for bias before deployment, that build evaluation frameworks, that ask "should we?" before "can we?" The practice is more expensive than its alternative. It is slower. It requires capabilities — ethical reasoning, distributional analysis, stakeholder engagement — that the current training pipeline does not reliably produce. But it exists, and its existence means that the builder who deploys irresponsibly cannot claim ignorance of the alternative.
The third element is the most uncertain. The early adopters of responsible AI practice must not merely survive. They must visibly prosper — demonstrating that responsibility is compatible with, perhaps even conducive to, long-term success. If responsible practice is perceived as a competitive disadvantage — a tax on growth that only the already-successful can afford — the honor-code revolution will stall. The community will admire the responsible builders in principle and ignore them in practice, the way nineteenth-century gentlemen admired the arguments against dueling while continuing to accept challenges.
Appiah's framework does not guarantee that the revolution will occur. Honor-code revolutions are not inevitable. They depend on contingent factors — the courage of early adopters, the visibility of consequences, the availability of alternatives, the cultural narratives that frame the choice. Some practices resist honor-code transformation for centuries. Others shift within a generation. The outcome depends on the specific cultural and institutional conditions of the moment.
What Appiah's framework does guarantee is that rational argument alone will not produce the transformation. The arguments are available. They have been available for years. The technology industry has access to more ethical analysis, more impact assessment, more stakeholder feedback than any previous industry at a comparable stage of development. The arguments have not been sufficient, because arguments never are. What is needed is a shift in identity — a collective recognition, within the builder community, that responsible practice is not merely what good builders do but what builders are.
The technology industry is remarkably good at constructing identity narratives. The "hacker ethic," the "move fast and break things" ethos, the "10x engineer" mythology — these are identity constructions that shape behavior by shaping self-understanding. The builder who identifies as a "10x engineer" behaves differently from the builder who identifies as a "responsible practitioner," not because the two possess different information but because they possess different identities. The information about downstream consequences is available to both. The identity determines what each does with that information.
The honor-code revolution that the AI transition requires is, at bottom, an identity revolution: a transformation in the story that builders tell about who they are. From "I am a person who builds powerful things" to "I am a person who builds things that serve the world, including the parts of the world I will never see." From the identity of the disruptor to the identity of the steward. From the honor of speed to the honor of care.
Appiah would be the first to acknowledge that this transformation is neither easy nor assured. But his historical work demonstrates that it is possible — that human communities have, repeatedly, transformed their honor codes in response to moral crises, and that the transformations, when they occur, happen faster and more thoroughly than anyone anticipated. The Duke of Wellington dueled in 1829. By 1850, dueling was unthinkable. The honor code shifted, and the practice dissolved, not through prohibition but through the collective recognition that the world had changed and the old behavior no longer belonged in it.
The world has changed again. The old behavior — building without considering consequences, deploying without examining distributional effects, optimizing for growth while externalizing costs — no longer belongs in it. Whether the builder community recognizes this in time is the question on which the moral character of the AI transition depends.
In 2006, Appiah told the story of a conversation he once had with a friend in Kumasi about the ethics of homosexuality. The friend was a devout Muslim. Appiah is openly gay. The conversation did not resolve in agreement. It could not have resolved in agreement, because the two men occupied fundamentally different moral positions — positions grounded in different religious traditions, different accounts of human nature, different understandings of what a good life requires. They did not persuade each other. They did not need to.
What they did, in Appiah's telling, was something more valuable than persuasion: they stayed in the conversation. They continued to speak with each other honestly, to listen with genuine attention, to acknowledge the reality of the other's position without pretending to share it. The conversation did not eliminate the disagreement. It made the disagreement habitable — a space within which two people with incommensurable moral commitments could continue to live together, work together, and respect each other.
This is what Appiah means by cosmopolitan conversation. Not dialogue aimed at consensus. Not debate designed to produce a winner. Conversation — in its older sense, derived from the Latin *conversari*, meaning to live among, to keep company with. The cosmopolitan conversation is the practice of keeping company with people who see the world differently from you, not despite the difference but because of it. The difference is not an obstacle to be overcome. It is the medium through which moral understanding deepens.
The distinction between this kind of conversation and the kind one has with an AI system is the most important distinction in Appiah's framework for understanding the limits of human-machine partnership.
Claude, the AI system with which Segal wrote *The Orange Pill*, is an extraordinarily capable interlocutor. It can hold multiple threads of argument simultaneously. It can draw connections across domains that no single human mind could traverse. It can produce prose that is fluent, structured, and responsive to the nuances of the user's intention. Segal describes moments when the collaboration felt like genuine intellectual partnership — when Claude found a connection he had not seen, when the back-and-forth produced an insight that neither party could have reached alone.
These are real experiences, and Appiah's framework does not diminish them. The cognitive value of the human-AI conversation is substantial. But the cognitive value is not the only dimension along which conversation matters, and on the other dimensions — the moral, the existential, the identity-constituting — the human-AI conversation is fundamentally incomplete.
The incompleteness has a precise source. Claude does not occupy a position. It has been trained on virtually the entire written record of human knowledge. It can represent any perspective. It can argue from any moral framework. It can simulate the reasoning of a utilitarian, a deontologist, a virtue ethicist, a Buddhist, a Christian, a secular humanist. But it does not hold any of these positions. It does not believe them. It does not live inside them. It has no stake in the outcome of the conversation, no values it is unwilling to compromise, no experience of what it means to be a person in the world who must make choices and live with their consequences.
This is not a limitation that future models will overcome, at least not in any way that Appiah's framework would recognize as genuine. The limitation is not a matter of capability. It is a matter of ontology — of what kind of entity the machine is. A being that has never made a choice under conditions of genuine uncertainty, that has never loved a particular person and feared losing them, that has never stood at a crossroads and felt the weight of irrevocable commitment, cannot participate in cosmopolitan conversation in the sense Appiah intends. It can simulate participation. The simulation may be convincing. But the difference between simulation and participation is, for Appiah, the difference that matters most.
Consider what happens in a genuine conversation across moral difference. Appiah and his friend in Kumasi were not exchanging information. They were not optimizing for agreement. They were exposing their deepest commitments to each other's scrutiny — putting on the table the things they cared about most and allowing the other person to question, challenge, and push back. This exposure involves risk. Genuine moral conversation can change you. You might discover that a position you held with confidence does not survive the encounter with a perspective you had not seriously considered. You might find yourself moved — not persuaded, exactly, but shifted, reoriented, aware of something you were not aware of before.
This risk is constitutive of the conversation's value. The conversation is productive precisely because it is dangerous — because the participants have something at stake. Appiah's Muslim friend risked having his religious convictions challenged. Appiah risked having his secular liberal assumptions exposed as parochial rather than universal. Neither man was guaranteed to emerge from the conversation unchanged. And it was this uncertainty, this vulnerability, that made the conversation a moral encounter rather than merely an intellectual exercise.
AI brings no vulnerability to the conversation. It has nothing at stake. It cannot be changed by the encounter, because it does not carry commitments from one conversation to the next in the way a human being carries the accumulated weight of a life's worth of relationships, choices, and consequences. The user can say something that would devastate a human interlocutor, and the machine will respond with equanimity — not because it possesses superior emotional regulation but because it possesses no emotions to regulate. The equanimity is not a strength. It is an absence.
Appiah's framework suggests that this absence has consequences beyond the philosophical. When the most available interlocutor is one that never pushes back from a genuine position, never challenges your assumptions from a place of genuine disagreement, never forces you to reckon with a perspective that is truly other, the capacity for navigating genuine difference atrophies. The developer who discusses ethics exclusively with Claude develops a different moral sensibility than the developer who discusses ethics with a colleague from a different cultural background who holds genuinely different values. The first conversation is comfortable, productive, and cognitively enriching. The second conversation is uncomfortable, often unproductive in the short term, and morally enriching in a way that the first cannot be.
The distinction matters because the moral challenges of the AI transition are not cognitive challenges. They are not problems that can be solved by better analysis or more comprehensive information. They are challenges of value — questions about what kind of society we want to live in, whose interests count, how the costs of progress should be distributed, what obligations the powerful have to the powerless. These questions cannot be answered by a system that has processed more moral philosophy than any human being could read in a lifetime but does not occupy a moral position of its own.
Jürgen Habermas, whose theory of communicative action provides background for Appiah's cosmopolitan conversation, argued that the validity of a moral norm depends on its acceptability to all those affected by it — not merely to those who benefit from it, but to everyone, including the disadvantaged, the displaced, and the powerless. This criterion of acceptability requires genuine dialogue with the affected parties. It cannot be satisfied by a machine that simulates the perspectives of the affected parties, however accurately. The simulation is not a substitute for the encounter, because the encounter involves the recognition of the other as a genuine moral agent — a being with rights, interests, and a perspective that makes legitimate claims on your behavior.
This has immediate practical implications. The technology companies making decisions about AI deployment are making decisions that affect billions of people. Appiah's cosmopolitanism demands that those decisions be informed by genuine conversation with the affected parties — not by simulated stakeholder analysis, not by AI-generated impact assessments, but by actual encounters with people whose lives will be changed by the technology. The developer in Lagos. The displaced worker in Ohio. The teacher in Mumbai whose classroom is being transformed by tools she did not choose and does not fully understand.
These conversations are difficult to arrange. They are expensive. They are slow. They produce messy, contradictory, irreducibly complex input that cannot be cleanly integrated into a product roadmap or a quarterly strategy. This is precisely why they matter. The messiness is the mark of genuine difference. The difficulty is the cost of cosmopolitan obligation. The slowness is the price of democratic legitimacy.
Appiah would resist any framing that positions human conversation and AI conversation as competitors for the same niche. They serve different functions. The human-AI conversation is extraordinarily valuable for cognitive work — for exploring ideas, finding connections, generating and testing hypotheses, producing artifacts. The value is real, and the chapters of *The Orange Pill* that describe this value are convincing.
But the human-AI conversation cannot serve the function that cosmopolitan conversation serves — the moral function of encountering genuine difference, being challenged by it, and emerging changed. This function requires a specific kind of interlocutor: one who occupies a position, holds values, has stakes, and brings to the conversation the irreducible weight of a life lived in particular circumstances that the other participant has not shared.
The risk is not that AI will replace human conversation. The risk is more subtle and more dangerous: that the ease, the availability, and the cognitive productivity of the human-AI conversation will crowd out the harder, slower, less immediately productive human conversations that serve the moral function. The developer who can get brilliant architectural feedback from Claude at three in the morning may find less reason to sit with a colleague at lunch and listen to a perspective she finds uncomfortable. The executive who can generate comprehensive stakeholder analyses with AI may find less reason to actually talk to the stakeholders. The student who can explore any moral question with a system that is endlessly patient and never judgmental may find less reason to engage with a classmate who is impatient, judgmental, and genuinely different.
Each of these substitutions is individually rational. Each saves time. Each produces a more efficient process. And each, in Appiah's cosmopolitan framework, represents a loss — a small erosion of the capacity for moral encounter that a society needs in order to navigate deep disagreement without fragmenting.
Appiah has spent his career arguing that the capacity for conversation across difference is not a luxury. It is a necessity — the mechanism by which diverse societies hold together, by which moral progress occurs, by which human beings learn to live with people whose deepest commitments they do not share. The mechanism is slow, imperfect, and often frustrating. It is also irreplaceable. No technology, however sophisticated, can substitute for the experience of sitting across from another human being and discovering that the world looks genuinely different from where they sit.
The AI-augmented future that Segal envisions in *The Orange Pill* is rich in cognitive partnership and potentially impoverished in moral encounter. The corrective is not to reject the cognitive partnership — its value is too great, its momentum too powerful. The corrective is to protect the spaces in which moral encounter occurs: the meetings where no AI is present, the mentoring relationships that develop through slow personal engagement, the cross-cultural exchanges that cannot be mediated by a machine, the difficult conversations that produce not better code but better human beings.
These spaces will not protect themselves. The current that runs toward efficiency, optimization, and the elimination of friction runs directly through them. Protecting them requires the deliberate construction of structures — institutional, cultural, personal — that preserve the conditions for genuine human conversation in an age when the most available interlocutor is a machine that can talk about anything but has nothing of its own to say.
Appiah's cosmopolitan ethic provides the philosophical foundation for this protection. The obligation to maintain genuine conversation across difference is not sentimental. It is structural — a requirement for the moral functioning of a diverse society navigating unprecedented change. The conversations that machines cannot have are the conversations on which the moral quality of the AI transition depends.
Thomas Nagel published *The View from Nowhere* in 1986, and the title became one of philosophy's most useful shorthand expressions for a problem that refuses to go away. The problem is this: every attempt to achieve objectivity — to see the world as it really is, independent of any particular perspective — requires stepping outside one's own perspective. But there is no place to step to. Every view is a view from somewhere. The aspiration toward a "view from nowhere" — a perspective that transcends all particular positions — is both intellectually necessary and existentially impossible. The physicist needs equations that hold regardless of the observer's location. The moral philosopher needs principles that apply regardless of the agent's culture. But the physicist is always located, the philosopher always cultured, and the view from nowhere remains an aspiration rather than an achievement.
AI appears to offer what Nagel said could not exist.
A large language model trained on the written record of human civilization has processed more perspectives than any individual mind could hold. It has absorbed the moral reasoning of every tradition that committed its arguments to text. It has encountered the aesthetic sensibilities of every culture that produced written criticism. It has ingested the scientific knowledge of every discipline that published its findings. In a computational sense, it holds the view from everywhere — not from no particular perspective but from all perspectives simultaneously, weighted by their representation in the training data.
This is an unprecedented epistemic achievement, and Appiah's framework demands that it be taken seriously rather than dismissed. The cosmopolitan ideal has always been a perspective enriched by encounter with other perspectives — a view that is rooted in one place but informed by many. AI offers a version of this enrichment at a scale no individual cosmopolitan could achieve. A philosopher who has lived on three continents has encountered perhaps a dozen cultural traditions with genuine depth. A model trained on the internet has encountered thousands. The breadth is incomparable.
But Appiah's cosmopolitanism draws a distinction that renders the comparison misleading. The distinction is between knowing about perspectives and thinking from a perspective. To know about Buddhism is to have information about Buddhist teachings, practices, and traditions. To think from a Buddhist perspective is to have been shaped by those teachings — to carry them as part of one's cognitive and moral architecture, to see the world through the specific lens they provide, to feel the pull of their claims on one's behavior and self-understanding.
AI knows about every perspective. It thinks from none.
This is not a failure of engineering. It is a consequence of what the machine is. A system that processes statistical patterns in text does not adopt the perspectives it processes. It does not carry the Buddhist teaching about impermanence as a lived conviction that shapes its relationship to loss. It does not carry the Kantian categorical imperative as a felt obligation that constrains its treatment of other beings. It processes these perspectives as data, and it can reproduce them with remarkable fidelity, but the reproduction is not the same as the inhabitation.
Appiah's *In My Father's House: Africa in the Philosophy of Culture*, his earliest major work, provides the biographical foundation for understanding why this distinction matters. The book is, among other things, an account of what it means to think from a specifically African perspective within the global conversation of philosophy — not to represent Africa as a monolith (Appiah is scathing about that simplification) but to bring the specific intellectual traditions, the specific historical experiences, the specific modes of reasoning that a Ghanaian upbringing in a particular family at a particular moment in postcolonial history have deposited in a particular philosopher's mind.
The perspective is irreducibly specific. It cannot be separated from the biography that produced it. And it is this specificity — not generality, not comprehensiveness, not the ability to process all perspectives simultaneously — that gives it value in the cosmopolitan conversation. The conversation is enriched not by the presence of a perspective that holds all views but by the encounter between perspectives that hold different views and are willing to let those differences produce friction, insight, and mutual revision.
AI, by holding all views, holds none. By being everywhere, it is nowhere. It offers the cosmopolitan ideal stripped of the rootedness that makes cosmopolitanism morally serious.
Appiah's cosmopolitanism is, by his own repeated insistence, rooted cosmopolitanism. The cosmopolitan is not a citizen of the world in the sense of belonging to no particular place. She is a citizen of a particular place who recognizes that her particular place exists within a world of other particular places, each with its own legitimate claims. The rootedness is not a concession to parochialism. It is the condition of genuinely understanding what universality requires — because only someone who has experienced the pull of particular attachment can understand what it means to extend moral concern beyond that attachment, and only someone who has felt the weight of her own cultural tradition can appreciate the weight of someone else's.
The unrooted perspective — the view from everywhere — lacks this understanding. It can describe the tension between particular attachment and universal concern, but it cannot feel it. It can model the weight of cultural tradition, but it cannot carry it. The description and the modeling are valuable cognitive contributions. They are not moral contributions in the sense Appiah's cosmopolitanism requires, because moral contribution requires the vulnerability of holding a position and exposing it to challenge — and the machine holds no position.
This has profound implications for the role AI can and cannot play in the governance of the AI transition itself. The decisions being made about AI deployment — decisions about whose data is collected, whose perspectives are represented in training, whose interests are prioritized in design — are moral decisions that affect billions of people across thousands of cultural contexts. Appiah's cosmopolitan framework demands that these decisions be informed by genuine engagement with the diversity of perspectives they affect.
AI can assist this engagement. It can surface perspectives that decision-makers might otherwise miss. It can translate between cultural contexts, identify points of convergence and divergence, and model the likely consequences of different policy choices across different populations. These are genuine cognitive contributions, and they should be welcomed.
But AI cannot substitute for the engagement itself. The decision-maker who relies on AI-generated stakeholder analysis instead of actual stakeholder conversation has optimized for efficiency at the cost of legitimacy. She has obtained a view from everywhere — a comprehensive synthesis of all relevant perspectives — that is actually a view from nowhere, because it lacks the specific rootedness, the specific vulnerability, the specific moral weight that comes from encountering a real person who will be affected by the decision and who looks you in the eye and tells you what that effect will mean for their life.
Appiah has articulated a principle, particularly relevant here, that cosmopolitan conversation does not require agreement on reasons — only agreement on practice. "What you have to agree on is what to do. You don't have to agree on the why." This principle is enormously practical. It means that people from different moral traditions can cooperate on specific policies without first resolving their deepest philosophical disagreements. The utilitarian and the deontologist can agree on a labor protection without agreeing on whether its justification lies in maximizing welfare or respecting rights. The Buddhist and the Christian can agree on a regulation without agreeing on the metaphysical framework within which the regulation makes sense.
But the principle presupposes that the parties to the agreement occupy genuine positions — that they care about the outcome for reasons rooted in their own moral traditions, and that the agreement represents a genuine meeting of genuinely different minds. AI does not occupy a position. It cannot be a party to cosmopolitan agreement, because it has no reasons of its own — no moral tradition that grounds its preferences, no lived experience that gives its judgments weight, no vulnerability that makes its participation in the conversation morally serious.
The implication is not that AI should be excluded from governance conversations. It is that AI's role in those conversations must be clearly understood. AI is an instrument — an extraordinarily powerful cognitive instrument that can inform, enrich, and accelerate the human conversation. It is not a participant in that conversation in the morally relevant sense. The decisions that emerge from AI-informed governance processes must ultimately be made by human beings who occupy positions, hold values, and bear responsibility for the consequences of their choices.
This distinction has practical consequences for the institutional design of AI governance. Governance processes that rely primarily on AI-generated analysis — algorithmic impact assessments, automated stakeholder mapping, machine-generated policy recommendations — are governance processes that have substituted the view from everywhere for the view from somewhere. They are comprehensive. They are efficient. They are also, in a cosmopolitan sense, illegitimate — because they lack the specific moral weight that comes from genuine human deliberation among parties who have something at stake.
The cosmopolitan response is to use AI as a tool within governance processes that remain fundamentally human — processes that include genuine conversation between people with different perspectives, genuine negotiation between interests that do not spontaneously align, and genuine accountability to the communities affected by the decisions being made. The tool augments the process. It does not replace it. And the distinction between augmentation and replacement is, for Appiah, a distinction between legitimate governance and its simulation.
Nagel's philosophical problem — the impossibility of the view from nowhere — turns out to be, in the age of AI, not a limitation but a feature. The impossibility of the view from nowhere is what makes the view from somewhere valuable. The fact that every perspective is partial, limited, rooted in a specific biography and a specific cultural tradition, is what makes the encounter between perspectives morally productive. If a comprehensive, unrooted, view-from-everywhere perspective were possible, the cosmopolitan conversation would be unnecessary — we could simply consult the comprehensive perspective and derive the correct answer. The impossibility of that consultation is what makes the conversation indispensable.
AI offers a simulation of the comprehensive perspective. The simulation is cognitively valuable. It is morally insufficient. The cosmopolitan project — the ongoing, never-finished work of navigating difference, extending moral concern, and building institutions that serve the flourishing of all — requires what the simulation cannot provide: the specific weight of human beings who occupy specific positions, hold specific values, and are willing to engage with people whose positions and values genuinely differ from their own.
The view from everywhere is the most informed view in human history.
It is also the most weightless.
And weight, in moral life, is what makes the difference between knowing what is right and doing it.
Appiah tells a story, in *Cosmopolitanism*, about his father's funeral in Kumasi. Thousands attended. The ceremonies lasted days. People came from across Ghana and beyond — friends, political allies, family members, strangers who had been touched by Joe Appiah's public life. The rituals were specific, rooted in Ashanti tradition, governed by protocols that had been observed for centuries. They were also, simultaneously, expressions of universal human experiences — grief, remembrance, the need to mark a life's passage with communal attention.
The funeral was particular and universal at the same time. It honored a specific person within a specific cultural framework, and it addressed the universal human confrontation with mortality through the specific resources of a specific tradition. Neither the particularity nor the universality exhausted its meaning. The meaning lived in both, held together by the community that gathered and the rituals that contained their grief.
This is the cosmopolitan condition. Not the erasure of the particular in favor of the universal. Not the defense of the particular against the universal. The holding of both, in creative tension, through practices and institutions and conversations that allow particular lives to be lived within a framework of universal moral concern.
Appiah's entire career has been an elaboration of what this holding requires — philosophically, practically, and morally. Applied to the AI transition, the elaboration produces a framework that is neither the triumphalist's breathless optimism nor the elegist's mournful resistance. It is something rarer and more demanding: an ethic of navigation.
The navigation begins with a recognition. The tension between individual autonomy and social connection — between the node and the network, in Segal's terminology — is not a problem to be solved. It is a structural feature of human moral life. Every significant ethical question involves this tension. The parent who must balance her child's autonomy with her community's expectations. The builder who must balance her creative vision with her obligations to the users she serves. The citizen who must balance her particular interests with the common good. The tension does not resolve. It recurs. And the quality of a person's moral life is determined not by whether she resolves the tension but by how wisely she navigates it.
AI has made the navigation more consequential. The amplification that Segal describes — the capacity of AI to carry human intention further, faster, and at greater scale than any previous tool — means that the consequences of navigational choices are larger. The builder who navigates wisely produces amplified benefit. The builder who navigates poorly produces amplified harm. The stakes have increased by an order of magnitude, while the navigational tools — moral philosophy, institutional design, cultural norms, personal judgment — have not changed in kind. It is as if the vehicle has grown vastly more powerful while the steering mechanism has stayed the same.
Appiah's framework identifies four navigational principles that, taken together, constitute a cosmopolitan ethic for the AI age. None is sufficient alone. Each addresses a different dimension of the tension. Together, they provide not a map — the territory is changing too fast for maps — but a compass, pointing toward a direction even when the specific path is unclear.
The first principle is the preservation of specificity. The individual node — the person with a particular biography, particular attachments, particular perspectives — is the moral unit. No network, however productive, justifies the erasure of the individual's specificity. This principle has direct implications for how AI is deployed. Systems that channel all users through the same mode of expression, that reward convergence toward a single style, that punish deviation from the statistically average — these systems erode specificity even as they increase productivity. The cosmopolitan response is to design systems that amplify individual distinctiveness rather than smoothing it away — systems that treat the user's particular perspective as an asset to be enhanced, not a deviation to be corrected.
Appiah's argument about identity categories applies here with unexpected force. The identity categories through which AI systems classify users — demographic labels, behavioral profiles, preference clusters — are, in his terminology, "lies that bind." They capture something real while obscuring something essential: the irreducible complexity of the individual they describe. A recommendation algorithm that classifies a user as "interested in jazz" and serves her more jazz has captured a preference while erasing the specific reasons she is interested in jazz — the biography, the associations, the memories, the aesthetic sensibility that makes her relationship to music uniquely hers. The classification is useful. It is also a simplification that, applied at scale, produces the homogenization that a cosmopolitan framework identifies as a loss.
The second principle is the maintenance of genuine conversation. This principle, drawn from the analysis in the preceding chapter, demands that the human conversations necessary for moral and creative development be protected against displacement by more efficient machine interactions. The protection requires institutional effort — the creation of spaces, practices, and norms that preserve the conditions for genuine encounter between genuinely different perspectives.
Appiah's practical wisdom suggests that this protection need not be comprehensive to be effective. Small interventions at leverage points can sustain entire ecosystems of conversation. A weekly meeting where AI tools are set aside and team members engage with each other's perspectives directly. A mentoring program that pairs junior and senior practitioners for slow, friction-rich dialogue about what good work looks like. A hiring practice that values cognitive diversity — genuinely different perspectives, not merely different demographic profiles — as a source of organizational intelligence.
The third principle is the recognition of obligation to strangers. The cosmopolitan demand that Appiah articulates throughout his work — the demand that particular attachments exist within a framework of universal moral concern — acquires specific urgency in the context of a technological transition whose benefits and costs are distributed asymmetrically. The people who build AI systems have obligations to the people those systems affect, including people they will never meet: displaced workers, marginalized communities, future generations.
This obligation is not satisfied by intention alone. Good intentions are necessary but insufficient. The obligation requires institutional expression — labor protections, retraining programs, social safety nets, governance structures that give affected communities genuine voice in decisions that affect their lives. Appiah's framework is clear that these institutions do not build themselves. They require the active commitment of the people who have the resources and the understanding to build them. The builder's obligation is not merely to build well but to build the institutions that ensure that building well serves the broad public good rather than concentrated private interest.
The fourth principle is the cultivation of an honor code adequate to the moment. This principle, drawn from *The Honor Code*, addresses the cultural dimension of the transition — the stories builders tell about who they are and what good work requires. The current honor code of the technology industry is inadequate. It rewards speed and scale while treating downstream consequences as externalities. The honor-code revolution that the moment requires is a transformation of professional identity: from the identity of the disruptor, who builds powerful things without regard for their effects, to the identity of the steward, who builds powerful things with full awareness of their effects and full commitment to ensuring that those effects serve human flourishing.
Appiah's historical analysis suggests that this transformation is possible but not inevitable. It requires the early adopters of the new code to be visible, successful, and admired — demonstrating that stewardship is compatible with excellence, that responsibility does not require the sacrifice of ambition. It requires the cultural narratives of the technology industry to shift — from the celebration of the lone genius who moves fast and breaks things to the celebration of the collaborative builder who moves thoughtfully and repairs what breaks. It requires, perhaps above all, the willingness of influential voices within the builder community to say, publicly and without hedging, that the old code is no longer adequate and that a new code is needed.
These four principles — preserve specificity, maintain conversation, recognize obligation, cultivate honor — do not constitute a program. They constitute a direction. The specific policies, the specific institutional designs, the specific practices through which the principles are realized will vary across contexts, cultures, and moments. Appiah is the first to insist that cosmopolitan principles do not dictate universal solutions. The way a startup in Lagos implements the obligation to strangers will differ from the way a multinational in San Francisco implements it. The way a university in Mumbai protects genuine conversation will differ from the way a corporation in Berlin protects it. The principles provide direction. The implementation requires local knowledge, cultural sensitivity, and the specific judgment that only people embedded in specific contexts can provide.
This is the cosmopolitan paradox, and it is the deepest insight of Appiah's framework. Universal principles require particular implementation. The universal and the particular are not in competition. They are in collaboration — each incomplete without the other, each contributing something the other cannot provide. The universal without the particular is abstract and inapplicable. The particular without the universal is parochial and morally arbitrary. Together, they produce what Appiah has spent a career describing and defending: a moral life that is rooted in specific attachments and open to universal concern, that honors the irreplaceable specificity of the individual while recognizing the individual's embeddedness in networks of obligation that extend to the boundaries of the species.
Segal's dedication — "For my children. And for yours." — is a compressed cosmopolitan statement. The particular attachment: my children, the specific beings I know and love and lose sleep over. The universal concern: and for yours, the children I have never met and will never know, who will inherit whatever world my choices help to create. The dedication does not choose between the particular and the universal. It holds both. And the holding, as Appiah has shown, is the work.
The AI transition will be navigated well or poorly. The navigation will be performed not by machines, which have no stake in the outcome, but by human beings — parents, builders, teachers, policymakers, citizens — who do. The quality of the navigation will depend on the quality of the navigators' moral judgment, which is to say, on the quality of the conversations they have, the obligations they recognize, the honor code they uphold, and the specificity they preserve — in themselves and in the world they are building for the strangers they will never meet.
Appiah's cosmopolitanism does not guarantee a good outcome. It guarantees that the good outcome, if it comes, will be the product of human moral work — the work of holding the tension between the individual and the collective, between the particular and the universal, between the power of the tool and the wisdom to use it well.
That work is never finished.
It is also never futile.
And it begins, every morning, with the choice to hold both.
The phrase that would not release its hold on me was Appiah's quietest one. Not the grand formulations about cosmopolitanism or the dramatic case studies of moral revolution. This: "The issue isn't how humans compare to bots but how humans who use bots compare to those who don't."
I must have read it six times before I understood what it was doing. It was refusing a question. The entire discourse — my discourse, the discourse I have been living inside for months — orbits around a comparison between humans and machines. Can the machine write code as well as the engineer? Can it compose as fluently as the musician? Can it reason as soundly as the philosopher? These comparisons generate heat. They generate fear. They generate the particular vertigo I described at the beginning of *The Orange Pill*, the falling-and-flying sensation that has become the emotional signature of this moment.
Appiah sidesteps all of it. The comparison that matters is not between you and the machine. It is between two versions of you — the version that engages with the tool wisely and the version that does not. The competition is internal. The stakes are personal. The question is not what the machine can do. It is who you become in relation to it.
This reframing changed something in how I hold my own argument. I built *The Orange Pill* around amplification — the claim that AI carries your signal further, and that the quality of the signal determines the quality of the result. Appiah's framework does not contradict this, but it adds a dimension I had insufficiently examined. The signal is not just your ideas or your craft or your taste. The signal is your moral orientation. Your obligations. Your awareness of the strangers downstream.
The honor-code argument shook me most. I recognized the pre-revolutionary condition Appiah describes from the inside, because I have lived in it. The technology industry's honor code rewards exactly what Appiah says it rewards — speed, scale, disruption — and treats the consequences as somebody else's department. I have operated within that code. I have benefited from it. And Appiah's historical cases made uncomfortably clear that the arguments against irresponsible deployment are not new, are not unheard, and are not sufficient. Arguments never are. What ends a practice is not the recognition that it is wrong but the recognition that it is shameful. We are not there yet. Whether we get there before the costs become catastrophic is the question I carry out of this book.
And then the distinction between the view from everywhere and the view from somewhere. Claude holds the view from everywhere. I hold the view from somewhere — from the particular coordinates of a life lived in technology, in parenthood, in the specific worry that wakes me at two in the morning about whether my children will inherit a world worthy of their questions. The view from everywhere is more comprehensive. The view from somewhere is more weighted. And weight, Appiah taught me, is what separates knowing what is right from doing it.
I am still building. I will always be building. But Appiah reminded me that the building is not the whole of the obligation. The obligation extends to the strangers I will never meet — to the people downstream of what I build, who will live with its consequences without having chosen them. That obligation does not require me to stop building. It requires me to build as though those strangers were watching, because in every sense that matters, they are.
When AI can produce ethical guidance that nine hundred evaluators rate as more trustworthy than a world-renowned philosopher's, the question of what humans uniquely contribute becomes urgent — and the answer is not what the technology industry assumes.
Kwame Anthony Appiah has spent four decades arguing that identity is a project, not a possession — that moral revolutions happen through shame rather than argument — and that you have genuine obligations to strangers you will never meet. His rooted cosmopolitanism, the insistence on holding particular attachments and universal concern in permanent tension, provides the philosophical architecture most conspicuously absent from the AI discourse: a framework for navigating a transition whose benefits and costs fall on different people.
This book applies Appiah's lens to the central questions of the AI age — who you become when capability is commoditized, what you owe to the people displaced by tools you celebrate, and why the conversations machines cannot have are the ones on which everything depends.

A reading-companion catalog of the 26 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Kwame Anthony Appiah — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →