The Wharton experiment that rated GPT-4's ethical advice as more trustworthy than Appiah's own raises a question his framework is uniquely equipped to answer: what does the philosopher possess that the machine does not? The answer begins with a distinction so fundamental that missing it makes the experimental results unintelligible — the distinction between the output and the position from which the output is produced. GPT-4 can do what Appiah does. GPT-4 cannot be what Appiah is. The machine cannot occupy the position of a Ghanaian-British philosopher who has lived on three continents, lost a parent, raised a family, and accumulated the specific, unrepeatable experience of being himself in the world for seven decades. The node is real because the position is real.
There is a parallel reading that begins not with the philosopher's irreplaceability but with the infrastructure that made the philosopher possible. Appiah's position — the Ghanaian-British philosopher who has lived on three continents — is not self-generating. It required specific material conditions: colonial educational systems that selected certain subjects for advancement, prestigious universities that credentialed their outputs, media platforms that amplified particular voices, and economic arrangements that freed certain individuals from subsistence labor to pursue abstract thought. The "node" is real, but the node was constructed.
The Wharton evaluators who preferred GPT-4's responses were not confused about ontology — they were responding to an emergent fact about knowledge production. The machine aggregated the ethical reasoning of millions of texts, including Appiah's own published work, and synthesized patterns that individual humans recognize as valuable. The evaluators did not mistake the machine for a person. They recognized that the question "what should I do?" can be answered effectively without requiring the answerer to have "lived" in Appiah's biographical sense. The framework that centers irreplaceable experience assumes that moral authority flows from positional specificity, but this assumption is itself historically contingent — a product of Enlightenment individualism that the AI transition may be rendering optional. What unsettles is not that the machine lacks a position, but that position-having may not be load-bearing for the functions we assigned to it.
The Wharton School experiment led by Christian Terwiesch in 2023 took moral dilemmas of the kind Appiah addresses in his New York Times Magazine ethics column and presented them to GPT-4. Evaluators rated the machine's responses as more moral, more trustworthy, and more thoughtful than the philosopher's. The implications unsettled the discourse: if a machine can produce ethical guidance indistinguishable from that of one of the world's most accomplished moral philosophers, what does the philosopher possess?
Appiah's The Ethics of Identity provides the rigorous account of what the philosopher possesses. The individual possesses a value not conferred by social arrangement and not revocable by it. This is the moral foundation of human rights: the recognition that each person has inherent dignity — a specificity, an irreplaceability, a perspective that no other person and no combination of persons can replicate. The dignity does not reside in what the person can do. It resides in what the person is — a being with a particular history, particular attachments, particular stakes in the world.
This is not a semantic distinction. It is the distinction between a parrot that can pronounce the word fire and a person who smells smoke. The knowledge is embodied. It is biographically specific. Cognitive scientist Gary Marcus, responding to the studies, articulated the objection precisely: "It seems to me wrongheaded to assume that the average judgment of crowd workers casually evaluating a situation is somehow more reliable than Appiah's judgment." The crowd workers assessed the product. They could not assess the process by which the product was generated.
The broader crisis the framework illuminates is this: the outputs of individual intelligence are being reproduced at scale by machines that do not possess individual intelligence. Appiah's response is not to deny the capability of the machines but to insist that the value of the individual does not depend on the individual's monopoly over a particular output. What changes is the locus of value. It migrates from the output to the judgment that directed it.
The concept is developed most fully in The Ethics of Identity (2005) and The Lies That Bind: Rethinking Identity (2018). The AI-era application emerged through Appiah's 2025 Atlantic essay and the cultural reaction to the Wharton and UNC Chapel Hill studies that tested GPT-4 against his ethics column responses.
Capability versus identity. The machine can replicate what the individual does. It cannot occupy what the individual is. The distinction is ontological, not technical.
Value migrates to judgment. When AI commoditizes output, individual value does not disappear — it relocates to the capacity to evaluate, discriminate, and choose.
Dignity is not contingent on market position. The node's worth is not identical with economic productivity. The market's valuation is not the final arbiter of human worth.
The human who uses the tool wisely. The comparison that matters is not between human and machine but between two versions of the same person — the one who engages wisely and the one who does not.
The framework faces the practical objection that markets pay for output, not for judgment, and that the migration of value from output to judgment may be philosophically true yet economically ignored. Appiah acknowledges this — his moral foundation says the displaced worker's value has not diminished, but that foundation cannot by itself produce the institutions that would translate inherent dignity into livelihood.
The right weighting depends on which question we are answering. For the question "can this advice help me navigate a moral dilemma?" — the Wharton evaluators were 80% correct to prefer GPT-4. The machine's aggregated synthesis of ethical reasoning patterns produces practically useful guidance, and the individual's biographical specificity contributes little to that narrow function. Appiah is fully right (100%) that the machine cannot occupy his position, but the evaluators were not asking it to — they were asking whether the output served their need.
For the question "what grounds human dignity when markets ignore it?" — Appiah's framework is 100% correct and load-bearing. The inherent worth of the displaced worker does not evaporate when AI replicates their output. But for the question "what institutions translate inherent dignity into livelihood?" — the framework is 0% sufficient. Philosophical foundations do not generate redistributive mechanisms, and the migration of value from output to judgment offers no path for the worker whose judgment the market does not purchase.
The synthetic frame the territory requires is functional stratification of authority. Different questions require different epistemic sources. For immediate guidance, aggregated pattern-matching suffices (machine advantage). For accountability when guidance fails, positional specificity matters (human requirement). For moral community, being-with-others is irreplaceable (human exclusive). The node is real, but its reality is differentially relevant depending on what we are asking the node to do. The crisis is not that machines can produce outputs — it is that we have not yet built the institutions that recognize which human capacities remain categorically required.