Thomas Nagel's 1986 book The View from Nowhere identified a problem that refuses to go away: every attempt to achieve objectivity requires stepping outside one's own perspective, but there is no place to step to. Every view is a view from somewhere.

AI appears to offer what Nagel said could not exist. A large language model has processed more perspectives than any individual mind could hold. In a computational sense, it holds the view from everywhere: not from no particular perspective but from all perspectives simultaneously, weighted by their representation in training data. Appiah's framework shows why this apparent breakthrough misleads. It turns on the distinction between knowing about perspectives and thinking from a perspective. AI knows about every perspective. It thinks from none. The unrooted perspective, the view from everywhere, lacks the moral weight that comes from holding a position. By being everywhere, it is nowhere.
The cosmopolitan ideal has always been a perspective enriched by encounter with other perspectives — a view rooted in one place but informed by many. AI offers a version of this enrichment at unprecedented scale. A philosopher who has lived on three continents has encountered perhaps a dozen cultural traditions with genuine depth. A model trained on the internet has encountered thousands. The breadth is incomparable.
Appiah's distinction is between knowing about perspectives and thinking from a perspective. To know about Buddhism is to have information about Buddhist teachings. To think from a Buddhist perspective is to have been shaped by those teachings — to carry them as part of one's cognitive and moral architecture, to feel their claims on one's behavior. AI processes Buddhist perspectives as data. It can reproduce them with remarkable fidelity. It does not inhabit them.
This has profound implications for AI governance. Decisions about AI deployment affect billions of people across thousands of cultural contexts. Appiah's cosmopolitanism demands that these decisions be informed by genuine engagement with the diversity of perspectives they affect. AI can assist this engagement by surfacing perspectives, translating between contexts, and modeling likely consequences. But AI cannot substitute for the engagement. The decision-maker who relies on AI-generated stakeholder analysis instead of actual stakeholder conversation has optimized for efficiency at the cost of legitimacy.
Appiah's principle that cosmopolitan conversation does not require agreement on reasons — only agreement on practice — presupposes that the parties occupy genuine positions. AI does not occupy a position. It cannot be a party to cosmopolitan agreement because it has no reasons of its own — no moral tradition that grounds its preferences, no lived experience that gives its judgments weight, no vulnerability that makes its participation morally serious.
The framework synthesizes Nagel's analysis in The View from Nowhere (1986) with Appiah's rooted cosmopolitanism. The impossibility Nagel identified becomes, in the AI age, not a limitation but a feature — the specificity of the situated view is what makes it morally productive.
Knowing about vs. thinking from. AI holds information about every perspective. It inhabits none. The distinction is ontological, not a matter of training data coverage.
The view from everywhere is weightless. Comprehensiveness does not produce moral authority. Moral authority comes from specificity — from holding a position and being accountable to it.
Governance requires position-holders. Legitimate governance cannot be produced by AI-generated stakeholder analysis. It requires genuine deliberation among parties with stakes in the outcome.
Impossibility is generative. Nagel's view from nowhere is unachievable. That impossibility is what makes the view from somewhere valuable. If a comprehensive unrooted view were available, the cosmopolitan conversation would be unnecessary.
Defenders of AI governance tools argue that AI's comprehensive perspective-processing complements rather than replaces human deliberation, serving as a cognitive aid within legitimately human decision processes. Appiah's framework accepts this formulation while insisting that the aid must not become the source of judgment: human deliberation must remain primary, and institutions must be designed to prevent the comfortable displacement of humans by their more efficient AI substitutes.