Compassion, in Nussbaum's precise philosophical sense, is not soft sentiment but a structured cognitive evaluation requiring three judgments: that the suffering is serious, that the person did not bring it on herself, and that the observer could conceivably share the person's situation. Each condition is substantive; each can fail. The displaced expert satisfies all three: her suffering is serious (disruption of professional identity, economic vulnerability, loss of a form of life); it was not caused by any fault of her own (it was the product of luck); and it is a situation any knowledge worker could come to share. Yet compassion for the displaced is not the dominant emotion in the AI discourse, which is itself, on this framework, a moral and political failure.
The three-part structure distinguishes compassion from pity (which often lacks the shared-vulnerability condition) and from mere sympathy (which often lacks the seriousness judgment). It also identifies the specific cognitive moves through which compassion can be blocked: if the suffering is judged trivial, if the person is judged to have caused her own suffering, or if the observer judges herself immune, compassion does not arise.
The technology discourse exhibits all three failure modes regarding the displaced. The suffering is minimized ("they can just retrain"). The displacement is reframed as fault ("they failed to adapt"). The observer is positioned as exempt ("the displaced are those who bet wrong"). Each move is a cognitive operation that blocks the emotion the situation warrants, and each has institutional consequences, because institutions are built by the emotions a culture cultivates.
Nussbaum's framework connects compassion directly to justice. A political culture that cultivates compassion builds institutions responsive to undeserved suffering; a culture that cultivates contempt builds institutions that serve only the winners. The dominant emotional register of the technology discourse — closer to what Nussbaum would identify as contemptuous triumph — is both morally repugnant and politically dangerous, because it undermines the emotional conditions necessary for constructing just institutions to address the transition.
The framework's normative conclusion is direct: the AI transition calls for compassion, not as charity but as the cognitive recognition of genuine undeserved suffering produced by morally arbitrary contingency — and for the institutional response that compassion motivates when it is allowed to operate as the cognitive evaluation it actually is.
The framework draws on Aristotle's Rhetoric, which offers the classical analysis of compassion's cognitive structure, and on Rousseau's treatment of pitié as a natural human capacity. Nussbaum developed the framework most fully in Upheavals of Thought (2001) and extended its political implications in Political Emotions (2013).
Its application to the technology discourse emerges through the recognition that the three conditions of compassion can be systematically blocked by rhetorical moves common to triumphalist rhetoric — and that the blocking has institutional consequences for how the transition is navigated.
Three cognitive conditions. Compassion requires judgments of seriousness, non-fault, and shared vulnerability — each can fail independently.
Compassion as achievement. Not a natural welling-up but a cognitive achievement that can be developed or blocked by cultural practice.
Institutional consequence. The emotions a culture cultivates shape the institutions it builds — compassion-cultivating cultures build different institutions than contempt-cultivating ones.
Technology discourse failure. The dominant register of AI discourse blocks each condition of compassion for the displaced — a moral failure with institutional consequences.
Political urgency. Institutions of luck-mitigation cannot be built or sustained without the compassion that motivates their construction.