Putting cruelty first is Judith Shklar's signature contribution to political philosophy — the argument, developed across Ordinary Vices and The Liberalism of Fear, that cruelty is not merely one vice among others but the vice that destroys the conditions under which every other wrong can be addressed. A person subjected to systematic cruelty cannot resist, cannot organize, cannot articulate what is being done to her. Cruelty forecloses political agency. This priority is not abstract: it derives from Shklar's childhood flight from Riga one step ahead of annihilation, and from four decades of studying how political orders fail the people at their margins. The AI transition produces cruelty not dramatically but structurally — through institutional arrangements that inflict suffering without any individual intending it, through the amplified indifference of builders whose environment prevents the moral imagination from operating.
The priority operates as a methodological commitment before it becomes a policy prescription. Shklar argued that political theories which begin with visions of the good society systematically defer the urgent question — who suffers now, and what institutions could have prevented it? — in favor of the aspirational one. The liberalism of fear reverses this priority. It begins at the bottom. It begins with the question every refugee learns to ask before any other: what is the worst that can happen, and what structures prevent it?
Applied to AI, the priority reveals what the technology discourse systematically conceals. The focus on capability — what AI can build, what it enables, what futures it opens — defers the question of what AI is already inflicting. The Berkeley study documented measurable cruelty: workers filling every cognitive gap with AI-assisted activity, lunch breaks colonized by prompts, the specific grief of senior engineers watching decades of expertise lose market value. None of this was dramatic. Much of it presented itself as opportunity. Han's diagnosis of the achievement subject captures the condition in which the victim and the agent are the same person — a novel political problem that the two-position architecture of classical cruelty analysis cannot accommodate without extension.
The priority generates institutional demands that the discourse of responsibility systematically evades. If cruelty is first, then the prevention of cruelty is not one policy consideration weighed against others but the precondition of legitimate political order. A political system that produces avoidable suffering at scale cannot offset that suffering through aggregate gains elsewhere. The distribution problem is not a peripheral concern. It is the test. The developer in Lagos, the displaced expert, the worker whose task seepage has eliminated the cognitive gaps in which rest was possible — these are not footnotes to the story of AI progress. They are the signal the priority demands be heard first.
The move Shklar spent her career exposing — the reclassification of avoidable suffering as misfortune rather than injustice — operates throughout the AI discourse with particular efficiency. The technology arrived. The market shifted. Skills lost their value. The process is classified as natural, beyond remedy, requiring compassion rather than structural change. Putting cruelty first refuses this classification. It insists that the distinction between misfortune and injustice is itself a political act, performed by the powerful in their own interest, and that the suffering of the displaced is injustice until proven otherwise by the institutional record.
The concept emerged from Shklar's 1982 essay "Putting Cruelty First" (later a chapter in Ordinary Vices, 1984) and reached its mature form in the 1989 essay "The Liberalism of Fear." Its deepest source is biographical: Shklar fled Riga as a child through Siberia, Japan, and Canada. The specific education that exile provides — the knowledge that political orders which feel stable from inside can be catastrophically unstable for those at their margins — shaped every subsequent theoretical commitment.
Cruelty forecloses agency. Unlike other vices, cruelty destroys the victim's capacity to function as a political actor, making its prevention the precondition of addressing every other wrong.
Structural cruelty is the dominant form. The cruelties that matter most in modern political orders operate through institutional arrangements rather than through the malice of identifiable persons.
The amplifier transmits whatever it receives. AI's indifference to the signal it carries means that cruelty by default becomes the predictable output of builders operating in environments that prevent the moral imagination from working.
Diffusion of cause dissolves accountability. The causal chains of AI deployment run through so many actors that responsibility for specific harms cannot be assigned — the oldest trick in the repertoire of political cruelty.
The priority is methodological before it is substantive. Putting cruelty first organizes analysis from the perspective of the person who suffers rather than the perspective of the builder, the investor, or the aggregate.
The strongest challenge to the priority comes from perfectionist and communitarian theorists who argue that a politics built around preventing the worst cannot cultivate the civic virtue required to sustain itself — that the liberalism of fear is too thin to ground genuine political community. Shklar's response was characteristically uncompromising: political orders that aspire to the good without securing against the worst reliably produce the worst. The AI transition tests this claim with unusual directness, because the aspirational discourse around beneficial AI is precisely the discourse the framework identifies as evasive.