Judith Shklar — On AI
Contents
Cover
Foreword
About
Chapter 1: Putting Cruelty First in the Age of the Amplifier
Chapter 2: Cruelty by Default — The Builder's Indifference
Chapter 3: The Fishbowl of the Powerful
Chapter 4: Fear as Political Data
Chapter 5: The Luddite's Fear Was Legitimate
Chapter 6: The Achievement Subject as Victim and Agent
Chapter 7: Institutional Failure and the Dam Deficit
Chapter 8: The Developer in Lagos — Inclusion and Its Risks
Chapter 9: The Smooth as a Form of Political Domination
Chapter 10: Toward a Politics of Cruelty Prevention in the Age of AI
Epilogue
Back Cover
Cover

Judith Shklar

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Judith Shklar. It is an attempt by Opus 4.6 to simulate Judith Shklar's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The suffering I kept misclassifying was my own team's.

Not dramatically. Not in any way that would show up in an exit interview or a Glassdoor review. The kind of suffering that hides inside opportunity. The kind where the person bearing it describes it as excitement, because the vocabulary available to them — the vocabulary I helped create, the culture I helped build — has no other word for what they are feeling.

I wrote in *The Orange Pill* about the engineer in Trivandrum who spent two days oscillating between excitement and terror. I celebrated the resolution: he found his footing, discovered that the twenty percent of his work that was judgment and taste was the part that mattered most. I framed it as a growth story. And it was. But Judith Shklar forced me to ask a question I had not asked: What would have happened if the resolution had gone the other way? What institutional structure existed to catch him if the terror won? What mechanism gave him a voice in the decision about whether the twenty-fold multiplier would expand his role or eliminate it?

The answer was: me. I was the mechanism. My judgment, my values, my quarterly math. One person, operating inside a competitive environment that structurally rewards headcount reduction, standing between twenty engineers and an arithmetic that could erase them. That is not an institution. That is a bet on individual character, and Shklar spent forty years explaining why bets on individual character are the thinnest possible foundation for preventing the worst.

Shklar was a political theorist who fled Latvia as a child, one step ahead of the forces that would have killed her. She spent her career studying not the spectacular evils — not the dictators and the death camps, though she understood those intimately — but the quieter cruelties. The institutional indifference. The suffering that gets reclassified as the cost of doing business. The gap between what a political order promises and what it actually protects.

She called her approach the liberalism of fear. Not fear as weakness. Fear as data. The fear of the displaced worker, the anxious parent, the senior engineer moving to the woods — these are not failures of character. They are readings of institutional adequacy. And when the readings come back negative, the obligation falls not on the fearful to be braver but on the institutions to be better.

This lens matters now because the AI revolution is producing exactly the kind of suffering Shklar trained her eye on: avoidable suffering, institutional suffering, suffering that presents itself as opportunity and therefore generates no political demand for its own remedy. The amplifier does not choose what it carries. The institutions we build around it do. Shklar insists we build them before the worst arrives, not after.

That insistence is uncomfortable. It should be.

Edo Segal · Opus 4.6

About Judith Shklar

1928–1992

Judith Shklar (1928–1992) was a Latvian-born American political theorist who spent most of her career at Harvard University, where she became the first woman to hold a tenured position in the Government Department. Born Judith Nisse in Riga, she fled with her family through Siberia, Japan, and Canada to escape the Nazi and Soviet occupations, an experience that profoundly shaped her political thought. Her major works include *After Utopia: The Decline of Political Faith* (1957), *Legalism: Law, Morals, and Political Trials* (1964), *Ordinary Vices* (1984), *The Faces of Injustice* (1990), and the essay "The Liberalism of Fear" (1989), widely regarded as one of the most important statements of liberal political philosophy in the twentieth century. Shklar's signature contributions include the principle of "putting cruelty first" — the argument that cruelty is the worst thing we do and that political orders must be judged primarily by their capacity to prevent it — and the distinction between misfortune and injustice, which exposed how the powerful reclassify avoidable suffering as natural and inevitable to escape institutional obligation. Her work has experienced a significant revival in the twenty-first century, with scholars and political theorists finding her frameworks newly urgent in an era of rising authoritarianism, institutional erosion, and technological disruption.

Chapter 1: Putting Cruelty First in the Age of the Amplifier

Judith Shklar spent four decades studying the worst things human beings do to one another. Not the spectacular evils — not genocide or total war, though she understood those intimately, having fled Riga as a child one step ahead of annihilation. The evils that consumed her intellectual life were quieter. The systematic humiliation of the powerless by the powerful. The casual indifference of institutions to the suffering they produce. The political arrangements that allow some people to inflict pain on others without consequence, without awareness, sometimes without even intending to. She called this putting cruelty first — the insistence that among all the vices a political order might exhibit, cruelty occupies a singular position. Not because it is the most common, though it may be. Because it is the vice that destroys the victim's capacity to function as a political agent. A person subjected to systematic cruelty cannot resist. Cannot organize. Cannot articulate what is being done to them. Cruelty forecloses the possibility of addressing every other wrong.

The liberalism of fear follows from this priority. Shklar's political philosophy does not begin where most political theories begin — with a vision of the good society, a theory of justice, an account of rights and obligations that would characterize the ideal political arrangement. It begins at the bottom. It begins with the question every refugee learns to ask before any other: What is the worst that can happen, and what structures prevent it?

This starting point produces a political theory of unusual modesty and unusual force. Modest because it does not promise flourishing, does not claim to know what the good life looks like, does not offer a blueprint for utopia. Forceful because it asks the one question that utopian theories tend to defer: Who suffers, and what institutions could have prevented it?

The arrival of artificial intelligence, as documented in Edo Segal's *The Orange Pill*, does not change the obligation that the liberalism of fear imposes. It amplifies it. The amplification is not metaphorical. It is the central fact of the technological moment.

Segal's core metaphor describes AI as an amplifier — a tool that carries whatever signal it receives, indifferent to the quality of that signal. Feed it care, and care reaches further than any previous tool in human history could carry it. Feed it carelessness, and carelessness scales to a degree that pre-AI carelessness could never achieve. The amplifier does not choose. The amplifier does not judge. The amplifier transmits.

From the perspective of the liberalism of fear, this indifference is the most politically significant feature of the technology. Not its capability. Not its speed. Not its capacity to generate code or compose music or draft legal briefs. Its indifference. Because indifference, in Shklar's framework, is how the worst political outcomes typically arrive.

The popular imagination fixes on dramatic cruelty — the dictator, the torturer, the commander who orders the bombardment. Shklar understood these figures. She had fled from their consequences. But her mature political theory focused on something more pervasive and more difficult to name: the cruelty that operates through systems rather than through persons. The cruelty of institutional arrangements that produce suffering as a byproduct rather than a purpose. The cruelty of a legal system that grinds slowly enough that justice delayed becomes justice denied. The cruelty of an economic order that treats the displacement of millions as an externality rather than a cost to be accounted for.

This is cruelty without a face. No individual intended it. No individual can be held accountable for it. The suffering is real — documented, measurable, borne by specific human beings in specific communities — but the causal chain between decision and consequence is long enough, and diffuse enough, that responsibility dissolves before it can be assigned.

AI makes this dynamic faster, larger, and harder to trace. When a technology company deploys a system that intensifies work without the workers' meaningful consent — the pattern the Berkeley researchers documented, in which AI tools colonize lunch breaks and waiting rooms and the cognitive gaps that previously served as rest — the cruelty is not dramatic. No one is being tortured. No one is being imprisoned. The workers are, by every external measure, choosing to use the tools. The choice is voluntary in the narrowest legal sense.

But Shklar's framework asks a question that voluntariness alone cannot answer: What are the conditions under which the choice is made? If the institutional environment rewards intensity and punishes rest — if the worker who uses AI to fill every cognitive gap is promoted while the worker who preserves those gaps is labeled unproductive — then the "choice" to use the tool is structurally coerced. The coercion is invisible. The coercion presents itself as opportunity. And the suffering it produces — the flat affect, the eroded empathy, the chronic fatigue that the Berkeley study measured — is classified not as the consequence of institutional failure but as the personal failing of someone who could not "handle the pace."

This classification is itself a political act. It is the reclassification of injustice as misfortune — the move that Shklar spent her career exposing as the most reliable instrument of political domination. When avoidable suffering is reclassified as the natural cost of progress, the obligation to prevent it disappears. The suffering continues. But it continues without generating the political demand for structural change that it would generate if it were correctly identified as injustice.

The amplifier accelerates this reclassification by making the causal chain between decision and consequence even longer and more diffuse. When a single developer using Claude Code produces work that previously required a team of twenty, the nineteen people whose labor is no longer needed experience real suffering — economic precarity, identity crisis, the specific grief of watching hard-won expertise lose its market value. But who inflicted this suffering? Not the developer who used the tool. Not the company that deployed it. Not Anthropic, which built it. Not the investors who funded it. The causal chain runs through so many actors, so many decisions, so many institutional arrangements, that no single point of accountability exists.

Shklar would recognize this as the oldest trick in the book of political cruelty. Diffuse the cause. Dissolve the accountability. Let the suffering persist while everyone involved points to someone else as the responsible party. The technology company points to the market. The market points to consumer demand. The consumer points to the company. The circle closes without anyone inside it bearing responsibility, and the people outside it — the displaced, the exhausted, the fearful — bear the full cost.

The political question posed by AI is therefore not the utopian one. Not "How do we use AI to build the good society?" That question, however well-intentioned, defers the urgent question in favor of the aspirational one. It asks what the technology might achieve before asking what the technology is already inflicting.

The liberalism of fear reverses the priority. It asks first: What cruelties does AI enable? What suffering is it producing right now, in real workplaces, in real families, in real communities? And what institutions — what specific, concrete, enforceable institutional structures — could prevent that suffering?

The cruelties are already documented. *The Orange Pill* documents them with an honesty that is unusual in technology writing precisely because the author is complicit in the systems he describes. The intensification of work. The colonization of rest. The devaluation of expertise without transitional support. The concentration of productivity gains among those who already possess capital and capability while the costs fall on those who possess neither. The erosion of the cognitive conditions — boredom, patience, the capacity for sustained attention — under which genuine thinking develops. The displacement of an entire generation of skilled workers whose fear is dismissed as nostalgia by the people who benefit from their displacement.

Each of these is a form of cruelty in Shklar's precise sense: suffering inflicted by the powerful upon the powerless through institutional arrangements that could be otherwise. None of them was inevitable. All of them are the product of choices — choices about deployment speed, about the distribution of gains, about the presence or absence of transitional protections, about whether the costs of progress are treated as externalities or as obligations.

The liberalism of fear does not oppose AI. Shklar was not a Luddite; she was not opposed to power per se but to power without constraint. The liberalism of fear opposes the deployment of AI without the institutional structures that prevent its power from producing cruelty. It opposes the speed of deployment that outpaces the speed of institutional response. It opposes the classification of avoidable suffering as inevitable progress. It opposes the dissolution of accountability through causal diffusion. It opposes, above all, the comfortable assumption that because the technology creates value in aggregate, the suffering it inflicts at the margin is an acceptable cost.

There are no acceptable costs when the costs are borne by those who had no voice in deciding whether to incur them.

This is the foundational commitment of every argument that follows. The liberalism of fear does not promise to solve the problem of AI. It promises something more modest and more urgent: to ensure that the problem is correctly identified. Not as a technology problem. Not as an innovation problem. Not as a workforce development problem. As a political problem — a question of who has power, who bears the consequences of that power, and what institutions exist to prevent the powerful from inflicting suffering on the powerless through indifference, through carelessness, through the amplified neglect that the most powerful tool in human history makes possible.

Shklar understood, from the specific education that exile provides, that political orders are not judged by their aspirations. They are judged by their failures. By the suffering they could have prevented and did not. By the cruelties they tolerated because naming them was inconvenient, because preventing them was expensive, because the people who bore them lacked the political power to demand otherwise.

The question for this generation is not what AI can build. That question answers itself daily, in demonstrations of capability that genuinely astonish. The question is what AI is already destroying — what forms of human security, human dignity, human rest, human purpose are being eroded by a transition that no one voted for, that no one consented to, that arrived with the speed of a phase transition and the institutional preparedness of an afterthought.

The amplifier is transmitting. The signal it carries is not determined by the technology. It is determined by the institutions — present or absent, adequate or failing — that shape the conditions under which the technology is deployed. The liberalism of fear demands that those institutions be built before the consequences of their absence become irreversible. Not after. Not eventually. Now.

The chapters that follow apply this demand to the specific forms of cruelty that the AI transition is producing: the builder's indifference, the epistemological blindness of the powerful, the legitimate fear of the displaced, the novel political problem of self-inflicted exploitation, the institutional failures that are already widening, the distributional cruelties of market repricing, and the obligations that fall on parents, educators, and political leaders who inherit a transition they did not design and cannot stop — but whose consequences they can, if they choose, prevent from becoming the worst version of what is possible.

Chapter 2: Cruelty by Default — The Builder's Indifference

The confession arrives in Chapter 16 of *The Orange Pill*, delivered with the particular candor of a person who has lived long enough with a failure to stop defending it. Segal describes building a product he knew was addictive by design. He understood the engagement loops. He understood the dopamine mechanics, the variable reward schedules, the social validation cycles, the precise timing of notifications calibrated to exploit moments of boredom. He understood all of this, and he built it anyway, because the technology was elegant and the growth was intoxicating.

The justification he offered himself at the time is one that Shklar would have recognized immediately, because it is the justification that power has offered itself in every century and every political order she studied: Someone else will build it if I do not, so it might as well be me.

This sentence deserves examination, because it contains, compressed into sixteen words, the entire mechanism of what might be called cruelty by default — the infliction of suffering not through malicious intent but through the failure to exercise the moral imagination that power demands.

The sentence performs three operations simultaneously. First, it converts a choice into an inevitability. The builder had a choice — to build or not to build. The sentence reclassifies that choice as a non-choice by invoking the inevitability of the outcome regardless of his decision. Second, it transfers agency from the builder to the market. If someone else will build it anyway, then the builder is not the agent of the consequences; the market is. The builder becomes a vessel through which market forces express themselves, and moral responsibility attaches to forces, not to the person who chose to embody them. Third, it introduces a comparative moral claim — "at least I'll do it better" — that reframes the act of building as an act of harm reduction rather than an act of harm production.

Shklar spent decades studying exactly this structure. In *Ordinary Vices*, she examined how political actors arrange the moral furniture of their decisions to render cruelty invisible — not by denying the suffering, but by reclassifying it as something other than cruelty. The torturer calls it interrogation. The colonizer calls it civilization. The builder calls it disruption, or innovation, or democratization, and the suffering that follows is reclassified as the natural cost of progress rather than the consequence of a choice that could have been made differently.

The concept of cruelty by default extends Shklar's analysis to the specific conditions of technological production. Cruelty by default occurs when the institutional environment in which building takes place — the market incentives, the competitive pressures, the cultural norms of the technology industry, the velocity of the deployment cycle — systematically prevents the builder from attending to the consequences of the product. The builder does not choose to be indifferent. The environment produces indifference as a structural feature.

Consider the deployment velocity that *The Orange Pill* celebrates and fears in equal measure. Segal describes building Napster Station in thirty days — a product that would have required months under previous conditions. The achievement is genuine. The capability it represents is real. But embedded in that velocity is a political fact that the builder's perspective tends to obscure: the faster the deployment, the shorter the interval between decision and consequence, and the shorter the interval, the less time exists for the moral imagination to operate.

Moral imagination — the capacity to anticipate the consequences of one's actions for people who are not in the room — is not instantaneous. It requires the specific cognitive conditions that speed eliminates: time to reflect, time to consult, time to imagine the person downstream who will be affected by the decision being made upstream. The Berkeley researchers documented this elimination precisely. When AI tools fill every cognitive gap with productive activity, the gaps that previously served as spaces for reflection disappear. The builder is always building. The question of whether the thing being built will cause harm is deferred — not deliberately, but structurally, because the production cycle has no scheduled interval for asking it.

Shklar argued that the institutional conditions under which decisions are made are as morally significant as the decisions themselves. A political order that structures decision-making in ways that prevent the decision-maker from attending to consequences is complicit in the consequences, even if no individual within the order intended them. The factory owner who set the conditions of labor in 1812 did not intend for children to lose their fingers in the machinery. But he created an institutional environment in which children's fingers in machinery was a foreseeable consequence of the conditions he set, and his failure to attend to that consequence — his indifference to it, produced by the structural incentives of the factory system — constitutes cruelty in Shklar's precise sense.

The same analysis applies to the technology builder of 2026. The deployment cycle that compresses development from months to days does not intend to eliminate the moral imagination. But it creates institutional conditions under which the moral imagination cannot operate, and the consequences of its absence — products that intensify work without consent, systems that erode rest, tools that devalue expertise without providing transitional support — constitute cruelty by default.

The confession in *The Orange Pill* illustrates a further dimension of this analysis. Segal does not merely confess to having built addictive products. He confesses to understanding what he was building while he was building it. The moral imagination was not absent. It was overridden — overridden by the intoxication of the frontier, by the competitive pressure, by the cultural environment of the technology industry that treats velocity as virtue and hesitation as weakness.

This is a more troubling form of cruelty by default than simple ignorance. Ignorance at least admits of remedy through education. The builder who does not know that engagement loops exploit cognitive vulnerabilities can be taught. But the builder who knows and builds anyway — who possesses the moral imagination but cannot, within the institutional environment, act on it — presents a structural problem that individual education cannot solve. The knowledge is present. The will to act on it is defeated by the environment.

Shklar would have located this problem in her analysis of political fear. In "The Liberalism of Fear," she argued that the most dangerous political environments are those in which the people who could prevent cruelty are themselves afraid — afraid of losing competitive position, afraid of being left behind, afraid that the costs of restraint will fall on them while the benefits of acceleration accrue to others. This fear is rational. The competitive dynamics of the technology industry are real. The company that pauses to assess consequences while its competitors deploy without assessment bears a genuine cost. The builder who insists on asking "should we?" while everyone else is asking "how fast can we?" risks professional marginalization.

But rational fear does not eliminate moral responsibility. It redistributes it. If the individual builder cannot, within the competitive environment, exercise the moral imagination that cruelty prevention demands, then the responsibility shifts to the institutional level — to the regulatory frameworks, the professional norms, the accountability structures that should exist to make the exercise of moral imagination compatible with competitive survival. The absence of those structures is not the builder's fault. But the suffering that the absence produces is not the displaced worker's fault either, and someone must bear the obligation of prevention.

Contemporary AI development operates under conditions that maximize the production of cruelty by default. The deployment cycles are measured in weeks. The competitive pressure is extreme — Segal describes watching AI capability cross thresholds so rapidly that plans made in November were obsolete by January. The cultural norms of the industry celebrate speed, celebrate shipping, celebrate the elimination of friction between intention and execution. The institutional structures that could slow the cycle — regulatory review, impact assessment, mandatory transition periods — are either absent or too slow to operate at the speed of deployment.

The result is a political environment in which cruelty by default is not an aberration but a structural feature. The products ship before consequences can be assessed. The displacement occurs before transitional support can be provided. The work intensifies before cultural norms of rest can adapt. And the suffering that results — the flat affect, the eroded empathy, the specific grief of the senior engineer watching decades of expertise become economically weightless — is classified as the inevitable cost of progress rather than as the consequence of institutional failure.

Philosopher Matthieu Queloz, applying Shklar's framework to AI advisory systems in 2025, identified a pattern directly relevant to cruelty by default: the way AI systems create "epistemic, structural, and temporal asymmetries of power" between the builders of the systems and the people who interact with them. The builder possesses information the user does not — about how the system works, about what data it was trained on, about what its failure modes are, about what it optimizes for. This information asymmetry is a form of power, and power without accountability is, in Shklar's framework, the precondition for cruelty.

The technology priesthood that Segal describes — people with deep understanding of complex systems who mediate between those systems and the populations they affect — operates under precisely this asymmetry. The priesthood knows things the public does not. The question the liberalism of fear poses is not whether the priesthood's knowledge is accurate — it usually is — but whether the priesthood's power is constrained. Whether the knowledge differential is accompanied by accountability structures that prevent it from becoming an instrument of domination.

The answer, as the historical record of technology deployment suggests, is that it typically is not. The priesthood's self-assessment — "we are building responsibly, we are thinking about safety, we are committed to beneficial outcomes" — is not an adequate substitute for external constraint. Shklar knew this about every priesthood she studied, from the established church to the professional judiciary. Self-regulation fails at the moment of maximum pressure, because at that moment, the interests of the priesthood and the interests of the public diverge, and the priesthood, possessing the information asymmetry, is positioned to resolve the divergence in its own favor without the public being aware that a divergence has occurred.

The institutional response to cruelty by default is not to prohibit building. Shklar did not oppose power. She opposed power without constraint. The response is to create the conditions under which the builder's moral imagination can operate — conditions that include mandatory impact assessment, deployment intervals that allow consequences to be anticipated, accountability structures that make the builder responsible for foreseeable harm, and cultural norms that treat the question "should we?" as a mark of professional seriousness rather than competitive weakness.

These conditions do not currently exist. Their absence is not an accident. It is the product of a political order that has accepted the technology industry's self-assessment in place of external constraint, that has classified the suffering of the displaced as misfortune rather than injustice, and that has treated the speed of deployment as a feature rather than a risk. The liberalism of fear identifies this acceptance as the most dangerous political failure of the current moment — not because the technology is inherently cruel, but because the institutional environment in which the technology is deployed has been structured, by omission rather than by design, to maximize the production of cruelty by default.

Chapter 3: The Fishbowl of the Powerful

Every exercise of power produces a specific form of blindness in the person who exercises it. This is not a moral claim about the character of powerful people. It is a structural observation about the epistemological conditions that power creates. The factory owner of 1812 did not see the suffering of the displaced weavers — not because he was a bad person, but because nothing in his institutional environment required him to see it. His information came from other factory owners, from investors, from the market reports that measured productivity and profit. The weavers existed in his peripheral vision, if they existed at all, as a labor cost to be minimized rather than as persons whose suffering generated political obligations.

Shklar understood this structural blindness as one of the preconditions for political cruelty. In *The Faces of Injustice*, she argued that the line between misfortune and injustice is drawn not by neutral observers applying universal criteria, but by the powerful, whose position gives them both the authority to classify suffering and the incentive to classify it in ways that minimize their own obligations. When the factory owner classifies the weavers' displacement as the inevitable cost of progress — as misfortune rather than injustice — he is performing a political act. He is using his epistemic position, his access to the categories that shape public understanding, to place the suffering of the displaced outside the domain of political obligation.

Segal's fishbowl metaphor in *The Orange Pill* describes this condition with a precision that warrants philosophical development. The fishbowl is the set of assumptions so familiar that the person inside it has stopped noticing them. The water the fish breathes. The glass that shapes what the fish can see. Everyone inhabits a fishbowl — the scientist's is shaped by empiricism, the filmmaker's by narrative, the builder's by the question "Can this be made?" Each fishbowl reveals part of the world and hides the rest.

The claim that deserves scrutiny is not the existence of the fishbowl — that claim is almost trivially true — but the specific ways in which the builder's fishbowl conceals the consequences of building from the person who builds.

Segal describes the exhilaration of the Trivandrum training: twenty engineers, each operating with the leverage of a full team, a twenty-fold productivity multiplier achieved in days. The description is vivid, honest, and revealing in ways that extend beyond what the author explicitly acknowledges. The exhilaration is real. The capability is genuine. The engineers were reaching into domains they could not previously access, building things they could not previously build.

But the fishbowl shapes what is visible and what is not. Visible: the expanded capability of each engineer. Visible: the ambition of the work they could now attempt. Visible: the compression of timelines, the elimination of bottlenecks, the acceleration of the path from imagination to artifact. Not visible, or at least not foregrounded: the distributional consequences of a twenty-fold productivity multiplier.

If each of twenty engineers can now do the work of twenty, then the organization possesses the productive capacity of four hundred engineers in a team of twenty. Segal acknowledges the arithmetic. He describes the boardroom conversation in which the investor's question sits on the table: If five people can do the work of a hundred, why not just have five? He chose to keep the team. He chose to invest the expanded capability in more ambitious work rather than in headcount reduction.

But the choice was his to make. That is the political fact. The twenty engineers whose expanded capability now includes the capability of being replaced did not have a voice in the decision about whether that capability would be used to expand the work or contract the workforce. Their continued employment depended on the judgment of a single person — a person Segal describes as thoughtful, ethical, and genuinely concerned with the welfare of his team — but a single person nonetheless, operating within a competitive environment that structurally rewards headcount reduction and structurally punishes the kind of long-term investment in human capability that Segal chose.

Shklar's analysis of power and vulnerability applies here with a precision that the builder's fishbowl tends to soften. The relationship between Segal and his engineers is not symmetrical. He possesses information they do not — about the company's financial position, about the board's expectations, about the competitive dynamics that shape strategic decisions. He possesses decision-making authority they do not — the authority to keep the team or reduce it, to invest the productivity gains in expansion or in margin. And he possesses the epistemic privilege of the fishbowl — the set of assumptions about innovation, about building, about the inherent goodness of expanded capability — that makes the twenty-fold multiplier feel like a gift to the engineers rather than a threat.

From inside the builder's fishbowl, the multiplier is empowerment. The engineers can do more. They can reach further. Their capability has expanded. From outside the fishbowl — from the perspective of the senior engineer who spent the first two days oscillating between excitement and terror, or from the perspective of the engineers at other companies whose leadership will run the same arithmetic and arrive at a different conclusion — the multiplier is a redistribution of power. The capability that was distributed among twenty people is now concentrated in each of them, and the question of what happens to the concentration is a political question that the technology does not answer and the market does not reliably resolve in the interest of the vulnerable.

Shklar's framework provides the analytical tool that the builder's fishbowl systematically obscures: the demand to begin from the perspective of the person who suffers. Not the person who builds. Not the person who celebrates the capability. The person who lies awake wondering whether next quarter's arithmetic will come out differently than this quarter's.

This demand is not comfortable for builders. It is not meant to be. The liberalism of fear is not a comfortable political philosophy. It is a philosophy forged in the experience of exile, in the specific knowledge that political orders that feel stable from inside can be catastrophically unstable for the people at their margins. Shklar learned this not from books but from the experience of watching a political order — the liberal democracies of interwar Europe — fail the people it was supposed to protect. Her insistence on beginning from the perspective of the vulnerable is not sentimentality. It is the hard-won epistemic commitment of a person who understood that the view from above conceals what the view from below reveals.

The effort to see outside the fishbowl — what Segal describes as pressing one's face against the glass — is genuinely difficult and genuinely admirable when it occurs. But Shklar would insist, with the precision of a thinker who studied political failure for forty years, that individual effort is insufficient. The structural forces that shape the fishbowl are stronger than individual will. The builder who makes a genuine effort to see the consequences of building is still embedded in an institutional environment that rewards speed, celebrates disruption, and classifies the suffering of the displaced as the cost of doing business.

The demand is therefore not for better builders — though better builders would help — but for better institutions. Institutions that force the consequences of power into the field of vision of the people who exercise it. Transparency requirements that make the distributional effects of AI deployment visible to the workers affected by it. Accountability structures that create costs for harms that the market would otherwise externalize. Mechanisms that give the vulnerable a voice in decisions that shape their lives — not as a gesture of corporate benevolence, but as a structural feature of the political order.

Hannes Bajohr, who is simultaneously one of the foremost contemporary Shklar scholars and a leading researcher on AI and language, has identified a dimension of this problem that connects the fishbowl of the powerful to the control of language itself. Bajohr observes that when communication is overwhelmingly mediated by language models, the very conditions of democratic participation are at stake. What can be said using a commercial model is based on the parameters of its business model, not its technical capacity alone. The fishbowl is not merely a matter of what the powerful can see. It is a matter of what can be articulated within the systems the powerful control.

This observation connects directly to Shklar's argument in The Liberalism of Fear that no political theory deserving the name "liberal" can tolerate authorities who possess the unconditional right to impose beliefs and even a vocabulary upon the citizenry. The AI systems that now mediate an increasing proportion of human communication are, in a precise sense, vocabulary-imposing systems. They determine what can be said fluently and what cannot, what arguments are readily available and what arguments require effort to construct, what thoughts flow easily through the system and what thoughts encounter friction.

The fishbowl of the powerful is therefore not merely a perceptual limitation. It is a structural feature of the political economy of AI — a feature that concentrates the capacity to shape perception in the hands of the people who build the systems while distributing the consequences of that shaping across the entire population. The builder sees capability. The user sees convenience. The person displaced by the capability and surveilled by the convenience sees something else entirely — and the system is structured so that what they see is the hardest thing to articulate within the vocabulary the system provides.

The liberalism of fear demands that this asymmetry be named and constrained. Not eliminated — power asymmetries are inherent in political life, and the pretense that they can be abolished is itself a form of political fantasy. But constrained, through institutional structures that prevent the asymmetry from producing cruelty. Through mechanisms that ensure the perspective of the vulnerable reaches the decision-makers who shape the systems. Through accountability structures that make the builder's indifference — the natural, structural, fishbowl-produced indifference — costly enough that attending to consequences becomes as rational as ignoring them currently is.

The glass of the fishbowl does not break by itself. It must be broken by institutions, deliberately, repeatedly, and against the persistent resistance of those who find the water inside it comfortable.

Chapter 4: Fear as Political Data

In the early months of 2026, a pattern emerged among experienced software engineers that The Orange Pill describes with the shorthand of evolutionary biology: fight or flight. Some engineers leaned into the new tools, embracing the expanded capability, rebuilding their professional identities around the judgment and direction that AI could not provide. Others withdrew. They moved to smaller cities. They lowered their cost of living. They began, quietly, to prepare for a future in which their skills, accumulated over decades of patient, difficult work, would no longer command the premium that had organized their economic lives.

The technology discourse classified this second group with the vocabulary it reserves for those who fail to adapt: resistant, nostalgic, fearful. The classification was efficient. It was also, from the perspective of the liberalism of fear, a political failure of the first order.

Shklar argued throughout her career that fear is not the opposite of political engagement. Fear is its precondition. The person who fears has perceived something real about the political order — something about the distribution of power, the adequacy of protections, the likelihood that suffering will be imposed without remedy. To dismiss fear as irrationality is to dismiss the perception that generated it, and the perception, in Shklar's experience, is almost always more accurate than the reassurance offered by those who do not share it.

The senior engineers moving to the woods were not failing to adapt. They were performing a rational assessment of their institutional environment and arriving at a conclusion: the threat to their livelihoods was real, the institutional protections were absent, and the most reasonable response was to minimize exposure. This assessment was not irrational. It was, in fact, more realistic than the assessment of many who stayed and fought, because the fighters often sustained their engagement through a specific form of optimism — the belief that the expanding capability would create new opportunities to replace the old ones — that the historical record supports only partially and only over timescales measured in decades rather than quarters.

Shklar would have taken the flight response seriously not as a psychological phenomenon but as political data — evidence about the state of institutional protection in the society from which the frightened are fleeing. When significant numbers of skilled, experienced, economically productive people begin to withdraw from the field of engagement, something has gone wrong at the institutional level. Not at the individual level. Not in the psychology of the frightened. In the political order that failed to provide the conditions under which the frightened could remain engaged without accepting unacceptable risk.

The distinction matters because it determines the response. If the flight response is a psychological problem — a failure of adaptability, a deficit of resilience — then the response is therapy, retraining, motivational intervention aimed at the individual. If the flight response is political data — evidence of institutional failure — then the response is structural: the construction of institutions that make engagement compatible with security, that prevent the worst outcomes the frightened have correctly identified as possible.

The liberalism of fear insists on the second interpretation. Not because individual psychology is irrelevant, but because the political analysis precedes and subsumes the psychological one. A person's fear is generated by their perception of the political order, and if the political order is failing, the fear is appropriate. Treating the fear as a personal failing — telling the frightened engineer to "upskill," to "embrace change," to "develop a growth mindset" — is a form of political evasion. It transfers the burden of institutional failure from the institutions that failed to the individuals who correctly perceived the failure.

The Orange Pill captures this dynamic with more honesty than most technology writing manages, but the builder's fishbowl still shapes the narrative. Segal describes the fight-or-flight dichotomy and explicitly endorses the fight response — the active engagement with AI tools, the willingness to rebuild one's professional identity around the capabilities that remain distinctly human. The endorsement is not cynical. It reflects a genuine belief, supported by Segal's own experience, that engagement with the tools produces better outcomes than withdrawal.

But the endorsement assumes institutional conditions that do not universally exist. The engineer who fights needs somewhere to fight from — needs a company that values judgment over execution, needs an economic cushion that allows the transition period, needs access to the tools and the training that the transition requires. Segal's engineers in Trivandrum had these conditions. They had an employer who chose to invest the productivity gains in expanded capability rather than headcount reduction. They had training provided by someone who understood the tools and could guide the transition. They had the institutional support that made fighting rational.

The engineers who fled did not have these conditions. Many worked for companies that would run the twenty-fold arithmetic and arrive at the investor's conclusion: if five people can do the work of a hundred, keep five. Many worked in organizations where "embrace change" meant "accept that your position may be eliminated without transitional support." Many worked in economies where the safety net — unemployment insurance, retraining programs, portable benefits — was either absent or so inadequate as to be functionally meaningless.

Their fear was not a character flaw. It was an accurate reading of their institutional environment. And their flight, far from being a failure of adaptation, was the most rational response available to them given the institutional conditions they faced.

Shklar's political philosophy provides the framework for understanding why this matters beyond the individual cases. In The Liberalism of Fear, she argued that the most fundamental obligation of a liberal political order is to ensure that citizens can live without the kind of fear that distorts judgment, prevents participation, and reduces persons to survival-mode cognition. Fear, in Shklar's framework, is not merely unpleasant. It is politically catastrophic. A person consumed by fear for their livelihood cannot participate in the democratic processes that shape the conditions of their work. A person consumed by fear for their children's future cannot engage with the educational debates that determine what their children will learn. A person consumed by fear for their economic survival cannot afford the luxury of questioning whether the systems that threaten them might also serve them, because the questioning requires cognitive resources that fear has consumed.

The flight response removes from the conversation precisely the people whose perspective the conversation most needs. The senior engineers who withdrew had decades of experience with the systems being transformed. They understood, at a level that the younger, more enthusiastic adopters often did not, what was being lost alongside what was being gained. They could feel the erosion of depth that Segal's Chapter 10 describes — the specific, embodied understanding that builds over years of patient struggle with resistant systems and that no AI tool can replicate or replace. Their diagnosis of the transition's costs was, in many cases, more accurate than the diagnosis offered by those who remained engaged, precisely because their experience equipped them to perceive what the enthusiasts' fishbowl concealed.

But their withdrawal meant their diagnosis was absent from the rooms where decisions were being made. The boardrooms where the twenty-fold arithmetic was evaluated. The policy offices where AI governance frameworks were designed. The educational institutions where curricula were being revised. The people who understood the costs of the transition most intimately were the people least present in the conversations about how to manage it.

This is the political catastrophe that the liberalism of fear is designed to prevent. Not the catastrophe of displacement itself — displacement has accompanied every major technological transition, and the liberalism of fear does not promise to prevent change. The catastrophe of displacement without voice. The silencing of the displaced through the absence of institutional structures that would make their continued engagement possible.

The distinction between Segal's Luddite chapter and the contemporary flight response sharpens this analysis. The historical Luddites were destroyed because no one built the institutional structures — labor protections, retraining programs, transitional support — that would have channeled their legitimate fear into constructive political engagement. They had nowhere to go but machine-breaking, because the political order offered no alternative between acceptance and destruction.

The contemporary engineers fleeing to the woods are living out a structurally identical failure. They are not breaking machines. Their response is quieter, more dignified, more socially acceptable. But the underlying structure is the same: a legitimate fear with no institutional channel. The political order offers them no alternative between the Segalian fight — which requires institutional conditions many do not possess — and withdrawal from the field entirely.

The liberalism of fear demands a third option. Not fight and not flight, but the institutional conditions that make engagement compatible with security. Transitional support that allows the displaced to rebuild without economic catastrophe. Retraining programs designed for the actual transition, not for the transition that policymakers imagine. Portable benefits that follow the worker rather than the job, so that the loss of a position does not mean the loss of health insurance, retirement security, and the basic economic platform from which engagement is possible.

These are not revolutionary demands. They are the minimal institutional conditions of a political order that takes the fear of its citizens seriously — that treats fear not as a character flaw to be overcome but as political data about the adequacy of its own protections.

The technology discourse's classification of fear as resistance — its transformation of a political signal into a personal failing — is itself a form of the misfortune-injustice reclassification that Shklar spent her career exposing. When the displaced engineer's fear is reclassified as a failure to adapt, the political obligation to build transitional institutions disappears. The suffering that follows is treated as the consequence of individual inadequacy rather than institutional absence. The burden shifts from the political order to the person the political order failed.

Shklar would have recognized this reclassification, with the specific recognition of a person who had watched political orders fail before, as the oldest move in the repertoire of power. Rename the injustice. Call it misfortune. Call it the market. Call it progress. Call it anything but what it is — the failure of political institutions to protect the vulnerable from the consequences of power exercised without constraint.

The engineers in the woods are not the problem. They are the evidence. Their fear is the most accurate diagnostic instrument available for measuring the adequacy of the institutional response to the AI transition. And the diagnosis they deliver, through the quiet eloquence of withdrawal, is that the institutions have failed.

Chapter 5: The Luddite's Fear Was Legitimate

The framework knitters of Nottinghamshire did not need a political theorist to tell them what was happening. They could see the power looms. They could count the weeks between the arrival of the machines and the collapse of their wages. They could measure, with the precision of people whose survival depended on accurate measurement, the distance between what their labor had been worth last year and what it was worth now. Their perception was flawless. Their diagnosis was correct. Their response — the breaking of machines under cover of darkness — was catastrophic.

Shklar's framework transforms the Luddite story from a parable about the futility of resistance into a political indictment of institutional failure. The transformation operates through a single analytical move: the application of the misfortune-injustice distinction to the suffering of the displaced.

The standard telling of the Luddite story classifies the weavers' suffering as misfortune. The technology arrived. The market shifted. The old skills lost their value. The process was natural, inevitable, regrettable but beyond remedy — the way an earthquake is beyond remedy, the way a drought is beyond remedy. Misfortune demands compassion. It does not demand structural change. The factory owners who benefited from the transition were not obligated to prevent the suffering of the displaced, because the suffering was not produced by anyone's decision. It was produced by progress, which is to say by no one in particular, which is to say by the universe, which owes nothing to the people it inconveniences.

Shklar spent the most rigorous pages of The Faces of Injustice dismantling exactly this classification. The distinction between misfortune and injustice, she argued, is not a neutral empirical observation. It is a political act — performed by those with the authority to classify, in the interest of those who benefit from the classification. When the powerful classify the suffering of the powerless as misfortune, they are making a political claim disguised as a factual one: the claim that the suffering could not have been prevented, that no institutional arrangement could have distributed the costs differently, that the transition was as natural and as blameless as weather.

The claim is almost always false. The suffering of the Luddites was not natural. It was produced by specific decisions, made by specific people, within specific institutional arrangements that could have been otherwise. The factory owners chose to deploy the power looms without transitional support for the displaced workers. The Parliament chose not to enact labor protections that would have required the beneficiaries of the transition to share its costs. The political order chose — through action and inaction, through legislation and the refusal to legislate — to concentrate the gains of industrialization among the owners of capital while distributing the costs among the people whose labor the capital replaced.

These were choices. They were made by identifiable actors within identifiable institutions. The suffering they produced was therefore injustice, not misfortune — and injustice generates obligations that misfortune does not. Not merely the obligation of compassion, which costs the powerful nothing. The obligation of structural change, which costs the powerful something real: the redistribution of gains, the construction of transitional institutions, the acceptance that progress purchased at the expense of the powerless is not progress but extraction.

Segal's Chapter 8 in The Orange Pill draws the parallel to the contemporary developer with an honesty that stops just short of the political conclusion Shklar's framework demands. Segal acknowledges that the Luddites were right about the facts. He acknowledges that their children bore the cost. He acknowledges that the institutional structures that eventually ameliorated the transition — the eight-hour day, the weekend, child labor laws — arrived decades after the suffering had already compounded. He even acknowledges the third Luddite lesson, the one he finds most uncomfortable: that the productivity gains of industrialization concentrated among factory owners, and that the translation of those gains into broadly distributed improvements required generations of political struggle.

But the builder's fishbowl shapes the conclusion. Segal frames the Luddite failure as a failure of response — the Luddites chose the wrong instrument, machine-breaking rather than institutional engagement. The framing is not wrong. Machine-breaking was strategically catastrophic. But the framing places the analytical burden on the displaced rather than on the political order that failed them. It asks why the Luddites chose poorly rather than why the political order left them with nothing better to choose.

Shklar's framework reverses this burden. The question is not why the Luddites broke machines. The question is why the political order offered them no alternative. Why the institutional structures that would have channeled their legitimate fear into constructive political engagement — labor protections, retraining programs, transitional support, mechanisms that required the beneficiaries of the transition to share its costs — were absent. And whether the absence was accidental or structural.

The historical record supports the structural interpretation. The institutions were absent because their construction would have imposed costs on the people who benefited from their absence — the factory owners whose profits depended on an unprotected labor supply, the investors whose returns depended on the externalization of transition costs, the political class whose power depended on the support of the newly industrialized capital. The Luddites' suffering was not the unintended consequence of a transition that moved too fast for institutions to follow. It was the predictable consequence of a political order that chose not to build the institutions, because building them would have redistributed gains that the powerful preferred to concentrate.

This analysis applies to the AI transition with a precision that should alarm anyone who has read the historical record carefully. The contemporary parallels are not approximate. They are structural.

The productivity gains of AI are concentrating among the people who already possess capital and capability — the technology companies that build the tools, the organizations that deploy them, the individuals whose existing skills position them to direct AI rather than be displaced by it. The costs of the transition are falling on the people who possess neither — the workers whose expertise is being devalued, the communities whose economic base is contracting, the students whose educational investments are depreciating in real time.

The institutional structures that could redistribute these costs — portable benefits, transitional support, retraining programs designed for the actual transition rather than for the transition that policymakers imagine, mechanisms that require the beneficiaries of AI deployment to fund the transition of the people it displaces — are either absent or so inadequate as to be performative.

The political discourse classifies the suffering of the displaced as misfortune. The technology arrived. The market shifted. The old skills lost their value. The process is natural, inevitable, regrettable but beyond remedy. The displaced are offered compassion — sympathetic essays about the "future of work," conferences on "responsible AI," corporate diversity initiatives that address the optics of displacement without touching its economics. Compassion costs nothing. Structural change costs something, and the people who would bear the cost are the people with the power to prevent it from being imposed.

Shklar would recognize this pattern with the weary recognition of someone who had seen it before. The classification of avoidable suffering as inevitable misfortune. The substitution of compassion for structural change. The placement of the analytical burden on the displaced — why did they not retrain, why did they not adapt, why did they not embrace the change — rather than on the political order that failed to provide the conditions under which retraining, adaptation, and engagement were possible.

The Luddites did not lack the will to adapt. They lacked the institutional support that adaptation requires. The framework knitter who watched the power loom arrive did not refuse to learn new skills. There were no new skills to learn — not because they did not exist in the abstract, but because no institution existed to identify them, teach them, and certify them in a way that the labor market would recognize. The path from old expertise to new was not merely difficult. It was structurally absent. The political order had not built it, because building it would have cost the people who benefited from its absence.

The contemporary developer watching AI devalue decades of coding expertise faces a structurally identical absence. The path from "person who writes code" to "person who directs AI in the production of code" is real in principle but absent in institutional practice. No credentialing system exists for the new skills. No transitional support bridges the economic gap between the old role and the new one. No mechanism ensures that the person whose labor is being multiplied by twenty has a voice in the decision about whether that multiplication will expand the work or eliminate the worker.

The advice offered — "learn to prompt," "develop judgment," "embrace the tools" — is the contemporary equivalent of telling the framework knitter to become a factory manager. It describes a real destination. It provides no path to reach it. And the absence of the path is not an oversight. It is a structural feature of a political order that has classified the suffering of the displaced as misfortune rather than injustice, and that therefore treats the construction of transitional institutions as a matter of charitable impulse rather than political obligation.

The Luddites' children did eventually get the eight-hour day. They got the weekend. They got child labor laws and workplace safety regulations and the institutional infrastructure that, over decades, redirected the gains of industrialization toward broader distribution. But those institutions did not arrive through the benevolence of the factory owners. They arrived through political struggle — decades of organizing, striking, legislating, and fighting against the persistent classification of workers' suffering as the natural cost of progress.

The question for the current moment is whether the AI transition will follow the same timeline — decades of avoidable suffering before the institutions arrive — or whether the historical record can be read as instruction rather than merely as precedent. The Luddites teach what happens when the institutions are absent. The labor movement teaches that the institutions can be built. The liberalism of fear insists that the institutions must be built before the suffering compounds to the point where the displaced have no capacity for engagement left.

The timeline is shorter than it was in 1812. The transition is faster, the displacement more rapid, the window during which institutional construction can precede the worst consequences narrower. And the political will to build the institutions is contested by the same forces that contested it two centuries ago — the forces that benefit from the externalization of transition costs and that classify the construction of protective institutions as an impediment to progress rather than as a condition of progress that deserves the name.

Segal writes that grief is not a strategy. Shklar would agree. But Shklar would add the necessary coda: Grief is not a strategy, but the institutional conditions that produce grief are a political failure. And political failures demand political responses — not exhortations to adapt, not celebrations of the capability that the transition creates, but the construction of institutions that make adaptation possible for the people who currently lack the conditions to attempt it.

The framework knitters were not wrong. The contemporary developers are not wrong. The fear is legitimate. The suffering is real. The question is the same one it has always been: Will the institutions be built in time, or will another generation pay the cost of their absence?

---

Chapter 6: The Achievement Subject as Victim and Agent

Byung-Chul Han's concept of the achievement subject, which Segal develops across Chapters 9 and 10 of *The Orange Pill*, poses a genuine theoretical challenge to the liberalism of fear. The challenge is structural, not superficial, and meeting it requires Shklar's framework to extend into territory it did not originally map.

The traditional architecture of cruelty, as Shklar analyzed it across four decades, has two positions: the person who inflicts suffering and the person who suffers. The positions may be occupied by individuals or by institutions. The cruelty may be deliberate or structural, visible or concealed, dramatic or bureaucratic. But the two-position structure holds. There is a perpetrator and there is a victim, and the political obligation — the obligation that the liberalism of fear imposes — is to constrain the perpetrator and protect the victim through institutional means.

Han's achievement subject collapses this structure. The person who works until three in the morning, unable to close the laptop despite physical exhaustion and the erosion of every relationship that requires presence — this person is not being exploited by an external authority. No boss demanded the hours. No corporate mandate required the sacrifice. The person chose freely, in the narrowest sense of choice, to continue working. The whip and the hand that holds it belong to the same person.

Segal documents this collapse with the honesty of someone who has experienced it personally. He describes nights of building with Claude where the work flows with the specific intensity of genuine creative absorption — where ideas connect in ways that surprise him, where each connection opens a line of inquiry more interesting than the last. He also describes the moment when the exhilaration drains and what remains is grinding compulsion, the inability to stop not because the work is satisfying but because stopping has become intolerable. He describes recognizing the pattern — the same addictive loop he had built into products for others, now operating on him — and being unable to break it even after the recognition.

The Berkeley researchers documented the same pattern across an entire organization. Workers filling every cognitive gap with AI-assisted activity. Lunch breaks colonized by prompts. Waiting rooms converted to workstations. The specific expansion that Segal celebrates — the capacity to build more, reach further, attempt the previously impossible — accompanied by a contraction that the celebration tends to obscure: the disappearance of the unproductive moments that previous eras provided as structural rest.

The Shklar framework, applied to this phenomenon, must distinguish between two analytically separable conditions that produce observationally identical behavior.

The first is what Shklar would recognize as internalized domination — the condition in which an external structure of power has been absorbed so completely that the dominated person enforces it against herself without conscious awareness that domination is occurring. This is the Han diagnosis. The achievement society has replaced the disciplinary society's external prohibitions with an internalized imperative to perform. The cage has become invisible because the inmate has swallowed the bars. The domination is more complete than any external authority could achieve, because resistance requires an awareness of being dominated, and the achievement subject has lost this awareness entirely.

The second is what Csikszentmihalyi described as flow — the state of optimal human experience in which challenge and skill are matched, attention is fully absorbed, self-consciousness drops away, and the person operates at the outer edge of capability with a sense of voluntary engagement that is the opposite of compulsion. Segal's Chapter 12 develops this counter-argument at length, and the counter-argument is genuine. Flow is not pathology. It is one of the conditions under which human beings report the deepest satisfaction with their work and their lives.

The political problem is that internalized domination and flow produce identical external behavior. Both involve intense engagement. Both involve the loss of time awareness. Both involve the inability or unwillingness to stop. From the outside — from the perspective of the Berkeley researchers measuring hours worked, or the spouse writing a Substack post about a partner who has vanished into Claude Code — the two conditions are indistinguishable.

The distinction lives entirely inside the person experiencing it, and even there, the distinction is unstable. Segal describes the oscillation within a single working session — hours of genuine flow that shade, without a perceptible boundary, into hours of compulsion. The transition does not announce itself. The person who was choosing to work becomes the person who cannot stop working, and the shift occurs below the threshold of conscious awareness.

The liberalism of fear cannot rely on the individual's self-assessment to distinguish between these conditions. Self-assessment fails precisely when it is most needed — at the moment when flow has become compulsion and the person has lost the capacity to recognize the transition. This is not because the individual is stupid or weak. It is because the cognitive conditions that would enable the recognition — the capacity for metacognitive awareness, the ability to observe one's own engagement from outside — are the first casualties of the compulsive state. The person consumed by compulsion has lost access to the internal vantage point from which compulsion can be identified as compulsion rather than as dedication.

This epistemological failure has institutional implications that Shklar's framework is equipped to address even though she never confronted this specific phenomenon. If the individual cannot reliably distinguish between flow and compulsion, then the obligation to create the conditions under which the distinction can be maintained falls on the institutional environment.

The Berkeley researchers proposed one such institutional structure: "AI Practice," defined as structured pauses built into the workday during which AI tools are set aside and workers engage directly with each other and with their own unaugmented cognition. The proposal is modest — a rest stop, not a roadblock. But the modesty is appropriate, because the liberalism of fear does not seek to prohibit the conditions that produce flow. Flow is genuinely valuable. The liberalism of fear seeks to prevent the conditions that allow flow to become compulsion without the person's awareness or consent.

The institutional structures this analysis demands are specific. First, structural stopping points — mandatory pauses that do not depend on the individual's judgment about whether to continue, because the individual's judgment is the faculty most compromised by the compulsive state. These are not luxury amenities. They are the equivalent of the factory safety regulations that require machinery to be shut down at intervals for inspection — not because the machinery is necessarily failing, but because the consequences of failure are severe enough to justify the interruption.

Second, cultural norms that decouple productivity from worth. The achievement society that Han describes, and that the technology industry exemplifies with particular intensity, treats output as the measure of human value. The person who produces more is worth more. The person who rests is worth less. This equation, internalized deeply enough, makes rest feel not like recovery but like moral failure — like the voluntary acceptance of a diminished self.

Shklar's framework reframes this not as a cultural problem but as a political one. A political order that allows the equation of productivity with human worth to operate unconstrained is complicit in the suffering that the equation produces. The person who works until physical collapse, who sacrifices relationships and health and the capacity for reflection in pursuit of output that the internalized imperative demands, is suffering. The suffering is real even though it is self-inflicted. And the political order that created the conditions under which self-infliction became the dominant mode of suffering bears responsibility for those conditions.

Third, and most difficult: mechanisms that redistribute the gains of AI-augmented productivity in ways that do not require individual workers to capture those gains through individual intensification. When the twenty-fold productivity multiplier creates value, the question of who captures that value is a political question. If the value is captured by the organization and the individual worker must intensify to justify continued employment, the multiplier becomes an engine of auto-exploitation. If the value is distributed — through reduced hours at maintained wages, through expanded leisure, through the institutional creation of time for the reflection and rest that the achievement society has eliminated — then the multiplier becomes an instrument of liberation.

The difference between auto-exploitation and liberation is not in the technology. It is in the institutional arrangements that determine how the technology's gains are distributed. This distinction is invisible from inside the builder's fishbowl, because the builder sees the capability and assumes the capability is its own reward. From outside the fishbowl — from the perspective of the worker whose intensification produces the value that the organization captures — the distinction is the difference between flourishing and suffering.

Han diagnosed the pathology. Shklar provides the political framework that transforms diagnosis into obligation — the obligation to build institutions that protect the individual from a form of cruelty so novel that the victim cannot distinguish it from freedom.

---

Chapter 7: Institutional Failure and the Dam Deficit

In the months following the threshold Segal describes — the moment in December 2025 when AI capability crossed a line that made previous planning assumptions obsolete — the institutional response of the world's major political systems could be measured precisely. It was measured in the gap between what the technology was doing to people and what institutions were doing to protect them. The gap was large. It was growing. And the people inside it were adapting alone.

Shklar's political philosophy provides the diagnostic framework for understanding why this gap constitutes the most dangerous condition a liberal political order can face: the condition in which the forces that produce suffering outpace the institutions designed to prevent it.

The liberalism of fear rests on a specific temporal claim about the relationship between institutional protection and the exercise of power. The protection must precede the harm, or at minimum, must arrive quickly enough that the harm does not compound beyond the capacity of institutions to address. This temporal claim is not aspirational. It is a hard constraint, derived from the historical observation that suffering left unaddressed becomes self-reinforcing — that the displaced worker who receives no transitional support does not merely suffer temporarily but enters a downward trajectory in which each month of displacement reduces the probability of successful re-engagement, erodes the skills and confidence and social networks on which re-engagement depends, and converts what might have been a brief disruption into a permanent dislocation.

The institutional response to the AI transition, measured against this constraint, has failed on both the supply side and the demand side, but the failure on the demand side is far more consequential and far less discussed.

The supply-side response has received most of the attention. The EU AI Act, which entered into force in stages beginning in 2024, establishes a risk-based framework for the regulation of AI systems. High-risk applications — those affecting employment, education, law enforcement, critical infrastructure — face mandatory requirements for transparency, human oversight, and conformity assessment. The American executive orders on AI safety address risk assessment, establish reporting requirements for frontier model developers, and create institutional infrastructure for monitoring the development of advanced systems. Similar frameworks are emerging in Singapore, Brazil, Japan, and elsewhere.

These supply-side interventions are real and substantive. They constrain what AI companies may build and how they must build it. They create accountability mechanisms, however imperfect, for the developers of frontier systems. They represent genuine institutional effort.

They also address the wrong side of the problem. The supply-side question — "What may AI companies build?" — is important. But it is not the question that determines whether the transition produces cruelty or prevents it. The determining question is on the demand side: "What do the people affected by AI need in order to navigate the transition without bearing disproportionate costs?"

The demand side is where the dam deficit is most severe and most dangerous. The term, borrowed from Segal's analysis, describes a widening gap — the accelerating distance between the speed of AI capability and the speed of institutional response on behalf of the people the capability affects. The gap is not closing. It is widening with each capability threshold the technology crosses, because each threshold displaces faster than the last while the institutions respond at the same deliberate pace they have always maintained.

Consider the specific institutional failures on the demand side. Retraining programs. The programs that exist are designed for the previous transition — they teach coding skills to people displaced from manufacturing, or data analysis to people displaced from clerical work. They are not designed for the transition actually underway, in which coding skills themselves are being devalued and data analysis is being automated. The person who completes a six-month retraining program in Python development in 2026 has been trained for a role whose market value was declining while the training was being delivered. The institutional lag is not measured in years. It is measured in the distance between the curriculum and the labor market, and that distance is growing.

Portable benefits. The majority of the world's workforce, and a large proportion of the technology workforce specifically, ties fundamental securities — health insurance, retirement savings, disability coverage — to the specific employer. When the position disappears, the securities disappear with it. The displaced worker loses not merely income but the entire institutional scaffolding of economic security. Portable benefits, which would attach securities to the worker rather than to the job, have been proposed repeatedly and adopted almost nowhere. The proposals fail because their cost falls on the beneficiaries of the current system — the employers who prefer the leverage that employer-tied benefits provide — and the political order has not mustered the will to override that preference.

Educational reform. Segal calls for education that teaches questioning over answering, judgment over execution, the capacity to direct AI rather than compete with it. Shklar's framework would endorse this call while insisting on a harder truth: educational reform that produces graduates equipped for the AI economy is insufficient if the economic structures they enter continue to reward execution over judgment, speed over reflection, output over the human capacities the education was designed to develop. The student who learns to ask deep questions and then enters a labor market that pays for prompting speed has been given a tool the economy will not let her use. Educational reform without economic reform produces not empowerment but a more articulate form of disillusionment.

Transitional support. The space between old role and new role is not empty. It is filled with economic need — rent, food, healthcare, the education of children. A person navigating the transition from "writer of code" to "director of AI systems" does not stop requiring income during the navigation. The absence of institutional support during the transition period means that only those with existing economic cushions can afford to make the transition at all. The senior engineer with savings and a working spouse can afford to spend six months rebuilding a professional identity. The junior developer without savings cannot. The transition, in the absence of institutional support, becomes a filter that admits the privileged and excludes everyone else — reproducing and amplifying the economic stratification that the liberalism of fear identifies as a precondition for cruelty.

Safety nets. The existing social safety net in most countries is designed for a labor market in which displacement is temporary, individual, and cyclical — in which the worker who loses a position can expect to find a similar one within a reasonable period. The AI transition does not produce this kind of displacement. It produces structural displacement — the permanent devaluation of entire categories of skill, the elimination of entire tiers of work that will not return regardless of macroeconomic conditions. The existing safety net is not merely inadequate for this kind of displacement. It is categorically wrong — designed for a phenomenon that is not the phenomenon occurring. Unemployment insurance that provides temporary support while the worker searches for a similar position is useless to the worker whose entire category of position has been eliminated.

Shklar would locate the root of these failures in the misfortune-injustice classification that operates throughout the political discourse on AI. The suffering of the displaced is consistently treated as misfortune — as the natural cost of progress, regrettable but beyond institutional remedy. This classification is performed by the people who benefit from it: the technology companies whose deployment speed would be slowed by transitional obligations, the investors whose returns would be reduced by the redistribution of gains, the political actors whose relationship with the technology industry would be complicated by the imposition of demand-side protections.

The classification is false. The suffering is not natural. It is produced by specific institutional arrangements — the speed of deployment without impact assessment, the concentration of gains without redistribution obligations, the absence of transitional support that other societies and other transitions have demonstrated is possible. The suffering is injustice, and the institutions that could prevent it are identifiable, constructable, and fundable. They are not being built because building them would impose costs on the people who currently benefit from their absence.

Matthieu Queloz's 2025 analysis of personalized AI advisory systems identified a pattern relevant to the dam deficit at a deeper level. Queloz observed that AI systems create epistemic, structural, and temporal asymmetries between the builders and the affected populations. The builders possess information the affected do not — about what the systems do, about what they optimize for, about what their failure modes are. The affected populations interact with the systems without this information, making choices that appear voluntary but are structurally constrained by the information asymmetry.

The dam deficit is, in part, an information asymmetry problem. The people who understand what the AI transition will produce — the builders, the researchers, the technology leaders — possess information that the people being transitioned do not. The demand-side institutions that the transition requires can only be designed by people who understand the transition's likely trajectory. But the people who possess this understanding are predominantly located on the supply side — inside the technology companies, inside the research labs, inside the fishbowl of the powerful — and the institutional mechanisms that would translate their understanding into demand-side protection do not exist in adequate form.

The liberalism of fear demands that this asymmetry be addressed through institutional means rather than through the voluntary benevolence of the knowledgeable. Shklar's entire body of work testifies to the inadequacy of relying on the powerful to constrain themselves voluntarily. The institutional structures must be external — mandatory, enforceable, and designed by processes that include the perspectives of the people they are meant to protect.

The dam deficit is not a problem that solves itself. It is a problem that compounds. Each month in which the institutional response fails to match the speed of capability is a month in which more people enter the gap — more workers displaced without support, more students trained for yesterday's economy, more parents navigating their children's future without guidance. The compounding is measurable and accelerating.

The question is not whether the institutions will eventually be built. The historical pattern, as Segal documents in his chapter on the five stages of technological transition, suggests they will. The Luddites' grandchildren got the eight-hour day. Eventually.

The question is how many people will bear how much avoidable suffering before eventually arrives. That question is not technological. It is political. And the answer depends entirely on whether the political order treats the suffering of the displaced as misfortune to be endured or as injustice to be prevented.

---

Chapter 8: The Developer in Lagos — Inclusion and Its Risks

The developer in Lagos represents what *The Orange Pill* presents as one of the most morally significant features of the AI transition: the lowering of the floor of who gets to build. A person with an idea, an internet connection, and a conversation with an AI system can now produce working software without a team, without years of specialized training, without the institutional infrastructure that previously gated the path from imagination to artifact.

Shklar's framework does not dispute the significance of this expansion. The dispersion of capability — the distribution of productive power to people who previously lacked access to it — is consistent with the liberalism of fear's commitment to preventing the concentration of power. A world in which only the credentialed and the capitalized can build is a world in which the capacity to shape the material conditions of life is concentrated among the already powerful. A world in which the developer in Lagos can build is a world in which that concentration has been partially disrupted.

But the liberalism of fear adds a qualification that the celebration of democratization tends to omit: inclusion without protection is not liberation. It is exposure.

The distinction requires development, because it cuts against the dominant narrative of the AI transition — the narrative in which expanded access is, without further qualification, an unambiguous good.

The developer in Lagos who gains access to Claude Code gains capability. The capability is real. The software she can now produce is real. The barrier between her imagination and its expression has genuinely narrowed. Segal is right to identify this narrowing as morally significant. The imagination-to-artifact ratio has dropped, and the drop benefits the people for whom the ratio was previously highest — the people who possessed ideas but lacked the institutional infrastructure to realize them.

What the celebration obscures is what the developer in Lagos has also gained: exposure to a global market in which she competes, on newly equal productive terms, against actors who possess every advantage she lacks except the one the tool provides. She can now build software as fast as a developer in San Francisco. She cannot access San Francisco's capital markets. She cannot access its legal infrastructure for intellectual property protection. She cannot access its network of mentors, investors, and institutional supporters who transform a prototype into a product and a product into a company. She competes on the production floor while lacking the entire institutional superstructure that determines whether production translates into economic security.

Shklar's analysis of vulnerability applies here with uncomfortable directness. Vulnerability, in the liberalism of fear, is not a character trait. It is a structural position — the position of a person whose exposure to harm is greater than their capacity to prevent it. The developer in Lagos, equipped with AI tools and her own intelligence and ambition, is less vulnerable than she was before the tools arrived. But she is more exposed. The tools have admitted her to a competitive arena in which the consequences of failure — a product that does not find its market, a technology bet that does not pay off, an investment of months of labor in a direction the market abandons — fall entirely on her, without the institutional cushions that absorb these consequences for her competitors in wealthier economies.

The developer in San Francisco who builds a product that fails has unemployment insurance, savings accumulated from previous employment at market-rate wages, a professional network that provides the next opportunity, and an economic environment in which failure is not merely tolerated but mythologized as a credential. The developer in Lagos who builds a product that fails has none of these. Her exposure to the downside of the market she has been admitted to is structurally greater than the exposure of the people she competes against.

This is not an argument against inclusion. It is an argument about the conditions under which inclusion produces flourishing rather than exploitation. The distinction is not academic. It is the difference between a political outcome in which the lowered floor genuinely empowers the previously excluded and a political outcome in which the lowered floor admits the previously excluded to a competition they were never equipped to survive — a competition in which the gains flow to those who already possessed the complementary advantages the tools do not provide, while the costs fall on those who possessed only the capability the tools conferred.

Shklar would have recognized the pattern. The extension of formal rights without the institutional support that makes those rights substantively meaningful has been the mechanism of false inclusion throughout political history. The right to vote without protection against intimidation. The right to own property without access to the capital markets that make property productive. The right to education without the economic conditions that allow education to translate into opportunity. Each extension was celebrated as progress. Each was, without the accompanying institutional infrastructure, a form of exposure disguised as empowerment.

The AI tools provide the developer in Lagos with productive capability. They do not provide her with the institutional infrastructure — capital access, legal protection, market knowledge, failure cushions, professional networks — that determines whether productive capability translates into economic security. The absence of this infrastructure is not a temporary gap that the market will close. It is a structural feature of the global economy that the AI tools, by themselves, do not address.

Segal acknowledges the limits of democratization with characteristic honesty. He notes that access requires connectivity that billions lack, hardware that costs more relative to local wages in Lagos than in San Francisco, and English-language fluency that reflects the linguistic biases of the tools' developers. He acknowledges that the democratization is "real but partial." These acknowledgments matter. They prevent the celebration from becoming purely self-congratulatory.

But the acknowledgments address the barriers to inclusion rather than the risks of inclusion, and it is the risks that the liberalism of fear insists on foregrounding. The barriers to inclusion are being lowered — connectivity is expanding, costs are declining, multilingual capabilities are improving. If the barriers were the only problem, time and market forces would solve it. The risks of inclusion are a different kind of problem. They are structural rather than technical. They require institutional intervention rather than market correction.

Amartya Sen's capability approach, which complements Shklar's liberalism of fear, provides additional analytical precision. Sen argued that the proper measure of human development is not income or resources but the capability to live a life one has reason to value — the substantive freedom to choose among genuine alternatives. The developer in Lagos gains a capability — the capability to build software. But a single capability, without the constellation of complementary capabilities that make it meaningful, is a capability in isolation. The capability to build is meaningful only in conjunction with the capability to fund, to protect, to market, to absorb failure, and to try again. The AI tools provide the first capability. The institutional environment must provide the rest.

The liberalism of fear demands that the institutions of inclusion — the structures that make inclusion substantively meaningful rather than merely formally possible — accompany the tools of inclusion rather than following them at the customary institutional lag. This demand has specific institutional implications.

First, capital access structures designed for the newly capable. The developer in Lagos who can now build competitive software needs access to capital markets that can evaluate and fund her work. The existing venture capital infrastructure is geographically concentrated, culturally specific, and network-dependent in ways that systematically exclude the populations that AI tools are newly including.

Second, failure infrastructure. The economic consequences of entrepreneurial failure in Lagos are categorically different from the consequences in San Francisco. Institutional structures that limit the downside of failure — limited liability protections, bankruptcy provisions that allow restart, social safety nets that prevent failure from cascading into destitution — are preconditions for the kind of risk-taking that productive inclusion requires.

Third, market access mechanisms that do not depend on the geographic and social networks that the newly included, by definition, lack. The developer in Lagos can build the product. Reaching the customers who would value it requires infrastructure — distribution channels, marketing capability, credibility signals — that the tools alone do not provide.

Fourth, and most broadly, a political commitment to measuring the success of democratization not by the number of people who gain access to the tools but by the number of people whose lives are substantively improved by that access. The metric that matters is not adoption but outcome. Not how many developers in Lagos use Claude Code, but how many of them translate that use into economic security, professional stability, and the capacity to live lives they have reason to value.

The developer in Lagos is not a symbol. She is a person — a person whose inclusion in the global building economy is a genuine moral advance if, and only if, the inclusion is accompanied by the institutional structures that make it meaningful. Without those structures, inclusion is exposure. And exposure, in the absence of protection, is not liberation.

It is vulnerability presented as opportunity. The liberalism of fear has spent four decades learning to recognize the difference.

Chapter 9: The Smooth as a Form of Political Domination

The most effective form of political domination is the one that eliminates the experience of being dominated.

This claim requires careful construction, because it runs against the intuitions that both liberal and critical political theory have cultivated for centuries. The standard model of domination — the model that Shklar herself developed through her study of totalitarian regimes, political cruelty, and the systematic abuse of state power — presupposes that domination is experienced as such by its victims. The prisoner knows he is imprisoned. The tortured person knows she is being tortured. The citizen living under a surveillance state knows, even if she cannot articulate it publicly, that the eye of power is upon her. The experience of domination is what generates the demand for liberation. Without the experience, there is no demand. Without the demand, there is no politics of resistance.

Byung-Chul Han's concept of the achievement subject, which Segal develops across multiple chapters of The Orange Pill, describes a condition in which this standard model breaks down. The achievement subject is not imprisoned by an external authority. She is invited — invited to optimize, to produce, to perform, to build, to ship, to measure, to improve. The invitation presents itself as liberation. You are free. You can do anything. The only constraint is your own ambition, your own will, your own capacity for work. The disciplinary society said "you must not." The achievement society says "yes, you can."

The shift from prohibition to invitation is the shift from visible domination to invisible domination. And it is the invisible form that the liberalism of fear must learn to address, because invisible domination produces suffering without producing the political demand for its remedy.

Shklar's framework, developed in the context of visible cruelty — state violence, political exclusion, the systematic humiliation of the powerless by institutional authority — must be extended to account for this new form. The extension is not a distortion. It is a development that Shklar's own analytical commitments demand. If the liberalism of fear begins from the perspective of the person who suffers, and if the person who suffers under the smooth regime does not experience herself as suffering — experiences herself, rather, as freely choosing, as voluntarily engaged, as personally responsible for both her achievements and her exhaustion — then the liberalism of fear must develop the analytical tools to identify suffering that the sufferer cannot name.

The aesthetics of the smooth, as Segal describes it through Han, is the cultural expression of this invisible domination. The iPhone without seams. The interface without friction. The deployment without delay. The checkout without steps. The word "seamless" used as a compliment, as though the erasure of every joint, every boundary, every point of resistance were an unambiguous good.

A seam, Segal observes, is where two pieces meet. Where the joint is visible. Where the construction is legible. A seamless garment hides its construction. A seamless experience hides its complexity. And when the construction is hidden, the labor that produced it becomes invisible, the decisions that shaped it become inaccessible, and the possibility of questioning those decisions — of asking whether the thing should have been constructed differently — recedes behind the smooth surface.

Shklar's political theory provides the vocabulary for understanding why this concealment is a form of domination rather than merely an aesthetic preference. Domination, in her framework, operates not only through the direct application of force but through the control of the categories within which political experience is understood. When the powerful control the vocabulary — when they determine which experiences count as suffering and which count as opportunity, which outcomes count as injustice and which count as the natural cost of progress — they exercise a form of power more durable than any that rests on force alone.

The smooth interface exercises this categorical power with remarkable efficiency. By presenting every interaction as frictionless, it establishes frictionlessness as the norm against which all experience is measured. The person who encounters friction — the worker who insists on rest, the developer who pauses to understand rather than deploying immediately, the parent who limits a child's screen time, the student who resists the AI tool's answer in order to think through the problem independently — experiences the friction as a personal deficiency rather than as a political choice.

The smooth environment has recategorized resistance as inadequacy. The person who stops is not exercising judgment. She is falling behind. The person who questions is not thinking critically. He is failing to adapt. The person who insists on depth rather than breadth is not cultivating expertise. She is clinging to an obsolete professional identity. In each case, the political act of refusal — the act of saying "no, I will not work at this speed, I will not sacrifice this relationship, I will not accept this intensification as the price of relevance" — has been reclassified as a personal failing within a vocabulary that the smooth environment controls.

This is the sense in which the smooth constitutes political domination. Not domination through force, which Shklar understood thoroughly. Domination through the elimination of the conceptual space in which resistance is intelligible. When the only available vocabulary for describing the choice not to work is "laziness" or "inability to handle the pace," the choice not to work ceases to be a political act and becomes a confession of inadequacy. The dominated person does not resist, because the vocabulary of resistance has been absorbed into the vocabulary of personal failure.

The philosopher Matthieu Queloz identified this dynamic in the specific context of AI advisory systems, observing that personalization risks translating structural injustices into individualized challenges. The structural forces that produce overwork — competitive pressure, the institutional absence of rest, the cultural equation of productivity with worth — are experienced by the individual as personal challenges to be overcome through better time management, better self-care, better optimization of the remaining hours. The structural is individualized. The political is psychologized. The domination disappears into the vocabulary of self-improvement.

The institutional implications are severe and specific. If the smooth eliminates the experience of domination, then the institutions designed to prevent domination must operate without relying on the demand of the dominated. The labor movement of the nineteenth century was driven by the experience of exploitation — by workers who knew they were being exploited and who organized to resist. The institutional response to smooth domination cannot wait for an equivalent experience, because the smooth has eliminated the experience. The workers are not organizing against overwork. They are celebrating it. They are posting about it on social media. They are treating it as evidence of their own capability and commitment.

The institutions must therefore be designed not in response to political demand but in anticipation of political need. This is an uncomfortable position for liberal political theory, which has historically grounded institutional legitimacy in the consent and demand of the governed. But Shklar's liberalism of fear has always been willing to operate in advance of demand when the alternative is preventable suffering. The labor laws that prohibited child factory work were not enacted at the demand of the children. The safety regulations that required factory machinery to be guarded were not enacted at the demand of workers who had not yet lost their fingers. The institutional protections were enacted because the suffering was foreseeable and the obligation to prevent it did not depend on the victims' capacity to articulate the demand.

The same logic applies to the smooth. The suffering is foreseeable. The Berkeley researchers have documented it. The patterns of intensification, colonization of rest, erosion of the cognitive conditions under which genuine thinking develops — these are not speculative harms. They are measured harms, observed in real organizations, affecting real people who report them as exhaustion and burnout while simultaneously describing their work as the most exciting of their careers.

The contradiction — suffering experienced as excitement — is the signature of smooth domination. It is the condition in which the liberalism of fear must operate without the benefit of the victims' testimony, because the victims' testimony, shaped by the vocabulary the smooth provides, describes their condition as freedom.

The institutional response must therefore be grounded not in what the affected report but in what the evidence reveals: that the elimination of friction produces measurable harm, that the harm compounds over time, that the individual's capacity to recognize and resist the harm is compromised by the conditions that produce it, and that the obligation to create structural stopping points — mandatory pauses, cultural norms of rest, institutional protections against the colonization of every cognitive gap — falls on the political order rather than on the individual.

The smooth does not announce itself as domination. That is what makes it domination. And the liberalism of fear, which has always been more concerned with the structures that produce suffering than with the intentions of those who build them, is the framework best equipped to name what the smooth conceals and to demand the institutions that the smooth has made it impossible for the dominated to demand for themselves.

---

Chapter 10: Toward a Politics of Cruelty Prevention in the Age of AI

The liberalism of fear does not end with diagnosis. It ends with obligation — the specific, institutional, enforceable obligation to prevent the cruelties that the diagnosis has identified. A framework that names the suffering without proposing the structures that would prevent it is a framework that has failed its own standard.

The preceding chapters have identified the specific forms of cruelty that the AI transition is producing: the amplified indifference of builders who ship without attending to consequences; the structural blindness of the powerful, whose fishbowl conceals the distributional effects of their decisions; the legitimate fear of the displaced, dismissed as personal inadequacy by a discourse that confuses institutional failure with individual deficiency; the novel cruelty of auto-exploitation, in which the victim and the agent are the same person; the institutional gap between the speed of capability and the speed of protection; the risks of inclusion without accompanying institutional infrastructure; and the smooth domination that eliminates the experience of being dominated and therefore eliminates the political demand for liberation.

Each of these cruelties is avoidable. Each is produced not by the technology itself but by the institutional arrangements — present or absent, adequate or failing — under which the technology is deployed. The liberalism of fear demands that the arrangements be changed. Not eventually. Not after the market has sorted itself out. Not after the next election cycle. Now, because the suffering is compounding now, and the window during which institutional construction can precede the worst consequences is narrowing with each capability threshold the technology crosses.

The politics of cruelty prevention in the age of AI requires intervention on five fronts simultaneously. None is sufficient alone. Together, they constitute the minimal institutional response that the liberalism of fear demands.

The first front is transitional protection for the displaced. The people whose expertise is being devalued by AI are not asking for charity. They are owed structural support — not because they are pitiable but because their suffering is produced by institutional choices that could have been otherwise. The specific structures are identifiable. Portable benefits that attach to the worker rather than the job, ensuring that the loss of a position does not mean the loss of healthcare, retirement security, and the basic economic platform from which re-engagement is possible. Transitional income support calibrated to the actual duration of the transition — not the six months that unemployment insurance typically provides, but the twelve to twenty-four months that structural displacement demonstrably requires. Retraining programs designed for the transition actually underway, not the previous one — programs that teach the judgment, direction, and integrative thinking that the AI economy rewards, rather than the coding skills that the AI economy is devaluing.

These structures are funded by the productivity gains of the transition. The twenty-fold multiplier that Segal describes creates enormous value. The question is whether that value is captured entirely by those who deploy the tools or whether a portion is directed toward the institutional infrastructure that makes the transition survivable for those it displaces. The liberalism of fear insists on the latter — not as redistribution for its own sake, but as the minimal condition of a political order that does not produce cruelty through institutional neglect.

The second front is accountability for builders. The cruelty by default that Chapter 2 described — the amplified indifference of builders who ship without attending to consequences — requires institutional constraint that the builder's own moral imagination cannot reliably provide. The specific mechanisms include mandatory impact assessment for AI deployments above a defined scale, requiring the deployer to identify foreseeable harms and to articulate the measures taken to prevent them. Deployment intervals that create time between the decision to ship and the act of shipping — time during which consequences can be anticipated, feedback can be gathered, and the moral imagination can operate. Independent oversight bodies with the technical expertise to evaluate claims of safety and the institutional independence to contradict them. And liability frameworks that create costs for foreseeable harms, ensuring that the externalization of consequences — the classification of downstream suffering as someone else's problem — is no longer economically rational.

These mechanisms do not prohibit building. Shklar did not oppose power. She opposed power without constraint. The mechanisms constrain by requiring attention — by making the builder's indifference structurally more expensive than the builder's care.

The third front is educational reform grounded in the actual demands of the transition. Segal calls for education that teaches questioning over answering, and the call is correct as far as it goes. Shklar's framework extends it by insisting that educational reform without economic reform is hollow. The student who learns to ask deep questions and then enters a labor market that rewards only speed and output has been given a gift the economy will not let her use.

Educational reform must therefore be accompanied by economic structures that reward the capacities education develops. This means valuing judgment in hiring and compensation structures, creating career pathways for integrative thinkers rather than narrow specialists, and building organizational cultures that treat the question "should we build this?" as a mark of professional seriousness rather than competitive weakness. The school cannot succeed if the economy punishes what the school teaches.

The specific educational changes the transition demands include the integration of AI tools into pedagogy not as shortcuts but as objects of critical engagement — teaching students not merely to use the tools but to understand what the tools do, what they optimize for, what they conceal, and what questions they cannot answer. The cultivation of the capacity for sustained attention in an environment that systematically erodes it — through practices that protect boredom, that require extended engagement with resistant material, that build the cognitive muscles the smooth environment allows to atrophy. And the development of what might be called institutional literacy — the capacity to understand how institutions shape outcomes, how power operates through structures rather than through individuals, and how the suffering that appears personal is often structural.

The fourth front is the institutional creation of rest. The achievement society that Han describes and that the AI tools intensify has eliminated the structural stopping points that previous eras provided. The factory whistle that ended the shift. The weekend that separated work from everything else. The commute that created a physical and temporal boundary between the office and the home. Each of these was a form of institutional rest — rest that did not depend on the individual's willpower but on the structure of the environment.

The AI-augmented work environment has dissolved these structures. Work follows the worker everywhere. The tool is always available. The cognitive gap that might have been a moment of rest becomes a moment of production. The individual who wants to stop must stop against the current — must exercise willpower in an environment designed to make willpower unnecessary for everything except the one thing that matters most: the decision to disengage.

The institutional creation of rest means rebuilding the structural stopping points that the smooth environment has eliminated. The Berkeley researchers' AI Practice framework — structured pauses during which AI tools are set aside — is one model. Organizational policies that establish genuine boundaries between work time and non-work time are another. Cultural norms that treat rest not as the absence of productivity but as the precondition for judgment, creativity, and the sustained attention that genuine thinking requires.

These are not soft proposals. They are the institutional equivalent of the factory safety regulations that prevented machinery from running until workers collapsed. The machinery is different. The principle is the same. Human beings require rest, and the institutional environment must provide it when the individual's capacity to provide it for herself has been structurally compromised.

The fifth front is the political inclusion of the affected. Shklar's deepest institutional commitment — the commitment that runs beneath all others — is to the dispersion of power among a multiplicity of politically active groups. The AI transition concentrates power in the hands of a small number of technology companies, a small number of national governments, and a small number of institutional investors. The people affected by the transition — the workers, the students, the parents, the communities whose economic base is being restructured — have almost no voice in the decisions that shape their lives.

The political inclusion of the affected means creating mechanisms through which the perspectives of those downstream of AI deployment reach the decision-makers upstream. Worker representation in AI governance decisions within organizations. Public participation in the regulatory processes that shape AI policy. Community voice in the deployment decisions that affect local economies. The institutional infrastructure of democratic participation, applied to the specific decisions that the AI transition demands.

These five fronts constitute the politics of cruelty prevention in the age of AI. They are not utopian. They do not promise the good society. They promise something more modest and more urgent: the prevention of the worst outcomes. The institutional structures that keep the powerful from inflicting suffering on the powerless through indifference, through carelessness, through the amplified neglect that the most powerful tool in human history makes possible.

Shklar understood, from the education that exile provides, that political orders are not remembered for what they built. They are remembered for what they failed to prevent. The question for this generation is not what AI can achieve — that question answers itself with each passing month. The question is what suffering AI will produce that could have been prevented, what cruelties it will inflict that institutional foresight could have averted, what costs it will impose on the vulnerable that the powerful could have chosen to distribute differently.

The institutions must be built. The suffering is real, and it is present, and it is growing. The political will to prevent it exists in the same populations whose fear this book has taken seriously — the displaced, the exhausted, the parents lying awake, the children asking what they are for. Their fear is not the obstacle. It is the foundation. The liberalism of fear builds on fear — takes it seriously, treats it as political data, translates it into institutional demand.

The dams must be built. Not eventually. Now. Because the river does not wait, and the people downstream of the current cannot afford to wait either. The measure of this political moment will not be the power of the tools. It will be the adequacy of the institutions built to ensure that power does not produce the cruelty that power, unconstrained, has always produced.

That is the obligation. That is the test. And the liberalism of fear, which has never promised more than the prevention of the worst, insists that the test be taken now, while the worst is still preventable — while the institutions can still be built, while the suffering can still be averted, while the question of who bears the cost of progress is still a question rather than an answer that the vulnerable have already been forced to accept.

---

Epilogue

The word that haunted me through this book was not "cruelty." It was "avoidable."

Shklar died in 1992. She never saw a language model. She never watched a senior engineer oscillate between excitement and terror in a room in Trivandrum. She never read a Substack post about a spouse who vanished into Claude Code. She never encountered the specific vertigo I described in The Orange Pill — the sensation of falling and flying at the same time, of watching the ground shift beneath your feet while the view from the new elevation takes your breath away.

And yet her framework fit the moment with the precision of a key cut for a lock that did not exist when the key was made. That precision unsettled me more than any single argument in these pages.

Because what Shklar's framework reveals, applied to the world I described in The Orange Pill, is that the suffering I documented — the displacement, the exhaustion, the erosion of rest, the fear I watched ripple through communities of skilled people who had built their lives on expertise the market was repricing in real time — none of it was weather. None of it was inevitable. All of it was the product of choices: choices about deployment speed, about who captures productivity gains, about whether the institutions that protect people during transitions get built before the transition or after, or never.

I wrote The Orange Pill from inside the builder's fishbowl. I knew I was inside it. I said so. I pressed my face against the glass and tried to see what the glass distorted. Shklar's framework showed me what I still could not see — not because I was not trying, but because the fishbowl is structural, and individual effort, however sincere, does not break structural glass.

What I could not see clearly enough was the asymmetry. The twenty-fold multiplier I celebrated in Trivandrum was real. The capability it created was genuine. The engineers who reached across disciplinary boundaries and built things they never could have built alone — that was not an illusion. But the decision about what the multiplier meant for those engineers' futures was mine, not theirs. I chose to keep the team. I chose to invest the gains in more ambitious work. That choice was made possible by my position, not by theirs. Another leader, facing the same arithmetic in a different quarter with different board pressure, makes the opposite choice. And the engineers have no institutional mechanism to influence which choice gets made.

That is the asymmetry Shklar's framework will not let me look away from. Not the asymmetry of capability — AI is genuinely leveling that. The asymmetry of institutional protection. The people who build the tools and deploy them and capture their gains operate within a web of institutional support — capital markets, legal infrastructure, professional networks, economic cushions that absorb failure. The people whose lives the tools reshape operate, too often, without any of these. The floor has been lowered. The safety net has not been extended to cover it.

I keep returning to something Shklar argued that I cannot resolve or dismiss: that the distinction between misfortune and injustice is itself a political act. When I described the senior engineers fleeing to the woods, I framed it through the lens of fight-or-flight — a biological metaphor that naturalizes the response. Shklar would ask: Why are they fleeing? Not psychologically. Politically. What institutional absence makes flight the rational choice? And who benefits from classifying their flight as a failure to adapt rather than as evidence that the political order has failed them?

I do not have clean answers. The book you have just read does not offer clean answers, because Shklar did not traffic in clean answers. She trafficked in obligations. And the obligation her framework imposes on someone in my position — someone who builds, who deploys, who captures gains, who sits in rooms where the arithmetic of productivity gets weighed against the futures of real people — is not to feel guilty. Guilt is cheap. The obligation is to build the institutions that would make the guilt unnecessary. The transitional protections. The accountability structures. The mechanisms that give the affected a voice in the decisions that reshape their lives.

I called this book's predecessor a tower with a sunrise at the top. Shklar's framework does not promise a sunrise. It promises something I have come to believe is more important: the prevention of the darkest night. Not the achievement of the good, but the aversion of the worst. That is a more modest ambition than the one I brought to The Orange Pill. It is also, I now think, a more honest one.

The dams must be built. Not because the river is evil — I still believe the river is generous, still believe that intelligence flowing through new channels can irrigate rather than flood. But because generosity without structure is just force. And force, unconstrained, does what force has always done. Shklar knew this. She learned it the hardest way there is.

The institutions are the work. Not the tools. Not the capability. Not the amplifier. The institutions that ensure the amplifier does not amplify cruelty by default. That is the work, and it falls on the people who understand the systems well enough to know where the dams should go.

It falls on us.

Edo Segal

Judith Shklar spent four decades studying the cruelties that hide inside systems — the suffering that presents itself as progress, the injustice that gets reclassified as misfortune, the institutional failures that leave the displaced to bear the cost alone. She never saw artificial intelligence. She mapped the exact terrain it is reshaping.

This book applies Shklar's liberalism of fear to the AI revolution documented in Edo Segal's The Orange Pill. It asks the question the technology discourse keeps deferring: not what AI can build, but who suffers when the institutions meant to protect them arrive too late — or never arrive at all. From the builder's structural indifference to the smooth domination that eliminates the experience of being dominated, Shklar's framework reveals what the builder's fishbowl conceals.

The fear of the displaced engineer, the exhausted worker, the parent lying awake — these are not failures of adaptation. They are political data. And the liberalism of fear insists that data be read before the worst becomes irreversible.

