Cruelty by default is the Shklarian extension of putting cruelty first to the specific conditions of contemporary technological production. The concept names a structural phenomenon: cruelty that is not intended, that is not enacted by identifiable agents with malicious purpose, and that is nonetheless produced reliably by institutional environments which systematically prevent builders from attending to the consequences of their products. The builder does not choose indifference. The environment produces indifference as a structural feature. Competitive pressure, deployment velocity, cultural norms that celebrate shipping over reflection, the specific exhilaration of operating at the frontier — these conditions combine to produce moral imagination deficits that no individual builder can correct through effort alone.
The analytical leverage of the concept comes from its decoupling of intention from outcome. Classical cruelty analysis presupposes a cruel actor — a torturer, a tyrant, a commander who orders the bombardment. Shklar studied these figures and understood them, but her mature framework focused on structural cruelty that operates through systems rather than through persons. Edo Segal's candid confession in The Orange Pill provides the paradigm case: he built addictive products knowing they exploited cognitive vulnerabilities, not because he intended harm but because the competitive environment rendered hesitation professionally suicidal. The knowledge was present. The will to act on it was defeated by the institutional structure within which building occurred.
The speed of contemporary AI deployment maximizes the production of cruelty by default. The faster the cycle between decision and shipping, the less time exists for the moral imagination to operate. Moral imagination — the capacity to anticipate consequences for people not in the room — is not instantaneous. It requires time: reflection, consultation, the deliberate imagining of the person downstream who will be affected. The Berkeley researchers documented the elimination of precisely these cognitive gaps. When AI tools fill every interval with productive activity, the spaces that previously served as sites of reflection disappear. The builder is always building. The question of whether the thing being built will cause harm is deferred — not deliberately, but structurally.
Shklar's framework locates moral responsibility at the institutional rather than the individual level, which is where the concept's analytical power becomes politically consequential. The individual builder cannot, within the competitive environment, unilaterally slow down without bearing disproportionate costs. The rational fear of competitive marginalization is real. But rational fear does not eliminate moral responsibility — it redistributes it. If individual builders cannot exercise the moral imagination that cruelty prevention demands, the responsibility shifts to the regulatory frameworks, professional norms, and accountability structures that should exist to make the exercise of moral imagination compatible with competitive survival. The absence of those structures is not the builder's fault. But the suffering their absence produces is not the displaced worker's fault either, and someone must bear the obligation of prevention.
Matthieu Queloz's 2025 application of Shklar's framework to AI advisory systems identified a directly relevant pattern: the epistemic, structural, and temporal asymmetries between builders and users. Builders possess information users do not — about how systems work, what data trained them, what failure modes exist, what they optimize for. This information asymmetry is a form of power, and power without accountability is the precondition for cruelty. The technology priesthood's self-assessment — we are building responsibly, thinking about safety, committed to beneficial outcomes — is not an adequate substitute for external constraint, because self-regulation reliably fails at the moment of maximum pressure, when the priesthood's interests and the public's interests diverge.
The concept is developed in the Shklar volume of the Orange Pill Cycle as an extension of her framework to the specific phenomenology of contemporary technological production. It synthesizes Shklar's structural analysis of cruelty with Segal's confessional account of product design and with Queloz's 2025 scholarly application of the liberalism of fear to AI systems.
Intention is not the relevant variable. The cruelty produced by default is real regardless of whether any individual intended it — the structural conditions of its production are what matter.
Speed eliminates moral imagination. Deployment cycles compressed to days or weeks systematically prevent the reflective cognition that anticipation of consequences requires.
The competitive environment rewards indifference. Builders who slow down to assess consequences bear disproportionate costs in industries that celebrate velocity, creating a selection pressure against moral attention.
Self-regulation reliably fails. The technology priesthood's self-assessments cannot substitute for external constraint because the moment of maximum need is also the moment of maximum pressure against restraint.
Responsibility shifts upward when individuals cannot exercise it. When structural conditions prevent individual moral agency, the moral responsibility migrates to the institutional level where structural change is possible.