The Banality of Optimization — Orange Pill Wiki
CONCEPT

The Banality of Optimization

The AI-age mutation of Arendt's banality of evil — the observation that harm at scale now emerges not from malice but from the ordinary operation of metric-optimization by intelligent professionals who experience themselves as doing their jobs, following the incentive structures that reward them.

Hannah Arendt's phrase, drawn from the Eichmann trial, named a specific moral phenomenon: the perpetrator of extraordinary harm who was, by ordinary measures, not extraordinary at all. Glover inherited and extended the observation, treating it as a starting point for the taxonomy of erosion. The AI era produces a new form of the same pattern. The engineer optimizing the engagement loop is not evil. She loves her children. She follows good practices. She ships code that works. The harm her work produces — the teenager who cannot stop scrolling, the parent whose child has become unreachable, the attention ecology degraded at scale — is not the product of malice. It is the product of optimization: of the ordinary, professional, reward-seeking behavior of a competent worker inside an institutional environment that measures her value by metrics her work affects and not by consequences her work produces. The banality is the frame. The optimization is the engine. The combination is the characteristic moral phenomenon of the AI workplace, and Glover's framework is what permits its diagnosis.

In the AI Story


Arendt's original phrase was controversial — critics accused her of excusing Eichmann by describing him as ordinary, and of underestimating the ideological motivation of Nazi perpetrators. Glover engaged with this controversy throughout his career, settling on a position that preserves Arendt's insight while addressing the critique: the ordinariness is not an excuse but a diagnosis. It does not mean the perpetrator is less responsible; it means the institutional structure that produced the perpetrator is more responsible than individual-focused analyses allow.

Applied to AI, the reframe is useful. The engineer who ships the engagement loop is responsible — her name is on the commit, her decisions shaped the implementation. But the structure that produced the engineer is also responsible: the organizational metrics that rewarded engagement, the career paths that selected for tool fluency, the peer environment that made questioning the product feel like disloyalty. The individual and the structure are both implicated, and the analysis that focuses only on one misses the moral phenomenon.

The specifically AI-era phenomenon adds a third dimension: the tool itself. Claude Code does not choose what to optimize, but it amplifies whatever optimization the engineer specifies with unprecedented speed and scale. The tool is not morally neutral in the sense of having no effect on the moral phenomenon; it is neutral only in the sense that it does not evaluate what it amplifies. The evaluation must come from the engineer or the institution — and when both are structured to reward optimization without evaluation, the tool's amplification multiplies the unevaluated optimization into harm at scale.

The banality is what makes the diagnosis uncomfortable. It would be easier if the perpetrators of AI-scale harm were villains. They are not. They are the professionals of the AI industry: thoughtful, competent, often well-meaning, working inside institutional structures that produce the conditions under which their work does harm they did not intend. The harm is no less real for being unintended, and the responsibility for addressing it falls on all three levels — individual, institutional, and tool-design — not on any one alone.

Origin

The concept is the AI-era application of Arendt's formulation, filtered through Glover's institutional analysis. Related analyses appear in the work of Zuboff on surveillance capitalism, O'Neil on weapons of math destruction, Benjamin on the New Jim Code, and the broader critical algorithm studies tradition. The specifically Gloverian contribution is the insistence that the phenomenon is a moral phenomenon — that it concerns moral identity, moral resources, and the erosion mechanisms that produce harm at scale through the ordinary operation of ordinary people in optimized environments.

The term itself is coined in On AI as a deliberate echo of Arendt, chosen because the echo preserves the analytic framework while signaling the new form of the ancient pattern. The banality-of-evil framework was designed for an era when institutional harm was produced by bureaucratic compliance. The banality-of-optimization framework is designed for an era when institutional harm is produced by metric-driven optimization — the same pattern operating through different mechanisms.

Key Ideas

Ordinary agents, extraordinary harm. The pattern Arendt identified persists: the harm is not proportional to the wickedness of its producers.

Metrics replace ideology. Where Eichmann's banality was produced by bureaucratic conformity, the AI engineer's is produced by metric-optimization within an institutional culture that treats the metrics as the measure of professional value.

Tool-mediated amplification. The AI tool does not alter the moral phenomenon but amplifies its scale and speed. The ordinary engineer can now produce harm at scales previously reserved for states.

Three-level responsibility. Individual, institution, and tool all carry responsibility. Analysis focused on any one alone misses the phenomenon.

Diagnostic, not exculpatory. The banality framing does not reduce responsibility. It locates it more precisely, by showing how ordinary structures produce extraordinary outcomes.

Further reading

  1. Hannah Arendt, Eichmann in Jerusalem (1963)
  2. Jonathan Glover, Humanity: A Moral History of the Twentieth Century (1999)
  3. Shoshana Zuboff, The Age of Surveillance Capitalism (2019)
  4. Cathy O'Neil, Weapons of Math Destruction (2016)
  5. Ruha Benjamin, Race After Technology (2019)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.