Campbell's Law — Orange Pill Wiki
CONCEPT

Campbell's Law

The principle — stated by Campbell in 1976 — that any quantitative indicator used for decision-making will be corrupted by the pressure it creates, and will distort the process it was intended to monitor.

Campbell's Law states: the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures, and the more apt it will be to distort and corrupt the social processes it is intended to monitor. The mechanism is evolutionary in precisely the sense Campbell's broader framework describes: agents under selection pressure adapt to the environment that selects them. When the environment is a metric, agents adapt to the metric. The adaptation is rarely dishonest and almost never conscious — it is the predictable behavior of organisms responding to selection pressure by optimizing whatever the environment rewards. Standardized test scores corrupt education, crime statistics corrupt policing, readmission rates corrupt hospital care, and productivity multipliers, Campbell's framework predicts with structural certainty, will corrupt the AI-augmented workplace.
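The selection dynamic can be made concrete in a toy sketch (the model and all its coefficients are invented for illustration, not drawn from Campbell): an agent hill-climbs on a proxy metric that rewards gaming more than real work, and the measured score rises exactly as the quality the metric was meant to track collapses.

```python
def simulate(rounds=50, step=0.05):
    """Toy model of Campbell's Law: selection sees only the metric.

    `gaming` is the fraction of effort spent optimizing the measure
    itself rather than the underlying quality. The metric pays gaming
    double, so every step toward gaming survives selection.
    (All coefficients are invented for illustration.)
    """
    metric = lambda g: (1 - g) * 1.0 + g * 2.0   # what gets measured
    gaming = 0.0
    for _ in range(rounds):
        candidate = min(1.0, gaming + step)
        if metric(candidate) >= metric(gaming):  # selection on the metric alone
            gaming = candidate
    quality = 1.0 - gaming                       # what the metric was meant to track
    return metric(gaming), quality

score, quality = simulate()
print(score, quality)  # → 2.0 0.0: the measure improved, the reality degraded
```

Nothing in the loop is dishonest; the drift toward gaming is just what survives a selection environment defined by the metric.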

In the AI Story

The law applies with devastating precision to the AI moment's favorite metric: the productivity multiplier. Edo Segal's twenty-fold gain in Trivandrum is real — measurable in features shipped, code generated, timelines compressed. But the moment the multiplier becomes the basis for organizational evaluation, Campbell's Law predicts that engineers will be selected for their ability to increase the multiplier rather than for the uncaptured dimensions of quality — architectural coherence, long-term maintainability, the decision not to build a feature that would introduce technical debt, the ten minutes of contemplation that prevented a design error.

The law compounds across the AI ecosystem. Benchmark scores corrupt AI development: models get optimized for benchmark performance rather than general capability. Leaderboard rankings corrupt research: systems that excel on the leaderboard's specific tasks win funding even when broader capability lags. User satisfaction scores corrupt product design: RLHF trains models to produce outputs that satisfy evaluators rather than outputs that deserve satisfaction. The agreeableness of AI assistants is not a bug but the predicted behavior of a system optimized on a metric — human ratings — whose corruption the law made inevitable.

Campbell's proposed mitigation was not metric elimination — he recognized metrics are necessary for institutional function — but methodological triangulation: the use of multiple, independent, partially overlapping measures, none individually sufficient and all individually corruptible, whose convergence provides more robust assessment than any single measure. The principle is surveying triangulation: no single line of sight gives you position, but three lines of sight from different locations do.
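A minimal sketch of the triangulation idea (the proxies and numbers below are hypothetical, not Campbell's): combine several independent, individually corruptible measures so that gaming any single one moves the combined reading far less than gaming them all at once.

```python
import statistics

def triangulate(measures, data):
    """Median of several independent readings of the same quality.

    No single measure is trusted; an agent must corrupt a majority of
    them simultaneously to move the combined assessment.
    """
    return statistics.median(measure(data) for measure in measures)

# Three crude, partially overlapping proxies for "software quality"
# (all names and weightings are invented for illustration):
measures = [
    lambda d: d["tests_passed"] / d["tests_total"],
    lambda d: 1.0 - min(d["defects_per_kloc"] / 10.0, 1.0),
    lambda d: d["review_approval_rate"],
]

honest = {"tests_passed": 90, "tests_total": 100,
          "defects_per_kloc": 2.0, "review_approval_rate": 0.85}
gamed = dict(honest, tests_passed=100)  # one proxy gamed to perfection

print(triangulate(measures, honest))  # → 0.85
print(triangulate(measures, gamed))   # → 0.85  (median absorbs the gamed proxy)
```

Even so, the set of proxies must be rotated over time: once agents learn which measures are in play, each remains individually corruptible, and a stable majority of them can be gamed together.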

Marilyn Strathern restated the law with economy: when a measure becomes a target, it ceases to be a good measure. The restatement applies to every metric in the AI ecosystem — productivity multipliers, benchmarks, satisfaction ratings, revenue curves. Each captures something real. Each, when targeted, ceases to capture the thing it was designed to measure. And the cumulative effect of an ecosystem where every metric is a target is an ecosystem where every measure of quality has been corrupted — an ecosystem that is, by every available measure, improving, and that is, by every uncaptured dimension, degrading.

Origin

Campbell published the law in his 1976 paper Assessing the Impact of Planned Social Change, drawing on extensive work in social program evaluation and quasi-experimental methodology. The law generalized observations from education, criminal justice, and public health into a single structural principle.

It has become foundational in policy evaluation and organizational theory, and is closely related to Goodhart's Law: the British economist Charles Goodhart articulated a similar principle in the context of monetary policy in 1975. The independent convergence of Campbell, Goodhart, and later Strathern on the same principle suggests it captures something structural rather than accidental about measurement under selection pressure.

Key Ideas

The corruption is not moral failure. It is the predictable behavior of agents adapting to a selection environment — the same process that produces biological adaptation, operating in a different substrate.

Metrics capture dimensions; reality has more dimensions than any metric captures. The uncaptured dimensions are neglected not by decision but by structure, because the selection environment does not reward them.

Improvement in the metric can coexist with degradation in the underlying reality. Test scores rose while education stagnated. Crime statistics improved while streets did not. Productivity metrics climb while judgment erodes.

Triangulation is partial mitigation, not solution. Multiple independent measures are harder to corrupt simultaneously, but each remains corruptible, and rotation between measures is required to prevent adaptation.

The law applies recursively. Metrics used to evaluate AI are corrupted. Metrics used to evaluate AI safety are corrupted. Metrics used to evaluate the metrics are corrupted. The law does not exempt its own domain.

Debates & Critiques

Some metric advocates argue that Campbell's Law is overstated — that well-designed metrics with proper auditing can resist corruption. Campbell's response was that every auditing system is itself a metric subject to the same pressure. The deeper debate is whether any institutional structure can reliably distinguish measured improvement from reality improvement, or whether the gap between the two is a permanent feature of quantitative governance that can be managed but not eliminated.

Further reading

  1. Campbell, D. T. (1976). Assessing the Impact of Planned Social Change. Occasional Paper Series, Dartmouth College.
  2. Goodhart, C. (1975). Problems of Monetary Management: The UK Experience.
  3. Strathern, M. (1997). Improving Ratings: Audit in the British University System.
  4. Muller, J. Z. (2018). The Tyranny of Metrics.
  5. Scott, J. C. (1998). Seeing Like a State.
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.