Algorithm aversion designates the asymmetric reaction to errors made by algorithms versus identical errors made by humans. The asymmetry is irrational in the technical sense, since the source of an error should be irrelevant to assessing its severity, but it is psychologically real and politically consequential. Research across domains from medical diagnosis to financial forecasting to parole decisions documents that people become less willing to use algorithmic tools after observing algorithmic errors than after observing equivalent human errors, even when aggregate algorithmic performance exceeds human performance by substantial margins. The aversion means that deployment of AI decision support faces social resistance disproportionate to actual risk, and this resistance is not responsive to evidence about comparative accuracy. Understanding and designing around algorithm aversion is essential to realizing the welfare gains that Choice Engines make possible.
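The mechanism can be made concrete with a toy model. The sketch below is a hedged illustration, not drawn from the underlying studies: the error rates, penalty sizes, and trust-update rule are all assumptions, chosen only to show how an asymmetric penalty on algorithmic errors can drive a decision maker away from the more accurate source.

```python
# Toy model of asymmetric trust updating. All numbers are illustrative
# assumptions, not estimates from the literature: the algorithm errs half
# as often as the human, but each algorithmic error costs more trust than
# an identical human error.
import random

random.seed(0)

ALGO_ERROR_RATE = 0.10    # assumed: algorithm errs on 10% of cases
HUMAN_ERROR_RATE = 0.20   # assumed: human errs on 20% of cases
PENALTY_ALGO = 0.30       # assumed: trust lost per algorithmic error
PENALTY_HUMAN = 0.05      # assumed: trust lost per identical human error
RECOVERY = 0.02           # assumed: trust regained per correct forecast

trust_algo, trust_human = 0.5, 0.5
for _ in range(500):
    # Each round, both sources make a forecast; errors are independent draws.
    algo_err = random.random() < ALGO_ERROR_RATE
    human_err = random.random() < HUMAN_ERROR_RATE
    # Asymmetric update: the same error is punished more heavily when it is
    # the algorithm's, which is the signature of algorithm aversion.
    trust_algo += -PENALTY_ALGO if algo_err else RECOVERY
    trust_human += -PENALTY_HUMAN if human_err else RECOVERY
    # Keep trust bounded in [0, 1].
    trust_algo = min(max(trust_algo, 0.0), 1.0)
    trust_human = min(max(trust_human, 0.0), 1.0)

print(f"trust in algorithm (10% error rate): {trust_algo:.2f}")
print(f"trust in human     (20% error rate): {trust_human:.2f}")
```

Under these assumed parameters, trust in the more accurate algorithm collapses toward zero while trust in the less accurate human rises toward its ceiling: the underuse pattern the research documents, produced entirely by the asymmetric penalty rather than by any difference in accuracy.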
The welfare cost of algorithm aversion is measured in outcomes forgone. In bail decisions, algorithms outperform human judges by margins that, if deployed, would reduce crime rates while holding incarceration constant, or reduce incarceration while holding crime rates constant. The refusal to deploy, driven by aversion rather than evidence, costs public safety and individual liberty simultaneously. In medical diagnosis, aversion-driven resistance to algorithmic tools costs lives, a cost that evidence-based policy cannot easily defend but that political processes nonetheless produce.
The institutional response is not to dismiss algorithm aversion but to design around it. Transparency reduces aversion: people are more willing to accept algorithmic recommendations when they understand how the recommendations were generated. Human oversight reduces aversion: people are more comfortable with algorithms that inform human decisions than with algorithms that replace them. Demonstrated track records reduce aversion: people who have personally experienced the benefits of algorithmic assistance are less averse than people whose only exposure is abstract. Each finding suggests specific design features for AI decision-support systems that would increase social acceptance without compromising accuracy.
The framework's relationship to broader AI governance is instructive. Algorithm aversion is a cognitive bias with political consequences: it systematically underestimates the welfare gains available from well-designed algorithmic systems. Regulatory architectures that respond to aversion by restricting deployment may appear responsive to public concern but actually impose costs on the populations whose welfare the restriction claims to protect. The corrective is not ignoring aversion — that produces political failure and its own costs — but designing deployment structures that address the aversion's underlying sources while preserving the accuracy gains algorithmic tools provide.
The phenomenon was named and characterized by Berkeley Dietvorst, Joseph Simmons, and Cade Massey in a 2015 Journal of Experimental Psychology: General paper that became the foundation of subsequent research. The broader pattern, asymmetric trust in human versus algorithmic judgment, had been observed earlier but not systematically theorized. Sunstein's integration of the framework into AI governance theory emerged in papers from 2020 onward.
The aversion is asymmetric. Identical errors trigger greater rejection when attributed to algorithms than when attributed to humans, independently of aggregate performance.
The aversion is evidence-resistant. Providing comparative accuracy data rarely reduces aversion; the cognitive mechanism operates at a level that explicit information does not reach directly.
Design features can reduce aversion. Transparency, human oversight, and demonstrated track records address the aversion's operational sources without requiring users to overcome the bias through reasoning alone.
The welfare cost is real. Aversion-driven refusal to deploy accurate algorithmic tools imposes costs measured in lives, liberty, and welfare across every domain where algorithms outperform humans.