The Turing Trap names the structural bias in contemporary AI development toward building systems that substitute for human labor rather than augment it. The trap is set by Alan Turing's 1950 proposal that machine intelligence be measured by indistinguishability from human performance — a standard that, applied as a research goal, orients the entire field toward replacement rather than partnership. The trap is reinforced by a tax code that subsidizes capital investment while taxing labor, by research benchmarks that measure machine-vs-human competition rather than human-AI collaboration, and by organizational incentives that favor cost reduction over capability expansion. The resulting bias is not a conspiracy but an emergent property of thousands of locally rational decisions that, in aggregate, produce a technology trajectory that concentrates economic and political power among those who control the automated systems. Erik Brynjolfsson laid out the argument in a 2022 Dædalus paper and sharpened it in subsequent public lectures, calling for specific policy interventions to rebalance the incentive structure toward augmentation.
The logic of the trap is precise. If the benchmark for AI achievement is human-level performance on human tasks, then each advance in capability brings machines closer to matching humans on another task — and each task matched is a task that no longer requires a human. Success by the field's own standard implies human obsolescence. The better the AI works, the more humans it displaces. "As machines become better substitutes for human labor," Brynjolfsson wrote, "workers lose economic and political bargaining power and become increasingly dependent on those who control the technology."
Automation and augmentation use the same technology but point it in different directions. Automation asks: can the machine do this task instead of the human? Augmentation asks: can the machine enable the human to do something neither could do alone? Both can be valuable, but the aggregate balance determines distributional outcomes. The current incentive structure — tax subsidies for capital, benchmark competitions framed as human-vs-machine, cost-reduction metrics in corporate deployment — systematically tilts the balance toward substitution.
The empirical evidence partially supports and partially complicates the argument. Brynjolfsson's 2023 customer service study — Generative AI at Work — showed AI amplifying novice workers more than experts, compressing skill gaps rather than widening them. But subsequent data showed junior hiring collapsing in AI-exposed occupations. The existing workforce was being augmented. The future workforce was being thinned. Both dynamics operated simultaneously, and both could be influenced by deployment choices.
Ajay Agrawal, Joshua Gans, and Avi Goldfarb challenged the distinction in a Brookings paper, arguing that "one person's substitute is another's complement" — that a tool built with automation intent often augments the majority of workers who remain employed. Brynjolfsson acknowledged the complexity without conceding the argument: the boundary was blurry, but the aggregate direction still mattered, and policy levers could shift it meaningfully.
Brynjolfsson's 2022 paper The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence, published in Dædalus (the journal of the American Academy of Arts and Sciences), named and formalized the concept. The paper drew on decades of his research on technology and labor markets, sharpened by the acceleration of AI capabilities in the years leading up to its publication.
The argument's intellectual lineage runs through earlier work by Douglas Engelbart on augmentation over automation, through the sociology-of-technology tradition that emphasized design choices as political choices, and through the labor economics literature on skill-biased technical change. Brynjolfsson's contribution was to synthesize these threads into a specific diagnosis of contemporary AI development and a set of policy prescriptions for escaping the trap.
Turing's benchmark sets the wrong goal. Measuring AI by indistinguishability from humans orients the field toward replacement rather than complementarity.
The trap operates through incentives. Tax code, research culture, and organizational metrics jointly tilt deployment toward automation over augmentation.
Automation and augmentation are design choices. The same technology can be built for either purpose; the choice is made by humans, not dictated by the technology.
The consequences are political as well as economic. Workers whose labor is substitutable have diminishing bargaining power, concentrating both economic and political power among those who control the AI systems.
Escape requires deliberate intervention. Rebalancing incentives through tax reform, augmentation-oriented research funding, and deployment reporting can shift the default trajectory.
Agrawal, Gans, and Goldfarb's Turing Transformation critique argued the automation-augmentation distinction is unstable in practice — the same tool produces different effects for different workers in different contexts. Brynjolfsson accepted the complexity but maintained that aggregate direction remains policy-relevant. A separate debate concerns whether the tax code rebalancing he proposes is politically feasible; critics argue that capital-labor tax differentials reflect broader political economy forces that piecemeal reform cannot address.