In a sequence of papers from 2018 through 2022, Daron Acemoglu and Pascual Restrepo developed a task-based framework that separates automation's effects into two competing forces. The displacement effect captures the loss to machines of tasks workers previously performed. The reinstatement effect captures the creation of new tasks in which human labor is newly productive. When reinstatement exceeds displacement, productivity gains flow to workers through wage growth. When displacement exceeds reinstatement, productivity rises while wages stagnate and labor's share of income falls. The empirical finding that unsettled the AI discourse: since roughly 1980, the balance has tilted toward displacement, and the AI wave threatens to extend this imbalance rather than correct it.
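In its simplest Cobb-Douglas form, the framework's accounting can be sketched directly (a simplified rendering, with notation compressed from the original papers): tasks lie on an interval, machines perform tasks up to a threshold $I$, the frontier of newest tasks is $N$, and labor performs the tasks in between. Labor's share of income then moves one-for-one with the measure of tasks labor retains:

```latex
s_L \propto N - I
\qquad\Longrightarrow\qquad
\Delta s_L \;\approx\; \underbrace{\Delta N}_{\text{reinstatement}} \;-\; \underbrace{\Delta I}_{\text{displacement}}
```

Automation raises $I$; new-task creation raises $N$. The two forces enter with opposite signs, which is how productivity can rise while labor's share falls.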
The framework displaces the standard economic presumption — inherited from the postwar experience — that technology and labor are complements whose returns rise together. That presumption was empirically valid for the period it described. Acemoglu and Restrepo's contribution was to show, with task-level data, that complementarity is a conditional property, not a structural law. The conditions that produced mid-century wage growth — strong unions, collective bargaining, public investment in complementary skills, tax structures that incentivized labor-augmenting innovation — have weakened or inverted.
The AI application is particularly sharp because large language models target the reinstatement side of the equation. Previous automation waves created new task categories — software developer, data analyst, UX designer — that absorbed displaced workers at higher wages. Current AI automates those same categories. The software death cross is the reinstatement effect running backward: the new tasks created by the previous wave are being automated by the current one before new human tasks have emerged to replace them.
The framework also diagnoses what Acemoglu and Restrepo call so-so automation — technologies productive enough to displace workers but not productive enough to generate the output expansion that historically drove reinstatement. Self-checkout kiosks are the canonical example. The distributional consequences of so-so automation are uniformly bad: workers lose tasks, productivity rises modestly, capital captures the gains, and no expanding sector creates the jobs that would have absorbed displacement in earlier eras.
The policy implication — developed in Acemoglu's subsequent work with Simon Johnson, most fully in Power and Progress (2023) — is that the direction of technological development is itself a choice. Tax codes that favor capital over labor, research subsidies directed at automation rather than augmentation, and corporate incentives measured in headcount reduction all push technology toward the displacement side of the ledger. The framework makes these choices visible as choices rather than natural laws.
The foundational paper, 'The Race between Man and Machine,' appeared in the American Economic Review in 2018. The framework was extended in 'Automation and New Tasks' (Journal of Economic Perspectives, 2019) and 'Robots and Jobs' (Journal of Political Economy, 2020), which used US commuting-zone data to establish that industrial robot adoption reduced employment and wages in exposed regions.
Automation is task-level, not occupation-level. The relevant unit of analysis is the task, because occupations are bundles of tasks that can be unbundled when some are automated.
Displacement and reinstatement are independent forces. They can grow together, shrink together, or move in opposite directions depending on the character of technological change and institutional context.
So-so automation is worse than full automation. Partial automation that displaces workers without generating large productivity gains produces the worst distributional outcomes because the offsetting output effect is absent.
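The distributional logic of so-so automation can be illustrated with a deliberately minimal numerical sketch. The function, parameter values, and one-for-one labor-share assumption below are illustrative inventions, not Acemoglu and Restrepo's calibration; the point is only that the wage bill falls when automation's productivity gain is too small to offset the tasks labor loses.

```python
def wage_bill(automated_share, productivity_gain, base_output=100.0):
    """Stylized wage bill under task automation.

    Assumes labor's income share falls one-for-one with the share of
    tasks automated, while total output rises with machines' productivity
    gain on those tasks. (Illustrative assumption, not the papers' model.)
    """
    output = base_output * (1.0 + productivity_gain * automated_share)
    labor_share = 1.0 - automated_share
    return labor_share * output

baseline = wage_bill(0.0, 0.0)           # no automation
soso = wage_bill(0.2, 0.05)              # 20% of tasks lost, machines barely better
transformative = wage_bill(0.2, 2.0)     # same displacement, large productivity gain
```

With the same displacement, the so-so case leaves workers with a smaller wage bill than no automation at all, while the high-productivity case expands output enough that the smaller labor share still buys a larger absolute wage bill — the offsetting output effect the paragraph above describes.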
The direction of innovation is endogenous. Research incentives, tax structures, and corporate priorities shape whether firms invest in displacement or reinstatement technologies.
Mainstream growth economists including Erik Brynjolfsson have argued that reinstatement takes longer than the framework implies and that current AI productivity gains are early-cycle rather than terminal. Acemoglu's response is that the null hypothesis should be institutional failure, not automatic correction, and the burden of proof falls on those predicting reinstatement rather than those observing its current absence.