The task-based framework is David Autor's signature analytical contribution to labor economics. Rather than treating jobs as indivisible units that either survive or disappear under technological change, the framework decomposes every occupation into a bundle of discrete tasks — some routine, some non-routine; some cognitive, some manual; some interpersonal, some analytical. Technology enters this bundle selectively, automating some tasks while leaving others untouched, and in doing so transforms the occupation without eliminating it. The framework's empirical power lies in its predictive precision: by measuring the task composition of an occupation and the automation susceptibility of each task, one can forecast with unusual accuracy which jobs will be hollowed out, which will expand, and which will fundamentally change in character. Applied to AI, the framework predicts a reorganization rather than a collapse of the labor market.
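A minimal sketch of that accounting, with an entirely hypothetical occupation: the task names, time shares, and susceptibility scores below are invented for illustration, not drawn from Autor's data.

```python
# Hypothetical illustration of the task-based decomposition: an occupation
# is a bundle of tasks, each with a share of work time and an automation
# susceptibility in [0, 1]. Occupation-level exposure is the time-weighted
# average of the task susceptibilities.

paralegal = {
    # task: (share of work time, automation susceptibility) -- invented numbers
    "document review":     (0.40, 0.90),  # routine cognitive
    "legal research":      (0.25, 0.60),  # partially routinized by AI
    "client interviews":   (0.20, 0.15),  # non-routine interpersonal
    "courtroom logistics": (0.15, 0.10),  # non-routine manual
}

def exposure(occupation: dict) -> float:
    """Time-weighted automation exposure of a task bundle."""
    return sum(share * susceptibility
               for share, susceptibility in occupation.values())

print(f"{exposure(paralegal):.3f}")  # 0.555: the job is transformed, not eliminated
```

The design choice is the point: because exposure is computed over tasks rather than assigned to the job as a whole, a high score predicts recomposition of the bundle, not disappearance of the occupation.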
The framework emerged in the early 2000s from Autor's collaboration with Frank Levy and Richard Murnane, whose 2003 paper 'The Skill Content of Recent Technological Change: An Empirical Exploration' established the canonical distinction between routine and non-routine tasks. The breakthrough was methodological: by mapping tasks rather than jobs, the authors could explain why computerization had produced job polarization rather than uniform displacement. Routine middle-skill tasks — bookkeeping, assembly-line work, basic data processing — collapsed in value as computers performed them at lower cost. Non-routine cognitive tasks at the top and non-routine manual tasks at the bottom proved resistant to automation, producing the hollowing-out pattern that has defined Western labor markets for four decades.
The framework's application to AI represents its most consequential test. Previous waves of computerization automated tasks that could be reduced to explicit, codifiable rules; the large language models of the 2020s perform tasks long thought to resist codification, such as writing, summarizing, coding, and designing. The reason is that machine learning sidesteps the codification requirement: instead of a programmer articulating the rules, the model infers them from examples. This has forced the framework to evolve. The routine/non-routine boundary, once stable, now shifts dynamically as AI capabilities expand. Tasks that were non-routine in 2020, such as legal research, medical diagnosis, and architectural drafting, have become partially routine by 2026. The framework remains intact, but its application requires continuous recalibration of which tasks sit on which side of the automation frontier.
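What recalibration means in practice can be sketched the same way: hold the task scores fixed and let the frontier move. The scores and threshold values here are invented; the point is only that classification is a function of a moving boundary, not a fixed property of the task.

```python
# Illustrative only: each task carries a score for how tractable it is to
# current AI, and the automation frontier is a threshold that falls as
# capabilities expand. Tasks cross from non-routine to routine when the
# frontier passes them.

task_scores = {
    "bookkeeping":            0.95,
    "legal research":         0.70,
    "architectural drafting": 0.65,
    "system architecture":    0.30,
}

def classify(scores: dict, frontier: float) -> dict:
    """Label each task relative to the current automation frontier."""
    return {task: "routine" if score >= frontier else "non-routine"
            for task, score in scores.items()}

print(classify(task_scores, frontier=0.90))  # circa 2020: only bookkeeping is routine
print(classify(task_scores, frontier=0.60))  # circa 2026: research and drafting have crossed
```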
Autor's task-based thinking provides the analytical foundation for Segal's intuitions in The Orange Pill. Where Segal describes ascending friction phenomenologically — the engineer who no longer struggles with syntax struggles instead with architecture — Autor's framework specifies the mechanism: syntax is a routine task AI has absorbed; architecture is a non-routine task that remains human, at least for now. The twenty-fold productivity gains Segal observed at Trivandrum reflect not enhancement of human capability but substitution of AI for routine components of engineering work, freeing human attention for the non-routine components.
Autor developed the framework during the late 1990s and early 2000s as a direct response to the inadequacy of the prevailing skill-biased technological change (SBTC) hypothesis, which predicted that technology would uniformly benefit educated workers. SBTC could not explain why the wage distribution was hollowing out rather than simply stretching. The task-based framework resolved this puzzle by shifting the unit of analysis from workers (skilled vs. unskilled) to tasks (routine vs. non-routine), allowing heterogeneous effects within skill categories. Four principles anchor the framework.
Jobs are task bundles. No occupation is a monolith; each consists of multiple distinct tasks with different automation susceptibilities, and technology operates on tasks, not jobs as wholes.
Substitution is partial. AI substitutes for some tasks within an occupation while complementing others, producing transformation rather than elimination of most jobs.
The frontier moves. Which tasks count as routine versus non-routine is not fixed; AI capabilities continuously redraw the boundary, requiring dynamic rather than static analysis.
Task reallocation is the mechanism. Productivity gains emerge from reallocating human effort from automatable tasks to tasks that remain non-routine, which is why AI can produce twenty-fold multipliers without twenty-fold unemployment.
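The arithmetic behind that last claim can be made explicit under a deliberately crude assumption: AI fully absorbs the routine share of the time each unit of output requires, and the human supplies only the non-routine remainder. The 0.95 below is chosen to reproduce Segal's twenty-fold figure, not measured from anything.

```python
def output_multiplier(routine_share: float) -> float:
    """Output per human hour if AI absorbs the routine share of the time
    each unit of output used to require. A crude reallocation model, not
    Autor's: time per unit falls from 1 to (1 - routine_share)."""
    return 1.0 / (1.0 - routine_share)

print(output_multiplier(0.95))  # 20.0x if 95% of the work was routine
print(output_multiplier(0.50))  #  2.0x if half the bundle resists automation
```

The non-routine residual is the binding constraint: the multiplier explodes only as the routine share approaches one, which is why the framework predicts rising returns to precisely the tasks that remain human.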
The framework has been criticized for treating task automation susceptibility as a technical property rather than a socially negotiated one. Critics including Daron Acemoglu argue that which tasks get automated depends as much on institutional choice as on technological capability — firms can design work to complement workers or replace them, and the framework does not adequately theorize this choice.