The Jevons Paradox of Intelligence extends William Stanley Jevons's 1865 observation about coal to computational cognition. Jevons demonstrated that as steam engines became more efficient—extracting more work per ton of coal—total coal consumption increased because efficiency reduced cost, expanding the range of economically viable applications. Smil applies this structural pattern to AI: as tools make knowledge workers more productive, they do not reduce total computational demand but expand it. Workers take on more tasks, tackle adjacent domains, fill pauses with prompts—the Berkeley study documented this empirically. The twenty-fold productivity multiplier Segal celebrates translates, under sustained behavioral patterns, into twenty-fold growth in inference queries, token generation, GPU-hours, electricity consumption, heat dissipation, and cooling water evaporation. Efficiency gains per operation are real and substantial; they are outpaced by demand growth, producing net increases in aggregate resource consumption. The paradox operates at individual and systemic scales simultaneously, making it one of the most reliable structural predictors of AI's physical footprint.
Jevons documented the coal paradox in The Coal Question (1865), warning that Britain's coal reserves were finite and that efficiency improvements, rather than conserving coal, were accelerating its depletion. The thesis was controversial—classical economics assumed efficiency would reduce consumption—but it was empirically validated over subsequent decades. Smil has extended the pattern across lighting (LED efficiency increased 20x, total lighting electricity tripled), automotive fuel (efficiency up 30%, total gasoline consumption rose), refrigeration (unit efficiency improved 75%, total consumption held constant because units grew larger and more numerous), and air conditioning (efficiency doubled, total consumption quadrupled because deployment expanded geographically and temporally). The pattern is not universal but dominant: efficiency gains reduce total consumption only in mature, saturated markets where demand cannot expand. When demand is elastic—when new users, new applications, and new scales of operation become accessible—the rebound dominates.
The AI productivity literature provides direct evidence. Brynjolfsson, Li, and Raymond's 2023 study of customer service agents using generative AI found productivity gains of 14% on average, 34% for the least experienced workers—but no reduction in total work hours or total queries handled. Workers used the freed time to handle more complex cases, expand service scope, and increase throughput. The Berkeley ethnography by Ye and Ranganathan documented workers expanding AI use into lunch breaks, elevator rides, gaps between meetings—'task seepage' colonizing every available moment. Nat Eliason's public testimony: 'I have never worked this hard.' The pattern is not confined to outliers: surveys across knowledge work sectors find that AI adopters report working more, not less, with higher subjective intensity and no decrease in total hours. The efficiency created capacity; the capacity was immediately absorbed into expanded output.
The thermodynamic consequence follows mechanically. If a developer generates twenty times more code using AI, and the code quality is high enough to ship, the computational demand of serving that developer has increased roughly twentyfold. Each prompt requires inference computation. Each iteration requires additional tokens. The aggregate electricity, cooling, and water costs scale approximately with aggregate computational demand, modified by efficiency improvements that are significant (5-10% annual gains in energy per operation) but smaller than demand growth (estimates ranging from 50-100%+ annual growth in AI computational workloads 2023-2026). The per-operation cost falls. The total cost rises. The pattern is Jevons, realized in joules and liters rather than tons of coal.
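The arithmetic above can be sketched directly. The following is an illustrative calculation using the mid-range of the figures the text cites (roughly 5-10% annual per-operation efficiency gains against roughly 50-100% annual demand growth); the specific inputs are assumptions for illustration, not measurements.

```python
def net_energy_growth(demand_growth, efficiency_gain, years):
    """Total energy multiplier after `years`, given annual demand growth
    and annual per-operation efficiency gain (both as fractions)."""
    per_op_energy = (1 - efficiency_gain) ** years   # energy per operation falls
    total_ops = (1 + demand_growth) ** years         # operations grow faster
    return per_op_energy * total_ops

# Assumed mid-range values: 75% annual demand growth, 7.5% annual efficiency gain.
multiplier = net_energy_growth(demand_growth=0.75, efficiency_gain=0.075, years=3)
print(f"Total energy after 3 years: {multiplier:.1f}x baseline")  # → 4.2x
```

Per-operation cost falls by about a fifth over the three years, yet total consumption more than quadruples: the Jevons rebound in miniature.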
The systemic amplification exceeds individual amplification because it includes user base growth. Each individual developer becoming 20x more productive contributes to demand growth. The expansion from millions to hundreds of millions of AI users contributes orders of magnitude more. The extension of AI into new domains—healthcare diagnostics, legal research, scientific simulation, educational tutoring, creative production—contributes further. The compound growth rate of total computational demand is the product of per-user intensity growth, user-base expansion, and domain proliferation. Smil's framework predicts this compound growth will continue until it encounters a binding physical constraint—energy availability, chip supply, water resources, or heat dissipation capacity. Which constraint binds first determines the shape of the AI S-curve's eventual deceleration.
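The multiplicative structure of that compound growth can be made explicit. The numbers below are hypothetical placeholders chosen to match the text's orders of magnitude (a 20x intensity gain, a user base growing from millions to hundreds of millions, a handful of new domains); they are not Smil's figures.

```python
def total_demand(intensity_mult, user_mult, domain_mult):
    """Aggregate computational demand relative to baseline: the product of
    per-user intensity growth, user-base expansion, and domain proliferation."""
    return intensity_mult * user_mult * domain_mult

# Hypothetical inputs: each user 20x more intensive, 100x more users,
# AI deployed across 5x as many domains.
growth = total_demand(intensity_mult=20, user_mult=100, domain_mult=5)
print(f"Total demand: {growth:,}x baseline")  # → Total demand: 10,000x baseline
```

Because the factors multiply rather than add, even modest growth along each axis compounds into demand growth that dwarfs any plausible per-operation efficiency gain.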
William Stanley Jevons articulated the efficiency paradox in The Coal Question: An Inquiry Concerning the Progress of the Nation, and the Probable Exhaustion of Our Coal-Mines (1865), motivated by his concern that Britain's industrial supremacy rested on finite coal reserves that improved engines were depleting ever faster. Economists debated the thesis for decades; empirical validation came through the twentieth century's overwhelming evidence that efficiency and total consumption rose together across virtually every energy technology. The term "Jevons Paradox" entered common usage in environmental economics in the 1980s-1990s.
Smil's application to AI is implicit in his February 2026 Bankinter webinar and explicit in Chapter 7 of the Vaclav Smil—On AI simulation. The framework synthesizes his documentation of efficiency rebounds across Energy Transitions, Energy and Civilization, and Energy Myths and Realities. The AI-specific evidence comes from the Berkeley study, Brynjolfsson's customer-service research, and aggregate data center energy consumption trends showing acceleration despite chip efficiency improvements. The synthesis is structurally identical to Jevons's original argument, transposed from coal tonnage to computational demand measured in kilowatt-hours and inference queries.
Efficiency enables expansion. Productivity tools that make work cheaper per unit do not reduce total work when demand is elastic; they expand the domain of economically viable tasks, increasing aggregate resource consumption.
Individual and systemic scales. The paradox operates at both levels—each user works more (individual rebound) and the user base expands (systemic rebound)—producing compound growth in total computational demand that outpaces per-operation efficiency gains.
Thermodynamic inevitability. Total energy consumption tracks total computation; efficiency improvements reduce energy per operation but not total energy when total operations grow faster—a pattern governed by thermodynamics and demand elasticity, not by software optimization.
No saturation evidence. AI demand shows no signs of approaching saturation; workers fill every available moment, new users adopt at record rates, new domains open continuously—suggesting demand remains highly elastic and that rebound effects will continue to dominate efficiency gains.
Historical precedent reliability. The pattern held for coal, gasoline, lighting, refrigeration, air conditioning, and every other energy-consuming technology Smil examined; the burden of proof falls on those claiming AI will be the exception.