By Edo Segal
I didn't find Jevons. Jevons found me.
It was three in the morning, and I was watching my AI agents build something I'd sketched on a napkin six hours earlier. A full prototype — working, deployable, genuinely good. I should have felt triumphant. Instead I felt a creeping dread I couldn't name. Because I wasn't stopping. The prototype was done, but my mind was already forking into the next three things it made possible, and the three things after that, and I realized I would not sleep tonight — not because I couldn't, but because the cost of sleeping had changed. Every hour unconscious was now an hour where something extraordinary could have existed and didn't.
I described this feeling to a friend — a historian, not a technologist — and she said, without hesitation, "You should read Jevons."
I read *The Coal Question* in a single sitting. And then I read it again, slower, with the particular nausea that comes from recognizing your own condition described by a man who died in 1882.
Jevons saw it all. Not AI, not code, not the specific texture of our moment — but the *mechanism*. The bone-deep logic of what happens when you make a critical resource dramatically more efficient to use. You don't use less of it. You use more. You use so much more that the efficiency gain becomes, in aggregate, an acceleration engine. And you do it not because you're foolish or greedy, but because you're rational. Because at every individual margin, using more is the obvious choice. The trap is made of good decisions.
When I wrote about the imagination-to-artifact ratio in *The Orange Pill*, I was describing the demand side of this equation — what happens when creation becomes nearly free. Jevons gave me the supply-side architecture, the formal structure underneath my lived experience. He showed me that what I was feeling at three in the morning wasn't new. It was the same thing a Manchester factory owner felt in 1850 when his efficient new engine made it irrational to stop running it. The resource was different. The math was the same.
This book exists because the Jevons paradox is no longer about coal or electricity or computing cycles. It is about *us* — about human cognition, human attention, human time. AI has made thinking cheaper per unit than it has ever been in the history of our species. And if Jevons is right — and one hundred sixty years of evidence says he is — we will not think less. We will think more, and more, and more, until the human mind itself becomes the scarce resource being consumed by its own efficiency.
Jevons didn't have a solution. He had a warning. I think that's enough to start with.
— Edo Segal × Opus 4.6
William Stanley Jevons (1835–1882) was an English economist, logician, and philosopher of science whose work fundamentally reshaped the discipline of economics. Born in Liverpool to a prosperous Unitarian family of ironmongers, he studied chemistry and mathematics at University College London before spending five years as an assayer at the Royal Mint in Sydney, Australia — an experience that sharpened his empirical instincts and his taste for quantitative reasoning. Returning to England, he produced *The Coal Question* (1865), which brought him to the attention of William Gladstone and Parliament, and *The Theory of Political Economy* (1871), in which he arrived independently at the theory of marginal utility, launching the marginal revolution in economics alongside Léon Walras and Carl Menger. Jevons also made significant contributions to formal logic, constructing a mechanical "logic piano" that anticipated aspects of modern computing, and to the philosophy of scientific method through *The Principles of Science* (1874). He was elected a Fellow of the Royal Society in 1872 and held professorships at Owens College, Manchester, and University College London. He drowned while swimming near Hastings at the age of forty-six, leaving behind an extraordinary body of work whose most unsettling insight — that efficiency accelerates consumption rather than restraining it — has only grown more consequential with time.
In the winter of 1865, a thirty-year-old economist published a book that should have changed the world. It did not. William Stanley Jevons's The Coal Question: An Inquiry Concerning the Progress of the Nation, and the Probable Exhaustion of Our Coal-Mines arrived in a Britain drunk on its own industrial supremacy, a nation that burned more coal per capita than any civilization in history and saw in that consumption not a problem but a proof of greatness. Jevons looked at the same furnaces, the same railways, the same steam engines roaring across the Midlands, and saw something his contemporaries could not: that the very mechanism they believed would save them — the steady improvement of engine efficiency — was the mechanism accelerating their consumption toward a cliff. The book sold modestly. William Gladstone read it and raised taxes on coal exports. The mines kept burning. And the paradox Jevons identified — that efficiency in resource use increases rather than decreases total consumption — went on operating in silence for the next hundred and sixty years, unremarked and unrefuted, until it found its most consequential application not in the coalfields of Victorian England but in the cognitive labor markets of the twenty-first century.
The argument Jevons constructed was, in retrospect, embarrassingly simple. It was simple the way gravity is simple — obvious once stated, invisible until someone bothers to state it. James Watt's improved steam engine, introduced in the late eighteenth century, consumed roughly one-third the coal of the Newcomen engine it replaced. The intuitive conclusion, the conclusion that nearly every engineer and politician and editorialist drew, was that this efficiency would extend Britain's coal reserves. Less coal per unit of work meant less coal consumed in total. It was arithmetic. It was common sense. It was wrong.
Jevons's empirical contribution was to demonstrate, with the meticulous data-gathering that characterized his scientific method, that coal consumption in Britain had not declined after Watt's improvements. It had exploded. Between 1830 and 1863, annual coal consumption in Britain rose from approximately thirty million tons to over eighty-six million tons. The improved steam engine had not conserved coal. It had made coal useful — useful for applications that had been economically impossible when engines were inefficient. Factories that could not afford to run Newcomen engines could afford Watt engines. Industries that had relied on water power or human muscle now found steam power cost-effective. Railways became viable. Steamships became viable. The entire industrial geography of Britain reorganized itself around cheap steam power, and that reorganization demanded coal in quantities that dwarfed anything the efficiency gains had saved.
The mechanism, as Jevons articulated it, operated through what modern economists would come to call the price elasticity of demand. When a technology makes a resource more efficient to use, it effectively lowers the cost of the service that resource provides. A more efficient engine means cheaper motive power. Cheaper motive power means more people, more industries, more applications can afford motive power. Demand expands. And if demand is sufficiently elastic — if there are enough latent applications waiting to become economical — the expansion of demand overwhelms the per-unit savings. Total consumption rises. The efficient technology has not conserved the resource. It has unlocked it.
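The mechanism can be made concrete with a toy constant-elasticity model. This is an illustrative simplification, not a calculation from The Coal Question; the function names, the demand form, and the elasticity values are all assumptions. The point it demonstrates is the hinge of the paradox: when the price elasticity of demand for the resource's services exceeds one, an efficiency gain raises rather than lowers total resource consumption.

```python
# Toy constant-elasticity model of the Jevons effect.
# Assumed demand form: service_demand = scale * service_price ** (-elasticity).

def resource_consumption(efficiency, elasticity, resource_price=1.0, scale=100.0):
    """Total resource consumed when an engine converts resource into service.

    A more efficient engine lowers the effective price of the service;
    demand for the service responds with constant price elasticity.
    """
    service_price = resource_price / efficiency           # cheaper motive power
    service_demand = scale * service_price ** -elasticity  # demand expands
    return service_demand / efficiency                     # resource actually burned

# Watt's engine: roughly three times the efficiency of Newcomen's.
for eps in (0.5, 1.0, 1.5):
    before = resource_consumption(efficiency=1.0, elasticity=eps)
    after = resource_consumption(efficiency=3.0, elasticity=eps)
    print(f"elasticity={eps}: consumption {before:.1f} -> {after:.1f}")
```

With elasticity below one, the per-unit savings win and consumption falls; above one, the expansion of demand overwhelms them and consumption rises. Jevons's empirical claim was that Victorian Britain's demand for steam power sat firmly in the second regime.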
"It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption," Jevons wrote. "The very contrary is the truth." The sentence is remarkable for its directness. There is no hedging, no qualification, no deferential gesture toward the prevailing wisdom. Jevons was a young man making an extraordinary claim — that the entire political and industrial establishment was reasoning backward about the most important resource in the British economy — and he stated it as flatly as a mathematical proof. Because, in his mind, it was one.
What made Jevons unusual among economists of his era was the marriage of theoretical reasoning with empirical obsession. He did not merely assert the paradox. He documented it. The Coal Question is dense with production figures, consumption statistics, trade data, mining depth records, and comparative analyses across industries and nations. Jevons tracked coal consumption per capita across decades. He compared British efficiency improvements with those on the European continent. He examined individual industries — iron, textiles, railways, domestic heating — and showed that in every case, improvements in fuel efficiency had preceded increases, not decreases, in total fuel consumption. The pattern was not anomalous. It was universal.
The theoretical framework supporting this empirical work drew on Jevons's broader contributions to economic thought. By the mid-1860s, Jevons was already developing the marginal utility theory that would become his most celebrated intellectual achievement — the insight that the value of a good is determined not by the total utility it provides but by the utility of the last unit consumed. This framework illuminated the paradox from a different angle. When coal becomes more efficient to use, the marginal cost of coal-derived services falls. Consumers and producers respond to marginal costs, not total costs. The marginal decision — to run the factory an extra hour, to heat one more room, to ship goods by rail rather than canal — shifts in favor of coal consumption with every improvement in efficiency. Multiply that marginal shift across millions of decisions, and the aggregate effect is enormous.
The integration of marginal analysis with the efficiency paradox reveals the mechanism at its most granular level. No single factory owner decides to consume vastly more coal. Each one makes a small, rational, marginal decision to use slightly more, because the cost has fallen slightly. The paradox is not produced by irrationality or waste. It is produced by millions of rational actors responding optimally to price signals. This is what makes the Jevons paradox so intractable: it is not a market failure. It is the market working exactly as it should.
One hundred and sixty years after Jevons published The Coal Question, the resource in question is no longer coal. It is human cognitive labor — the capacity to think, design, write, code, analyze, decide. And the efficiency-improving technology is not the steam engine. It is artificial intelligence.
The structural parallel is precise enough to be unsettling. Consider the data that has accumulated since large language models and AI coding assistants became widely available. The Harvard Business Review study conducted in the early deployment period of AI workplace tools found that AI increased work intensity rather than reducing it. Workers equipped with AI assistants did not complete their existing tasks more quickly and then rest. They completed their existing tasks more quickly and then took on more tasks. The total cognitive output per worker rose. The total hours of cognitive labor did not fall. In many cases, they increased.
This is the Jevons paradox operating on cognition rather than coal. The AI assistant made each unit of cognitive work cheaper to produce. The rational, marginal response — for individuals, for managers, for organizations — was to produce more cognitive work. An engineer who could build a feature in two days instead of ten did not take eight days off. She built four more features. A writer who could draft a report in an afternoon instead of a week did not leave early. She drafted four more reports. Each decision was individually rational. Collectively, they produced the paradox.
The phenomenon Edo Segal documented in The Orange Pill — what he termed the imagination-to-artifact ratio — provides the specific vocabulary for this mechanism in the cognitive domain. When AI collapses the distance between intention and realization, the cost of converting an idea into a tangible output approaches zero. The marginal cost of the next artifact — the next feature, the next design, the next prototype — falls to a level that makes production irresistible. And so production accelerates. Not because anyone is being irrational or exploited, but because the efficiency gain has made cognitive production so cheap that demand for it expands to fill every available space, and then creates new spaces that did not previously exist.
The Substack post that became emblematic of this dynamic — "Help! My Husband is Addicted to Claude Code" — reads, through a Jevonsian lens, not as a story about technology addiction but as a case study in the paradox of efficient resource use. The husband was not addicted to the tool. He was responding rationally to a dramatic reduction in the marginal cost of creative production. Every project he could now complete in hours rather than weeks made the cost of not completing it painfully visible. His leisure time had not been stolen by the technology; it had been repriced. Doing nothing now cost all the things he could have been building, because doing something had become nearly free. The efficiency gain had not liberated him from work. It had made work the path of least economic resistance.
Jevons would have recognized this instantly. He had seen it in the factory owners who ran their efficient engines around the clock, not because they were greedy but because the efficiency of the engines made not running them the more costly choice. He had seen it in the railways that expanded their routes not because the existing routes were insufficient but because the efficiency of steam locomotion made new routes economically viable. Efficiency does not create rest. Efficiency creates appetite.
The uncomfortable implication — the one that Jevons himself confronted and that the twenty-first century has thus far avoided — is that no amount of efficiency improvement, by itself, solves the problem of resource depletion. In Jevons's time, the resource was coal, and his warning was that Britain could not innovate its way out of consumption growth. In the present, the resource is human attention, human cognitive capacity, human time — the finite substrate upon which the infinite expansion of AI-enabled productivity makes its demands. If the Jevons paradox holds — and one hundred sixty years of evidence suggests it does — then AI will not liberate humanity from cognitive labor. It will intensify cognitive labor until the human becomes the scarce resource being consumed.
This is not a prediction. It is a pattern. It operated on coal in the nineteenth century, on electricity in the twentieth, on digital computation at the turn of the twenty-first, and it is operating now on the most intimate resource of all: the human capacity to think, to choose, to create, to pay attention. Jevons saw it first, and he saw it with the clarity of a man who trusted his data more than his hopes. The question he left behind — whether humanity can escape the trap of its own efficiency — remains unanswered. The AI era has merely raised the stakes.
Every paradox has an anatomy. It has bones — the logical structure that gives it shape. It has organs — the mechanisms that make it function. And it has skin — the surface appearance that conceals the workings beneath and makes the paradox look, to the casual observer, like an impossibility rather than an inevitability. The Jevons paradox is no exception. To understand why efficiency in cognitive labor produces more labor rather than less, the paradox must be opened up, its components laid bare, its mechanism traced from the molecular level of individual economic decisions to the systemic level of civilizational transformation.
Jevons himself was a dissector by temperament. His approach to economic questions combined the mathematician's demand for formal precision with the empiricist's insistence on observable evidence. He was among the first economists to argue that economics could and should be a mathematical science, and his 1871 work The Theory of Political Economy reformulated the theory of value in terms of calculus — specifically, in terms of the differential calculus of marginal quantities. This mathematical instinct is essential for understanding the paradox, because the paradox operates through marginal decisions, not aggregate intentions. No one decides to consume more of a resource that has become more efficiently used. Everyone decides, at the margin, to consume a little more. The aggregate catastrophe — or the aggregate miracle, depending on one's perspective — is the emergent result.
The first layer of the anatomy is what modern economists call the direct rebound effect. When an efficiency improvement reduces the cost of using a resource, consumers use more of it. This is the simplest and most intuitive component of the paradox, though even this component surprises people who have not thought carefully about price and demand. Jevons documented the direct rebound in coal with characteristic precision: as the cost of steam power fell per unit of output, factory owners expanded their output. They ran machines longer. They heated larger spaces. They applied steam power to processes that had previously relied on human or animal labor. Each of these decisions was a direct response to the reduced cost of coal-derived energy. The aggregate effect was increased coal consumption, but no individual actor experienced himself as "consuming more coal." Each experienced himself as "taking advantage of cheaper power."
The direct rebound effect in cognitive labor follows identical logic. When an AI assistant reduces the cost of producing a unit of cognitive output — a line of code, a paragraph of analysis, a design iteration — the producer creates more units. The Harvard Business Review findings and the behavioral evidence documented in the Orange Pill convergence reports confirm this at scale. But the individual experience is not one of consuming more cognitive resources. It is one of getting more done. The developer using Claude Code does not perceive herself as depleting her cognitive reserves faster. She perceives herself as being productive. The increased consumption is real, but it is invisible at the level of individual experience — visible only in the aggregate data on hours worked, output produced, and work intensity levels.
The second layer is the indirect rebound effect. When efficiency reduces the cost of using a resource, the money saved is not destroyed. It is spent on other goods and services, many of which also require the resource in question. A factory owner who saves money on coal by running more efficient engines may invest those savings in expanding the factory — which requires more coal. Or he may invest in transportation — which requires coal for railways and steamships. Or he may simply increase wages, and his workers may heat their homes more liberally. The savings circulate through the economy and, through multiple channels, generate additional demand for the resource that was "saved."
The indirect rebound in the cognitive domain is subtler but equally powerful. When an organization reduces the cost of producing software by deploying AI coding tools, the money saved does not vanish. It funds new projects — projects that also require cognitive labor. It enables expansion into new markets, new products, new services — all of which generate demand for exactly the kind of cognitive work that AI has made cheaper. The cost savings from AI-assisted development at a technology company become the seed funding for new development projects at that same company. The efficiency gain in one department becomes a productivity expectation in every department. The resource — human cognitive labor — is not conserved. It is reallocated and then supplemented, and the total consumption rises.
Jevons was acutely aware of this circulatory dynamic. In The Coal Question, he traced the indirect effects of coal efficiency through the entire British economy, showing how savings in one sector became consumption in another. The economy, in his analysis, was not a collection of independent actors making isolated decisions but an interconnected system in which efficiency gains propagated through feedback loops. This systems-level thinking — unusual for a mid-nineteenth-century economist — is what gave his paradox its force. The rebound was not a one-time event. It was a self-reinforcing process. Efficiency lowered costs, which increased demand, which stimulated further investment in efficient technologies, which lowered costs further, which increased demand further. The positive feedback loop continued until the resource was consumed at rates that made the original "efficiency gains" look like a rounding error.
The third and deepest layer of the anatomy is what might be called the structural transformation effect. This is the component that distinguishes the Jevons paradox from a mere empirical curiosity and elevates it to a principle of civilizational dynamics. When efficiency improvements make a resource sufficiently cheap, entirely new economic structures emerge that depend on that cheapness. These structures did not exist before the efficiency improvement and cannot exist without it. They are not expansions of existing demand but creations of new demand — demand that was literally inconceivable in the pre-efficiency world.
Before the Watt engine, the British economy had no railway system. It had no steamship fleet. It had no mass-production manufacturing sector. These structures did not exist because they were economically impossible without cheap steam power. Watt's efficiency improvement did not merely expand existing coal consumption. It created entirely new categories of consumption — categories that consumed coal at scales that the pre-Watt economy could not have imagined. This structural transformation accounts for the most dramatic portion of the Jevons paradox. The direct and indirect rebound effects might, in theory, be contained by policy or behavioral change. The structural transformation effect cannot be contained, because it represents the emergence of a genuinely new economic order that depends on the very consumption pattern one might wish to reduce.
The structural transformation in cognitive labor is already visible, and Jevons's framework predicts its acceleration. AI has not merely made existing cognitive work more efficient. It has created entirely new categories of cognitive work that did not previously exist. The "vibe coder" — the non-technical entrepreneur who builds functional software through natural language interaction with an AI — is not an existing role made more efficient. It is a new role that exists only because AI has reduced the cost of cognitive production below the threshold at which this kind of work becomes possible. The Orange Pill documents this emergence with vivid specificity: the Figma designer building production applications, the fourteen-year-old creating functional tools, the humanities graduate prototyping SaaS products. None of these activities existed before AI made them economically viable. They are the railways and steamships of the cognitive revolution — new structures of production that depend on cheap cognitive output and that consume that output in quantities that dwarf anything saved by the efficiency improvement that created them.
Jevons's marginal utility theory adds a further dimension to the anatomy. As the quantity of any good or service increases, the marginal utility of each additional unit declines. The ten thousandth AI-generated image provides less satisfaction than the first. The hundredth AI-assisted feature in a software product provides less value than the tenth. This declining marginal utility should, in theory, slow the expansion of demand. It does not — or rather, it does not slow it enough. The reason is that while marginal utility declines, marginal cost declines faster. AI makes the production of cognitive output so cheap that even goods with very low marginal utility are worth producing. The hundredth feature may provide minimal value, but if it costs almost nothing to build, the expected value still exceeds the expected cost. The result is an economy flooded with marginally valuable cognitive output — competent code, adequate writing, serviceable design — produced in enormous quantities precisely because its low marginal utility is offset by its even lower marginal cost.
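The arithmetic of that offset is easy to sketch. In the toy model below, the assumed forms (marginal utility falling as 1/n, a flat marginal cost per unit) are illustrative assumptions, not figures from the text. A rational producer keeps going while the next unit is worth more than it costs, so each order-of-magnitude collapse in marginal cost produces roughly an order-of-magnitude more output.

```python
# Sketch: production stops where marginal utility falls below marginal cost.
# Assumed toy forms: MU(n) = 1/n for the n-th unit, constant cost per unit.

def units_produced(marginal_cost):
    """Count units produced while the next one is still worth more than it costs."""
    n = 0
    while 1.0 / (n + 1) > marginal_cost:
        n += 1
    return n

for mc in (0.5, 0.1, 0.01, 0.001):
    print(f"marginal cost {mc}: {units_produced(mc)} units worth producing")
```

Diminishing marginal utility never stops the flood on its own; it only sets the water line, and every further collapse in production cost raises it.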
This dynamic has a name in the Orange Pill framework: the tsunami of content, the flood of adequate production that makes scarcity the new premium. Jevons's marginal analysis explains why the flood is inevitable. When production costs approach zero, the rational response is to produce everything that has any positive marginal value, no matter how small. The market does not discriminate between the profoundly valuable and the barely worthwhile when both cost the same to produce. The result is an abundance so vast that it transforms the meaning of value itself: in a world where competent output is free, only the exceptional commands a premium.
The final structural element of the paradox involves time — specifically, the relationship between efficiency gains and temporal horizons. Jevons observed that efficiency improvements often produce short-term reductions in consumption that are reversed over the medium and long term. A new, more efficient engine might initially reduce coal consumption at a specific factory. But over time, as the factory expands, as competitors adopt the same technology, as new applications emerge, consumption rises above its pre-efficiency level and continues to climb. The short-term reduction is real but misleading. It creates the illusion that efficiency is working as intended, when in fact the paradox is merely winding up.
The same temporal structure is visible in the deployment of AI cognitive tools. Early adoption often produces apparent efficiency gains: teams shrink, timelines compress, costs fall. These short-term effects are real and measurable. But the medium-term effects — the expansion of ambition, the proliferation of new projects, the creation of new roles and new markets — overwhelm the initial savings. Jevons's paradox is a patient mechanism. It allows the efficiency narrative to seem true for a season before revealing its full implications. The organizations celebrating their reduced headcounts and compressed timelines are living in the short-term phase. The Jevonsian reckoning — the discovery that total cognitive demands have increased, not decreased — arrives later, and it arrives everywhere at once.
Understanding the anatomy of the paradox does not, by itself, suggest a solution. Jevons himself offered no solution to the coal question beyond the grim acknowledgment that Britain's industrial supremacy was built on a finite resource being consumed at an accelerating rate. The cognitive analogy raises its own grim question: if the finite resource is human attention, human creative capacity, human time — and if efficiency improvements accelerate rather than reduce the consumption of these resources — then the endpoint is not a depleted coal mine. It is a depleted human. The anatomy of the paradox reveals not a flaw in AI but a property of efficiency itself, operating with the same implacable logic whether the fuel is anthracite or attention.
The printing press did not reduce the demand for scribes. It eliminated scribes and replaced them with an entirely new ecosystem — typesetters, publishers, booksellers, authors, editors, critics, censors, librarians — that consumed more literate labor than the medieval scriptoria had ever imagined possible. The camera did not reduce the demand for visual representation. It destroyed the portrait miniature industry and replaced it with a visual culture so vast that by the twenty-first century, humanity was producing more images every two minutes than had existed in all of human history prior to 1826. The phonograph did not reduce the demand for musical performance. It annihilated the economic model of live performance as the primary revenue source for musicians and created a recorded-music industry that eventually generated hundreds of billions of dollars in annual revenue, employing vastly more people in the production, distribution, and consumption of music than the pre-recording era had sustained.
Each of these technologies made the production of a specific category of output more efficient. Each was greeted, by at least some contemporaries, with the prediction that it would reduce the total demand for the human labor it augmented. Each produced the opposite result. The pattern is so consistent, so thoroughly documented across five centuries of technological change, that it deserves to be called what it is: a law. Not a tendency. Not a frequently observed outcome. A law — as reliable in its operation as the law Jevons derived from coal consumption data in Victorian England.
The term that modern economists use for this phenomenon is the rebound effect, and its lineage traces directly to Jevons's original insight. The rebound effect describes the empirical regularity that efficiency improvements in resource use are partially — and sometimes more than fully — offset by behavioral and systemic responses that increase total consumption. When the rebound exceeds one hundred percent — when total consumption after the efficiency improvement exceeds total consumption before — the result is called "backfire." Jevons's coal case was backfire. Most creative technology cases are backfire. And the AI case, the evidence strongly suggests, is backfire of a magnitude that dwarfs all previous instances.
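The rebound is simple arithmetic: realized savings measured against the savings the efficiency gain should have delivered. The definition below is the standard one from the energy-economics literature; pairing the text's thirty-to-eighty-six-million-ton coal figures with a two-thirds per-unit gain is an illustrative simplification of my own, not a calculation Jevons performed.

```python
# Standard rebound-effect arithmetic. A value above 1.0 (100%) is "backfire":
# total consumption rose despite the per-unit efficiency gain.

def rebound(consumption_before, consumption_after, efficiency_gain):
    """Rebound as a fraction of the engineering-expected savings.

    efficiency_gain: fractional reduction in resource needed per unit of
    service, e.g. 2/3 for an engine that uses one-third the fuel.
    """
    expected = consumption_before * efficiency_gain   # savings engineers predict
    actual = consumption_before - consumption_after   # savings that materialized
    return (expected - actual) / expected

# A Watt-style case: a two-thirds per-unit gain, while total use nearly tripled.
print(f"rebound = {rebound(30.0, 86.0, 2/3):.0%}")
```

Any value above 100 percent means the "savings" were not merely eaten; they were inverted into growth, which is exactly what the British coal data showed.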
The rebound effect in creative production follows a specific pattern that Jevons's framework illuminates with uncomfortable clarity. The pattern has three phases.
In the first phase, a new technology reduces the marginal cost of creative production. The printing press reduced the cost of reproducing text. The camera reduced the cost of producing images. Digital audio workstations reduced the cost of producing music. AI reduces the cost of producing code, prose, visual design, video, and analytical work. The cost reduction is genuine. The efficiency gain is real. In this phase, the optimists appear to be correct: less labor is required per unit of output, and total costs fall.
In the second phase, the reduced cost stimulates demand. New consumers appear. New applications emerge. New markets open. Gutenberg's press made books cheap enough for the emerging merchant class. Kodak's camera made photography cheap enough for amateurs. GarageBand made music production cheap enough for hobbyists. AI makes cognitive production cheap enough for everyone. Demand expands not linearly but exponentially, because the technology does not merely reduce costs for existing use cases — it creates entirely new use cases that were previously impossible. This is the structural transformation effect that Jevons identified, operating with relentless consistency across centuries and technologies.
In the third phase, the expanded demand requires new infrastructure, new labor, new capital, and new institutional arrangements — all of which consume the very resource that the original technology made more efficient. The printing press required paper mills, ink factories, distribution networks, bookshops, literacy programs, and eventually copyright law. Each of these secondary developments employed more literate labor than the press itself had displaced. The internet required data centers, network engineers, content moderators, web designers, digital marketers, and eventually an entire ecosystem of search engine optimization specialists — roles that would have been unintelligible to the engineers who built ARPANET. The third phase is where the rebound effect completes its work. The resource is not conserved. It is consumed at a higher rate, through a more complex system, by more actors, for more purposes, than anyone imagined possible when the efficiency technology was introduced.
AI's rebound effect on cognitive labor is currently transitioning from the second phase to the third. The cost reduction is established. The demand expansion is documented. The new roles are emerging: prompt engineers, AI trainers, synthetic data curators, human-AI collaboration designers, AI ethics consultants, vibe coders, AI-augmented creative directors. The infrastructure is being built: new development workflows, new deployment pipelines, new evaluation frameworks, new organizational structures designed around AI-augmented teams. Each of these developments consumes cognitive labor. The rebound is not a future risk. It is a present reality, and it is accelerating.
Jevons's empirical method demands that this claim be supported with evidence, not merely asserted. The evidence is extensive. *The Orange Pill* compiles case studies that, read through a Jevonsian lens, constitute a comprehensive documentation of the rebound effect operating in real time. Consider the developer who reported building a complete application — frontend, backend, database, deployment — in a weekend. The efficiency gain is staggering: what previously required a team of specialists working for months was accomplished by a single person in forty-eight hours. The naive expectation is that this efficiency gain would reduce the total demand for development labor. The actual result, documented across hundreds of similar cases, is that the developer immediately began building the next application. And the next. And the next. The efficiency did not create leisure. It created a production cascade — each completed project revealing new possibilities, each possibility generating new projects, each project consuming the cognitive resources that the efficiency gain had supposedly freed.
The aggregate data confirms the case-study evidence. GitHub reported that AI-generated code contributions grew from a negligible fraction to four percent of all commits within the early period of AI coding assistant deployment, and the trajectory was exponential. This four percent did not replace four percent of human commits. Total commits increased. The AI contributions were additive, not substitutive — new code that would not have been written at all without the efficiency of AI assistance. This is the clearest possible demonstration of the Jevons paradox in the cognitive domain: the efficiency technology produced more total output, not the same output at lower cost.
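The additive arithmetic is worth making explicit. With illustrative numbers (mine, not GitHub's): suppose a baseline of one million human commits per month grows to a total of 1.1 million, of which four percent are AI-generated. The human share has not shrunk but grown:

```latex
H' = (1 - 0.04)\,T = 0.96 \times 1{,}100{,}000 = 1{,}056{,}000 > 1{,}000{,}000 = H
```

Substitution would require total commits to hold roughly steady while the AI share displaced human ones; the observed pattern is the opposite.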
But the rebound effect in cognitive labor has a dimension that coal-era rebounds did not: the speed of iteration. When Watt's engine improved coal efficiency, the rebound played out over decades. New applications emerged gradually. New industries developed over generations. The temporal buffer gave society time — not enough time, as Jevons warned, but some time — to observe and potentially respond to the increasing consumption. AI offers no such buffer. The imagination-to-artifact ratio collapses not over decades but over months. A concept that exists at breakfast can be a functional prototype by dinner. The rebound effect, which in previous technological epochs unfolded across the timescale of institutional adaptation, now operates at the timescale of individual ambition. The result is a rebound so fast that it outpaces the capacity of individuals, organizations, and societies to register that it is happening.
The speed dimension connects Jevons's analysis to *The Orange Pill*'s documentation of what might be called cognitive consumption spirals — recursive loops in which efficiency enables production, production reveals possibility, possibility generates ambition, ambition drives consumption, and consumption demands more efficiency. The developer addicted to Claude Code is caught in such a spiral. The efficiency of the tool reveals how much is possible; the awareness of possibility creates an obligation to produce; the production creates awareness of further possibility; the cycle continues until the human — the finite resource at the center of the loop — approaches exhaustion. This is the rebound effect operating not on an economy but on an individual, and it is the human-scale manifestation of the pattern Jevons identified at the civilizational scale.
The historical evidence suggests that rebound effects are strongest when three conditions are met: the efficiency improvement is large, the latent demand for the resource's services is high, and the resource's services can be applied to a wide range of uses. All three conditions are met, to an extraordinary degree, in the case of AI and cognitive labor. The efficiency improvement is not incremental — it is, in many domains, an order of magnitude or more. A hundredfold reduction in the cost of producing competent code is not a marginal efficiency gain; it is a transformation. The latent demand for cognitive output is enormous — every organization, every individual, every government has a backlog of cognitive work that was previously too expensive to undertake. And the range of applications is effectively unlimited — cognitive labor applies to every sector, every industry, every domain of human activity.
Jevons would have looked at these conditions and drawn the same conclusion he drew about coal in 1865: that the notion of AI "replacing" human cognitive labor in a way that reduces total cognitive demands is "wholly a confusion of ideas." The efficiency will increase consumption. The consumption will create new structures. The new structures will demand more of the resource than the old structures ever did. The only question — and it is the question Jevons could not answer for coal, and that no one has yet answered for cognition — is what happens when the resource being consumed is not an underground deposit of fossilized carbon but the limited cognitive and attentional capacity of living human beings.
The rebound effect that ate the world is not a story about coal, or printing, or photography, or AI. It is a story about the nature of efficiency itself — about the stubborn, counterintuitive, empirically irrefutable fact that making things easier to do results in more things being done, not fewer. Jevons saw it with the clarity of a man who trusted arithmetic more than optimism. The arithmetic has not changed. The resource has.
In 1871, six years after publishing *The Coal Question*, William Stanley Jevons produced the work that would cement his position in the history of economic thought. *The Theory of Political Economy* introduced, with mathematical precision, the concept that had been forming in his mind for over a decade: that the value of a good is determined not by the total amount of labor required to produce it, nor by the total satisfaction it provides, but by the satisfaction provided by the last unit consumed — the marginal utility. A glass of water to a dying man in a desert has enormous marginal utility. A glass of water to a man standing next to a river has virtually none. The water is the same. The context — specifically, the quantity already available — is what determines its value.
The insight was revolutionary not because it was entirely new — Hermann Heinrich Gossen had articulated a version of it decades earlier, and Carl Menger and Léon Walras arrived at similar conclusions nearly simultaneously — but because Jevons formulated it with the mathematical rigor that made it operational. He expressed the theory in the language of calculus: the value of a good is its derivative with respect to quantity, the rate at which total utility changes as consumption increases by one infinitesimal unit. This formulation allowed for precise analysis of economic behavior that had previously been described only in qualitative terms. The farmer does not plant another acre because farming is "valuable." He plants another acre because the marginal revenue from that acre exceeds the marginal cost. The consumer does not buy another loaf because bread is "useful." She buys another loaf because the satisfaction from that specific loaf exceeds the price. Economic decisions happen at the margin, and the margin is where analysis must focus.
This framework, when applied to the cognitive production landscape of the AI era, generates predictions of striking specificity — predictions that the available evidence appears to confirm.
The first prediction concerns the economic value of abundance. As the quantity of any good increases, its marginal utility declines. This is the law of diminishing marginal utility, and it is among the most robust empirical regularities in economics. Applied to AI-generated cognitive output, the prediction is stark: as AI floods the market with competent code, adequate prose, serviceable designs, and functional analyses, the marginal value of each additional unit of competent output approaches zero. The thousandth AI-generated blog post on a given topic provides negligible value to the reader who has already encountered nine hundred and ninety-nine similar posts. The fiftieth AI-designed logo in a client presentation adds almost nothing to the decision-making process. The hundredth AI-generated unit test in a codebase contributes diminishing returns to reliability.
Jevons understood — and modern economics confirms — that declining marginal utility does not prevent production from continuing as long as marginal cost falls faster than marginal value. If the cost of producing the thousandth blog post is essentially zero, even a marginal value of 0.001 units of satisfaction justifies production. The result is a world flooded with output that is technically valuable but practically negligible — a flood of adequacy, in which the average quality of available cognitive output is higher than at any point in human history while the marginal value of any specific piece of that output is lower than it has ever been.
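The logic reduces to a one-line stopping rule. In the sketch below the functional form is an illustrative assumption of mine, not Jevons's own notation: producing the x-th unit is rational while marginal utility exceeds marginal cost.

```latex
\frac{dU}{dx} > MC(x),
\qquad \text{e.g. with } U(x) = a\ln(1+x):\quad
\frac{a}{1+x^{*}} = \varepsilon
\;\Longrightarrow\;
x^{*} = \frac{a}{\varepsilon} - 1
```

As the marginal cost ε falls toward zero, the rational stopping point x* grows without bound. Near-free production does not merely permit the flood of adequacy; it mandates it.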
This is not a hypothetical scenario. It is a documented reality. By the mid-2020s, the symptoms of the adequacy flood were visible across every domain that AI had touched. In software development, the proliferation of AI-generated code had produced what several industry observers described as a "code tsunami" — vast quantities of functional but unremarkable software that worked but did not inspire, that solved problems but did not surprise. In written content, the volume of AI-assisted articles, reports, and marketing copy had grown so large that search engines struggled to distinguish signal from noise, and "AI slop" had entered the vernacular as a term for technically competent text that no human had a particular reason to have written or to read. In visual design, AI image generators had made it possible to produce professional-quality imagery at near-zero cost, and the result was not a golden age of visual culture but a landscape saturated with images that were individually impressive and collectively numbing.
Jevons's marginal analysis explains why this saturation was inevitable and why it cannot be resolved by producing better AI output. The issue is not quality. The issue is quantity. When quantity increases sufficiently, marginal utility declines regardless of quality. A world containing a million excellent images is a world in which any one excellent image has minimal marginal value. Excellence does not protect against the erosion of marginal utility when excellence itself becomes abundant. The medieval manuscript illuminator derived enormous value from each masterwork because masterworks were scarce. The AI era produces masterworks (or reasonable facsimiles) at industrial scale, and scarcity — the foundation of economic value — evaporates.
The second prediction from Jevons's marginal framework concerns the redistribution of value toward scarcity. When a previously valuable good becomes abundant, economic value migrates to whatever remains scarce. In the coal economy, when cheap energy became abundant, the economic premium shifted to the other inputs that the cheap energy demanded: skilled labor, raw materials, capital equipment, transportation infrastructure. The abundant resource ceased to command premium pricing; the scarce complements commanded more.
In the cognitive economy, the migration of value follows an analogous pattern. When competent cognitive output becomes abundant, the premium shifts to attributes that AI cannot easily replicate: authenticity, personal narrative, cultural specificity, emotional resonance, ethical judgment, institutional knowledge, relational trust. These qualities are scarce not because they are difficult to define but because they emerge from processes — lived experience, community membership, embodied presence — that AI does not and perhaps cannot share. *The Orange Pill* documents this shift with characteristic specificity. The human author's byline gains value precisely because the reader knows it represents a specific person's actual experience and judgment, not a statistical recombination of training data. The hand-thrown ceramic gains value precisely because its imperfections testify to a physical process that no algorithm replicates. The doctor's diagnosis gains value precisely because it comes with the accountability, empathy, and interpretive judgment that institutional trust requires.
Jevons's framework reveals that this value migration is not sentimental. It is economic. When marginal utility declines for the abundant good, the consumer's marginal dollar migrates to the scarce good where marginal utility remains high. If excellent AI-generated prose is available for free, the consumer's willingness to pay for additional AI-generated prose approaches zero. But the consumer's willingness to pay for a specific human writer's perspective — a perspective grounded in unique experience, earned authority, and personal voice — remains positive, because that perspective is not available from any other source. Scarcity sustains marginal utility. Abundance destroys it. This is not a cultural preference. It is a mathematical consequence of the diminishing marginal utility function.
The third prediction — and the most troubling — concerns the transition period between the old economy and the new one. In the coal economy, the shift from expensive, inefficient energy to cheap, efficient energy did not proceed smoothly. The industries that depended on the old energy economics — hand-loom weaving, canal transport, rural blacksmithing — did not transition gracefully into the new industrial order. They were destroyed, and the human beings embedded in them suffered enormously. The new economy eventually generated more employment, more wealth, and more opportunity than the old, but the "eventually" lasted decades, and the transition costs were borne disproportionately by those least equipped to bear them.
Jevons's marginal analysis predicts an analogous transition in cognitive labor. The old economy valued competent cognitive output per se: the ability to write clean code, produce clear prose, design functional interfaces, conduct reliable analyses. These skills were scarce, and their scarcity sustained their economic value. AI has made them abundant. The new economy will value the scarce complements — judgment, creativity, authenticity, relational intelligence — but the transition from one value regime to the other will not be smooth. Workers whose primary value proposition was competent execution will find their marginal value declining toward zero. Workers whose primary value proposition is creative judgment, authentic voice, or relational trust will find their marginal value increasing. But these are not always the same workers, and the retraining required to move from one category to the other may be far more profound than any technical upskilling program can provide.
The transition is further complicated by what might be called the adequacy trap. When AI produces output that is adequate — functional code, grammatically correct prose, visually coherent design — the marginal improvement offered by a skilled human may be real but economically insufficient to justify the cost. The client who can obtain a ninety-percent-quality logo for near-zero cost may not be willing to pay a designer's rate for the additional ten percent. The additional ten percent has positive marginal utility, but its marginal utility may not exceed its marginal cost. The designer is caught between abundance below and scarcity above: the market is flooded with adequate output that undercuts her on price, while the truly exceptional work that commands premium pricing requires a level of creative vision that only a small fraction of designers possess. The middle disappears. This is the hollowing out of the cognitive labor market that Jevons's framework predicts with mathematical precision.
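The trap reduces to a single inequality, illustrated here with hypothetical numbers: hiring the human is rational only when the value of the increment, not of the whole, exceeds the fee.

```latex
U_{\text{human}} - U_{\text{AI}} > F,
\qquad \text{e.g. } \$100 - \$90 = \$10,
\;\text{so any fee } F > \$10 \text{ fails the test}
```

The designer's work may be worth $100 in total, but she is paid at the margin, and the margin is $10.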
Jevons himself experienced a version of this analysis in his assessment of coal's future. He understood that Britain's industrial economy occupied a specific position in a larger trajectory — a position that seemed permanent but was in fact transitional. The coal was abundant enough to fuel an industrial revolution but finite enough to impose an eventual constraint. The efficiency improvements that made the revolution possible were also accelerating the consumption that would end it. The economy was living on a curve that felt like a plateau but was actually a peak.
The analogy to the cognitive economy is imperfect — human cognition is renewable in a way that coal is not — but the structural insight transfers. The current period, in which human cognitive workers coexist with AI systems in a landscape that rewards both, may not be a permanent equilibrium. It may be a transitional phase, analogous to the period in which hand-loom weavers coexisted with power looms. The coexistence was real but temporary. The power loom was simply too efficient, and the rebound effect — the explosion of demand for cheap textiles — did not save the hand-loom weavers. It created a textile industry that employed more people than ever before, but in different roles, with different skills, under different economic logic.
The marginal utility framework provides one more critical insight: the value of curation rises as the value of creation falls. When production is cheap and abundant, the scarce resource is not the ability to create but the ability to select. The reader drowning in AI-generated text values the editor who can identify what is worth reading. The executive overwhelmed by AI-generated strategic analyses values the advisor who can identify which analysis is trustworthy. The consumer facing an infinite scroll of AI-generated products values the curator who can identify what is genuinely worth owning. In Jevonsian terms, the marginal utility of curation rises as the marginal utility of creation falls, because curation addresses the scarcity (attention, judgment, trust) that creation has not resolved but exacerbated.
This redistribution of value from creation to curation, from production to judgment, from output to taste, represents one of the most consequential economic transformations of the AI era. Jevons did not predict it in so many words — his concern was coal, not cognition — but his analytical framework makes the prediction inescapable. When any resource becomes abundant, value migrates to the complement that remains scarce. In an age of infinite production, that scarce complement is the capacity to discern what, among the infinite, is worth a finite human being's irreplaceable time.
Return to *The Theory of Political Economy* of 1871, and to the proposition at its core: the value of any good is determined not by the total satisfaction it provides, nor by the labor required to produce it, but by the satisfaction provided by the last unit consumed. The final degree of utility — what subsequent economists would call marginal utility — governs exchange, governs price, governs the entire machinery of markets. A man dying of thirst values the first glass of water more than anything in the world. The tenth glass he values hardly at all. The hundredth glass he would pay someone to take away. The good has not changed. The water is the same water. What has changed is the quantity available relative to the desire, and it is this ratio — this relationship between abundance and the diminishing satisfaction each additional unit provides — that determines economic value.
Jevons expressed this relationship with mathematical precision. If U represents total utility and x represents the quantity of a commodity, then dU/dx — the derivative of utility with respect to quantity, the rate at which satisfaction changes as one more unit is added — is what governs rational economic behavior. This derivative is positive but decreasing: each additional unit adds something, but less than the unit before it. The curve slopes downward. Satisfaction decays. And somewhere along that curve, the marginal utility of the good falls below the marginal utility of whatever must be sacrificed to obtain it, and the rational actor stops consuming.
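The three glasses of water can be made concrete with an illustrative utility function. The quadratic form is my assumption, chosen because it allows for satiation:

```latex
U(x) = bx - \tfrac{c}{2}x^{2},
\qquad \frac{dU}{dx} = b - cx
```

Marginal utility is positive but falling for x < b/c (the first glasses), reaches zero at the satiation point x = b/c (somewhere around the tenth glass), and turns negative beyond it (the hundredth glass, which the man would pay to have carried away).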
This mathematical framework, developed to explain the pricing of corn and cotton in Victorian markets, describes with eerie precision what happens when artificial intelligence floods the world with cognitive output.
Consider the supply of written content before and after the deployment of large language models. Before 2023, producing a competent analytical report required hours of human labor — research, drafting, revision, editing. The scarcity of competent reports gave each one meaningful economic value. An analyst who could produce clear, accurate reports commanded a salary commensurate with that scarcity. The marginal utility of each additional report, to the organization consuming it, was high, because reports were expensive enough that only the most necessary ones were commissioned. Demand was rationed by cost.
After the deployment of AI writing tools, the marginal cost of producing a competent report collapsed. The analyst who previously produced one report per week could now produce five. The organization that previously commissioned ten reports per quarter could now commission fifty. The quality of each individual report might be equivalent to or slightly below the pre-AI standard, but the quantity available exploded. And here Jevons's marginal utility framework delivers its verdict: as quantity rises, the marginal value of each additional unit falls. The fiftieth report provides less insight, less decision-making value, less organizational utility than the tenth. The five hundredth provides almost none. The report has not become worse. It has become common. And commonality, in the arithmetic of marginal utility, is the enemy of value.
This is the mechanism underlying what *The Orange Pill* describes as the tsunami of adequate content. The phrase captures the experiential reality — the sensation of drowning in competent but unremarkable output — but Jevons's framework provides the economic explanation. When production costs approach zero, the rational response is to produce everything with any positive marginal value. A report that provides even trivial insight is worth producing if it costs nothing to produce. A line of code that solves even a minor inconvenience is worth writing if the writing requires no effort. A design variant that improves the user experience by one percent is worth creating if creation takes seconds rather than hours. Each individual production decision is rational. The aggregate result is a flood of output in which the marginal utility of each additional piece approaches zero while the total volume approaches infinity.
Jevons understood that declining marginal utility does not merely reduce the value of individual goods. It transforms the structure of markets. When the marginal utility of a commodity class falls sufficiently low, consumers stop differentiating between producers of that commodity. Wheat is wheat. Coal is coal. Water is water. The commodity becomes, in economic terminology, fungible — interchangeable, undifferentiated, priced at marginal cost rather than at any premium reflecting quality or origin. The producer of fungible commodities earns thin margins and competes solely on price and volume. This is the economic structure of commodity markets, and it is the structure toward which AI is pushing vast categories of cognitive work.
Competent code is becoming fungible. Adequate prose is becoming fungible. Serviceable design is becoming fungible. The AI-generated version and the human-generated version, at the level of competence, are increasingly interchangeable. The buyer of competent code does not care whether a human or a machine wrote it, just as the buyer of wheat does not care which field it grew in. The marginal utility of one more competent function, one more adequate paragraph, one more serviceable layout is the same regardless of its source. And when the AI-generated version costs a fraction of the human-generated version, the economic logic is merciless: the commodity goes to the lowest-cost producer.
Jevons's marginal analysis predicts this commodification, but it also predicts something more nuanced — something that offers a counterweight to the despair the commodification narrative tends to produce. As the marginal utility of abundant goods approaches zero, the marginal utility of scarce goods increases. This is not optimism. It is mathematics. When the market is saturated with competent analytical reports, the report that provides genuine insight — the report that reframes the question, that identifies the variable everyone else missed, that tells the executive something she did not know she needed to know — becomes disproportionately valuable. Not incrementally more valuable. Disproportionately. Because it is now defined against a background of unlimited adequacy, and the contrast makes its exceptionality visible in a way that was impossible when all reports were expensive and therefore scarce.
The same dynamic operates across every domain of cognitive production. When AI generates unlimited competent code, the developer who architects elegant systems — who sees the structural problem beneath the surface requirement, who designs for the constraints that will matter in eighteen months rather than the features requested today — becomes not less valuable but radically more so. When AI produces unlimited serviceable prose, the writer who captures something true about human experience — who makes the reader stop and reread a sentence, who crystallizes a feeling that had been formless — becomes not replaceable but irreplaceable. The marginal utility curve, applied to cognitive abundance, does not merely predict the devaluation of the common. It predicts the elevation of the rare.
This bifurcation — the simultaneous commodification of the adequate and elevation of the exceptional — is one of the most consequential predictions that Jevons's framework generates for the AI economy. It suggests that the impact of AI on human cognitive work will not be uniform. It will be polarizing. The vast middle of cognitive production — work that is competent, reliable, and unremarkable — will be absorbed by AI and priced at AI's marginal cost, which approaches zero. The extremes — work that is genuinely exceptional, work that possesses qualities AI cannot replicate — will command premiums that increase precisely because the middle has been eliminated. The economic distance between adequate and exceptional will widen into a chasm.
The qualities that resist commodification are those that cannot be produced by optimizing for the average. Jevons was attentive to this distinction in his own analysis of value. He noted that the marginal utility of goods varies not only with quantity but with the specific desires and circumstances of the consumer. A glass of water has different marginal utility for a man in a desert than for a man beside a river. The same quantity, the same good, different values — because context transforms utility. In the cognitive domain, the context that transforms utility is meaning. AI can produce a technically adequate eulogy for a stranger. Only a human who knew the deceased can produce a eulogy that makes the congregation weep. The marginal utility of the former, in the specific context where meaning matters, is zero. The marginal utility of the latter is beyond calculation.
This is the economic basis for what *The Orange Pill* terms the "authenticity premium" — the increasing value of work that is legibly, provably, meaningfully human. Jevons's marginal utility theory explains why this premium must increase as AI-generated content becomes more abundant. The abundant good falls in marginal value. The scarce good — the good defined by qualities that abundance cannot produce — rises. Authenticity, personal significance, cultural specificity, emotional truth: these are not sentimental categories. They are economic categories, defined by their scarcity in a market flooded with competent artifice.
But Jevons's framework also issues a warning embedded in the mathematics. The marginal utility curve does not guarantee that the scarce good will find its buyer. It guarantees only that if the scarce good is found, it will be valued highly. In a market of infinite volume, the exceptional piece of work must first be discovered before its marginal utility can be realized. When ten thousand adequate reports flood the inbox, the brilliant eleventh report may never be read. When ten million competent songs are available on the streaming platform, the transcendent composition may never be heard. The Jevonsian prediction is not that excellence will be automatically rewarded. The prediction is that excellence will be theoretically more valuable than ever while being practically harder to surface than ever. Scarcity of quality coexists with scarcity of attention, and the second scarcity may prevent the first from realizing its value.
This is where the Jevons paradox and the marginal utility theory converge in their most uncomfortable implication for the cognitive economy. The paradox predicts that AI efficiency will produce more total cognitive output. The marginal utility theory predicts that most of this output will be nearly worthless. The intersection of these two predictions is a world glutted with valueless production and starved for the attention required to find the valuable exceptions. The economic logic is airtight. The human consequence is a marketplace in which the most important work is simultaneously the most valuable and the most likely to be buried.
Jevons himself, characteristically, did not flinch from the difficult implications of his own theories. He was content to follow the mathematics where it led, even when it led to conclusions that offered no comfort. The marginal utility of coal-derived energy was declining even as total consumption was rising, and the result was an economy increasingly dependent on a resource of diminishing returns — consuming more for less, running faster to stay in place. The marginal utility of AI-generated cognitive output is declining even as total production is rising, and the result may be structurally identical: an economy consuming vast quantities of cognitive output for diminishing returns, measuring productivity by volume while value quietly migrates to the margins where the machines cannot reach.
The question Jevons's framework poses is not whether AI will produce abundance. That outcome is mathematically certain. The question is whether abundance, in the domain of cognition as in the domain of coal, produces prosperity or merely produces consumption — whether the ten thousandth unit of output adds anything meaningful to the sum of human welfare, or whether it merely adds to the sum of human exhaustion. Marginal utility theory provides the tools to answer this question, but the answer it provides is disquieting: in the domain of cognitive production, as in every other domain Jevons studied, abundance and value move in opposite directions. The more there is, the less each piece matters. And the less each piece matters, the more must be produced to achieve any given level of total value. The treadmill does not stop. It was not designed to stop. It was designed, by the implacable logic of marginal returns, to accelerate.
Jevons died in 1882, at the age of forty-six, drowned in a swimming accident off the coast of Hastings. He did not live to see the coal mines of Britain begin their long decline, nor to witness the vindication of his central prediction — that coal consumption would continue to accelerate until physical limits intervened. By the time the geological and economic constraints he had foreseen began to bite, in the early decades of the twentieth century, Britain had already begun its transition to petroleum, and the coal question had been subsumed by the oil question, which was in all essential respects the same question with a different fuel. The resource changed. The paradox did not.
But there was a subtlety in Jevons's analysis that his contemporaries mostly overlooked and that subsequent economists have only intermittently appreciated. Jevons did not merely argue that coal consumption would increase. He argued that it would increase until it encountered a hard constraint — the finite supply of coal in British mines — and that this constraint, when it arrived, would not produce a gentle plateau but a crisis. The efficiency-driven expansion of consumption was not a stable system trending toward equilibrium. It was an accelerating system trending toward a wall. The wall might be geological (running out of coal), economic (coal becoming too expensive to extract), or ecological (the consequences of combustion becoming too severe to tolerate). But the wall was inevitable, because the paradox contained no internal mechanism for self-correction. Efficiency would keep increasing consumption until something external stopped it.
The application of this analysis to human cognitive labor requires confronting a question that the technology industry has thus far preferred to avoid: what is the hard constraint on human cognitive output, and what happens when the AI-driven acceleration of cognitive demands encounters it?
The constraint is not mysterious. It is biological. The human brain consumes approximately twenty watts of power and operates within a body that requires sleep, nutrition, social connection, and periodic freedom from stimulation. Human attention is not merely finite in the abstract economic sense that all resources are finite. It is finite in the specific biological sense that exceeding its limits produces measurable, predictable, and well-documented deterioration: impaired judgment, reduced creativity, emotional dysregulation, physical illness. The medical literature on burnout, sleep deprivation, and chronic stress describes what happens when cognitive demands exceed cognitive supply. The symptoms are the cognitive equivalent of a coal mine running dry: the resource does not gradually diminish. It collapses.
The Jevons paradox predicts that AI will increase the total demand for human cognitive labor. The biological constraint predicts that human cognitive labor cannot expand indefinitely. The collision between these two forces — the economic logic of accelerating demand and the biological reality of finite supply — is the cognitive equivalent of Jevons's coal crisis. And like the coal crisis, it will not arrive as a gradual transition. It will arrive as a systemic failure.
The evidence that this collision is already occurring fills the pages of *The Orange Pill* and the broader literature on AI-augmented work. Nat Eliason's confession — "I have NEVER worked this hard in my life" — is not the testimony of a person approaching equilibrium. It is the testimony of a person approaching a wall. The Substack post about the husband addicted to Claude Code describes a household in which the efficiency tool has not produced leisure but has consumed it, not freed time but devoured it, not reduced stress but intensified it to the point of relational crisis. The *Harvard Business Review* study that found AI increasing rather than decreasing work intensity is not describing a temporary adjustment period. It is describing the Jevons paradox in the phase immediately before the hard constraint is hit.
Jevons would have recognized the pattern instantly because he had documented it with meticulous precision in the coal economy. In the decades before a mine's exhaustion, production does not decline gradually. It accelerates. The mine operators, facing rising extraction costs as the easiest seams are depleted, invest in more efficient extraction technology — which, per the paradox, increases the rate of extraction. The mine appears to be producing at peak capacity right up until the moment it cannot produce at all. The transition from abundance to scarcity is not a gentle curve. It is a cliff.
The cognitive parallel is disturbingly exact. An individual operating at maximum AI-augmented productivity appears, from the outside, to be at peak performance. Output is high. Projects are shipping. Deadlines are met. The metrics that organizations use to measure cognitive productivity — lines of code, features deployed, documents produced — all point upward. But the internal experience may be one of accelerating depletion: declining sleep, narrowing attention, eroding creativity, fraying relationships. The metrics that measure output do not measure the resource being consumed to produce it. Just as coal production statistics did not measure the remaining coal in the ground, productivity statistics do not measure the remaining cognitive capacity in the human.
Jevons's analysis of mine depletion introduced a concept that has particular resonance for the cognitive economy: the distinction between the rate of extraction and the remaining stock. A high rate of extraction is not evidence of abundance. It may be evidence of the opposite — of an accelerating draw-down on a diminishing reserve. The mine that produces the most coal in its final year is not the most productive mine. It is the most desperate. And the worker who produces the most cognitive output in the months before burnout is not the most productive worker. She is the most depleted.
The uncomfortable implication is that the organizational metrics used to evaluate AI-augmented cognitive work may be systematically misleading. An increase in output per worker, measured in the conventional terms of features shipped or documents produced, looks like a productivity gain. But if that increase is achieved by drawing down the worker's cognitive reserves at an accelerating rate — by substituting intensity for sustainability, by consuming attention and creativity faster than they can be replenished — then the apparent productivity gain is actually a depletion event disguised as growth. Jevons saw exactly this pattern in the coal economy: rising production per mine, celebrated as technological triumph, was in many cases the symptom of an extraction rate that could not be sustained.
The depletion of cognitive resources differs from the depletion of coal in one crucial respect that makes the problem simultaneously more tractable and more insidious. Coal, once burned, is gone. The mine does not replenish. Cognitive resources — attention, creativity, emotional resilience — are renewable. Sleep restores them. Rest restores them. Meaningful social connection restores them. The human brain is not a mine with a fixed stock of cognitive coal. It is a forest that regrows if given time and conditions conducive to growth. But — and here the Jevons paradox reasserts itself with brutal force — if the efficiency-driven acceleration of cognitive demand eliminates the time and conditions necessary for renewal, the renewable resource behaves as if it were non-renewable. The forest that is logged faster than it can regrow is, for all practical purposes, a mine. The depletion is real even if the resource is theoretically renewable, because the rate of consumption has exceeded the rate of regeneration.
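The forest-versus-mine dynamic can be made concrete with a toy stock-and-flow sketch. This is purely illustrative — the function name and every rate below are hypothetical, chosen only to show the mechanism: a stock with logistic regrowth sustains a draw below its maximum regeneration rate indefinitely, but collapses outright once the draw exceeds it.

```python
# Illustrative sketch of a renewable stock (all numbers hypothetical).
# Below the maximum regeneration rate, consumption is sustainable forever;
# above it, the renewable resource behaves exactly like a mine.

def simulate(stock, regen_rate, consumption, steps):
    """Logistic regrowth minus a fixed per-step draw-down."""
    capacity = stock  # start at full capacity
    history = [stock]
    for _ in range(steps):
        regrowth = regen_rate * stock * (1 - stock / capacity)
        stock = max(0.0, stock + regrowth - consumption)
        history.append(stock)
    return history

# Max sustainable yield here is regen_rate * capacity / 4 = 6.25 per step.
sustainable = simulate(stock=100.0, regen_rate=0.25, consumption=4.0, steps=40)
overdrawn = simulate(stock=100.0, regen_rate=0.25, consumption=8.0, steps=40)

# The sustainable run settles near a stable equilibrium (about 80);
# the overdrawn run hits zero well before the horizon ends.
```

The point of the sketch is the asymmetry: the overdrawn run looks healthy for many steps — extraction proceeds at full rate — and then fails completely, which is the cliff rather than the gentle curve.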
This is the specific mechanism through which the Jevons paradox, applied to human cognition, produces the burnout epidemic that Byung-Chul Han diagnosed in his analysis of the "achievement society" and that the Orange Pill documents with case-study specificity. AI efficiency tools accelerate the rate of cognitive extraction. The acceleration is driven by the same economic logic Jevons identified: when the cost of cognitive production falls, demand for cognitive production rises, and the human brain is the resource from which increased production is extracted. If the increased demand leaves insufficient time for cognitive renewal — for sleep, for reflection, for purposeless thought, for the mental fallow periods that neuroscience has identified as essential for creative insight — then the renewable resource is being consumed as if it were finite. The paradox converts a renewable resource into a depleting one by accelerating consumption beyond the regeneration rate.
Jevons proposed no solution to the coal question because he recognized that the paradox admitted no easy one. Efficiency could not solve the problem because efficiency was the problem. Regulation might slow consumption but could not stop it without stopping the economic growth that depended on it. Substitution — replacing coal with another energy source — was the only structural solution, and it required the discovery or development of an alternative that could fulfill coal's economic function without coal's limitations. The transition from coal to oil and eventually to nuclear and renewable energy sources confirmed Jevons's implicit prediction: the paradox is resolved not by managing the depleting resource more carefully but by transcending it — by finding a different resource that can absorb the ever-increasing demand without collapsing.
The cognitive analogy suggests that the resolution of the AI-driven cognitive depletion crisis will not come from better time management, or mindfulness apps, or corporate wellness programs. These are the cognitive equivalents of more efficient steam engines — they improve the efficiency of cognitive resource use, which, per the paradox, will accelerate cognitive resource consumption. The resolution, if there is one, must come from a structural transformation: a redefinition of what cognitive work is, what it is for, and how the relationship between human cognition and artificial cognition is organized. This transformation is the subject to which Jevons's framework points but that his framework alone cannot achieve — because the paradox describes the problem with pitiless accuracy but offers no mechanism for escape. The escape must come from outside the paradox. From a decision, individual and collective, that efficiency is not the only value, and that the resource most worth conserving is the one currently being consumed fastest.
In a passage from *The Theory of Political Economy* that has been quoted so often it has lost its power to shock, Jevons declared: "Value depends entirely upon utility." Not upon labor. Not upon cost of production. Not upon any intrinsic property of the good itself. Upon utility — the satisfaction the good provides to the person who consumes it, at the moment of consumption, given the quantity already available. The labor theory of value, which had dominated economic thought from Adam Smith through Karl Marx, held that the value of a good was determined by the labor required to produce it. A chair that took ten hours to build was worth more than a chair that took five hours. Jevons overturned this entirely. A chair is worth what someone will pay for it, which depends on how much they want it, which depends on how many chairs they already have. The labor is irrelevant to the value. A chair built in ten hours and a chair built in five minutes have the same value if they provide the same satisfaction.
The demolition of the labor theory of value is Jevons's most consequential contribution to economic thought, and its application to the AI economy is so direct that it barely requires translation. If value depends not on the labor required for production but on the utility provided to the consumer, then the fact that AI can produce cognitive output with minimal human labor does not, by itself, reduce the value of that output. A report that takes an AI system thirty seconds to generate and a report that takes a human analyst thirty hours to write have the same value if they provide the same insight. The consumer does not care about the production process. The consumer cares about the result.
This is the economic logic that makes AI so devastating to the pricing of cognitive labor. Under the labor theory of value, the human analyst's thirty hours of work would be reflected in the price of the report. Under Jevons's utility theory, the price reflects only the report's usefulness, which is independent of the time invested. When AI can produce reports of equivalent utility at a fraction of the cost, the price converges on AI's cost structure, not the human's. The human analyst's thirty hours are not valueless, but they are economically irrelevant — like the labor of the handloom weaver after the introduction of the power loom. The weaver's skill was undiminished. The cloth was identical. But the price was set by the machine, and the weaver's labor, however skilled, could not command a premium over a mechanized process that produced the same output.
This analysis might seem to lead inexorably toward the conclusion that human cognitive labor is doomed to economic irrelevance — that once AI can match human output in any domain, the human worker in that domain becomes the handloom weaver of the twenty-first century. But Jevons's utility framework, applied with the precision its author would have demanded, reveals a more complex picture.
The utility of a good is not a single, fixed quantity. It is context-dependent, consumer-dependent, and — crucially — dependent on qualities that may not be visible in the output itself. Jevons was explicit about this: utility is subjective, residing in the relationship between the good and the consumer's desires, not in the good alone. Two glasses of water with identical chemical composition have different utilities depending on who is drinking and when. Two analytical reports with identical conclusions have different utilities depending on who produced them and why.
This subjectivity of utility creates a space that AI cannot easily occupy. Consider the difference between a medical diagnosis generated by an AI system and the same diagnosis delivered by a physician who has treated the patient for twenty years. The informational content may be identical. The clinical recommendation may be identical. But the utility is not identical, because utility encompasses trust, relationship, contextual understanding, and the patient's subjective experience of being cared for by a person who knows them. The AI diagnosis has utility. The physician's diagnosis has a different kind of utility — one that includes dimensions the AI cannot provide, not because of any technical limitation but because the utility resides partly in the source, not merely in the content.
Jevons's framework predicts that as AI drives the price of content-utility toward zero, source-utility will become the dominant determinant of value. The question "what does this say?" will matter less than the question "who is saying it, and why?" This is already visible in the emerging economy of human-AI creative production. The AI-generated painting that is technically proficient has content-utility. The painting by a human artist working through grief, or joy, or obsession, has source-utility — utility that derives from the identity, intention, and experience of the creator. As AI makes content-utility abundant, source-utility becomes the scarce factor, and therefore — by the iron logic of Jevons's marginal analysis — the valuable one.
*The Orange Pill* captures this dynamic in its discussion of the authenticity premium, but Jevons's framework adds the crucial economic mechanism. The premium on authenticity is not a cultural preference or a sentimental attachment to the human. It is an economic inevitability driven by the mathematics of utility and scarcity. When the utility derived from content is available at near-zero cost from AI, the only utility that commands a premium is the utility derived from something AI cannot provide. And the thing AI cannot provide — at least not yet, and perhaps not ever — is the specific, situated, mortal human experience of making something that matters to the maker.
This economic logic has implications that extend beyond the creative industries into every domain of cognitive work. In legal analysis, the utility of a correctly researched brief is content-utility; the utility of counsel from an attorney who understands the client's business, culture, and risk tolerance is source-utility. In education, the utility of a correctly explained concept is content-utility; the utility of being taught by someone who recognizes when a student is confused, bored, or on the verge of a breakthrough is source-utility. In management, the utility of a correctly formulated strategy is content-utility; the utility of leadership from someone who has earned the trust of the team is source-utility.
In every case, Jevons's framework predicts the same bifurcation: content-utility will be commodified and priced at AI's marginal cost, while source-utility will command increasing premiums as its relative scarcity grows. The workers who provide only content-utility — who produce correct but undifferentiated output — will face the pricing pressure of the handloom weaver. The workers who provide source-utility — whose value derives from who they are and how they relate, not merely from what they produce — will find their economic position strengthened precisely because AI has made the alternative so cheap.
But Jevons's intellectual honesty compels the acknowledgment of a complicating factor. The distinction between content-utility and source-utility is not always clear, and it is not always valued. Many markets, many organizations, many consumers do not care about source-utility. They want the cheapest competent output, regardless of its origin. The patient who cannot afford a personal physician values the AI diagnosis not as a lesser substitute for the physician's but as a genuine alternative to no diagnosis at all. The small business that cannot afford a legal team values the AI-generated contract not as a substitute for human counsel but as a substitute for having no contract at all. In these cases — and they represent a large portion of the global economy — the utility theory does not predict an authenticity premium. It predicts straightforward substitution: AI replaces human labor because the consumer's utility function does not include source-based dimensions.
The net effect, Jevons's theory suggests, is not the wholesale replacement of human cognitive labor or its wholesale preservation. It is a restructuring in which the type of utility determines the outcome. Work that provides content-utility only will be absorbed by AI. Work that provides source-utility — that is valuable because of who provides it, not merely because of what it contains — will survive and may thrive. The economic landscape will be reshaped not by whether AI can match human output, which in most domains it increasingly can, but by whether the consumer values something in the output that requires it to have been produced by a specific human being in a specific context for a specific reason.
Jevons's demolition of the labor theory of value was, in its time, an act of intellectual courage. It told craftsmen and laborers that their hours of toil did not, in themselves, create value — that value resided in the eye of the consumer, not in the sweat of the producer. This was a hard truth, and many rejected it. The AI economy delivers a complementary hard truth: that cognitive labor, however skilled, does not create value merely by being cognitively laborious. The value resides in the utility the output provides, and if AI provides the same utility at lower cost, the labor is economically superfluous regardless of the skill it embodies.
But the same framework that delivers this hard truth also delivers its antidote. If value resides in utility, and if utility is subjective and multidimensional, then the human worker's task is not to compete with AI on the dimensions where AI excels — speed, cost, volume, consistency — but to provide utility on dimensions where AI does not operate. The dimensions of meaning, relationship, trust, moral responsibility, cultural situatedness, personal history, and the specific gravity of having been made by someone who could have done otherwise. These are not consolation prizes. In an economy of infinite content-utility, they are the only things left that money can meaningfully buy. Jevons's price theory does not doom human cognitive labor. It redefines it — strips away the economic value of the mechanical and amplifies the economic value of the meaningful. Whether this redefinition produces a better world or merely a more stratified one depends on choices that economics can inform but cannot make.
There is a moment in the career of every powerful idea when its most ardent defenders must ask whether the idea has been pushed too far. The Jevons paradox has been applied, in the preceding chapters and in the broader economic literature, to coal, to electricity, to computation, to bandwidth, to cognitive labor — to every domain in which efficiency improvements have been observed to increase rather than decrease total resource consumption. The pattern is so robust, so consistently confirmed across two centuries of technological change, that it risks becoming an intellectual reflex: efficiency always backfires, rebound always dominates, conservation through technology is always an illusion. This reflex would have horrified Jevons himself, who was above all an empiricist — a man who trusted data over dogma, even when the dogma was his own.
The intellectual honesty that characterized Jevons's work demands an examination of the conditions under which the paradox does not hold. Because those conditions exist. They are empirically documented. And they may be relevant — perhaps decisively relevant — to the question of whether AI-driven efficiency in cognitive labor must inevitably produce more consumption, more intensity, more depletion, or whether some alternative outcome is possible.
The Jevons paradox requires a specific set of conditions to operate. First, the efficiency improvement must reduce the effective cost of using the resource. Second, the demand for the service provided by the resource must be sufficiently elastic — that is, responsive to price changes — that reduced cost translates into increased consumption. Third, the indirect and structural effects must be large enough to overwhelm the per-unit savings. When any of these conditions is absent or weak, the paradox weakens or disappears. The rebound effect may be less than one hundred percent — meaning that efficiency produces a genuine reduction in total consumption, even if not as large a reduction as the naive calculation would suggest. This outcome — partial rebound, net conservation — is empirically common. Full backfire, where total consumption increases, is the dramatic case. But it is not the universal case.
Consider the lighting efficiency revolution of the early twenty-first century. The transition from incandescent bulbs to LEDs reduced the energy required per lumen of light by approximately ninety percent. The Jevons paradox predicted that total energy consumption for lighting would increase: cheaper light would mean more light. And indeed, the rebound effect was substantial. Homes and cities are more brightly lit than they were in the incandescent era. Decorative lighting, architectural lighting, and always-on displays proliferated. But the total energy consumed for lighting in developed nations did not increase by a factor of ten or twenty, as a full Jevons backfire would predict. In many countries, total lighting energy consumption actually declined, even as the quantity of light produced increased enormously. The rebound was real but partial. The efficiency gain was large enough, and the elasticity of demand for light was limited enough, that net conservation occurred.
The critical variable was the elasticity of demand. At some point, people have enough light. The desire for illumination, unlike the desire for computation or entertainment or status, has a natural saturation point. A room can only be so bright before additional brightness becomes unpleasant. A city can only be so illuminated before further illumination produces diminishing returns that even a zero marginal cost cannot overcome. The demand curve for light, unlike the demand curve for many other goods, bends toward horizontal at high quantities. The Jevons paradox weakens when it encounters genuinely bounded demand.
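The arithmetic separating partial rebound from full backfire is simple enough to sketch. The 90 percent figure echoes the LED-scale gain discussed above; the demand multipliers are hypothetical, and the function name is my own — this is a back-of-envelope illustration, not an empirical model.

```python
# Back-of-envelope rebound arithmetic (demand multipliers hypothetical).

def net_consumption(baseline, efficiency_gain, demand_multiplier):
    """Total resource consumed after an efficiency gain and the demand response.

    efficiency_gain: fraction of resource saved per unit of service (0 to 1).
    demand_multiplier: how much total service demand grows in response.
    """
    per_unit = 1.0 - efficiency_gain
    return baseline * per_unit * demand_multiplier

# A 90% per-unit saving with demand tripling: a large rebound,
# yet net conservation (~30 units, down from a baseline of 100).
bounded = net_consumption(100.0, 0.90, 3.0)

# Full Jevons backfire requires demand to grow more than 10x here
# (~120 units, up from 100) — which is why bounded demand matters.
backfire = net_consumption(100.0, 0.90, 12.0)
```

The lever is visible in the algebra: with a 90 percent per-unit saving, total consumption rises only if demand grows past the reciprocal of the remaining per-unit cost, a tenfold increase. Where demand saturates short of that, efficiency conserves, however dramatic the rebound looks.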
This observation opens a question of extraordinary importance for the cognitive economy: is the demand for cognitive output bounded or unbounded?
The Jevons-paradox analysis presented in previous chapters implicitly assumes that the demand for cognitive output is effectively unbounded — that there is always another feature to build, another report to write, another analysis to conduct, another design to iterate. The evidence marshaled from *The Orange Pill* and the broader literature on AI-augmented work supports this assumption in the short and medium term. Organizations that deployed AI did not run out of cognitive work to do. They found more. Much more. The rebound appeared complete and then some.
But Jevons's empirical method requires asking whether this short-term pattern will persist indefinitely, or whether — like the demand for light — the demand for cognitive output might eventually encounter a saturation point. And here the analysis becomes genuinely uncertain, which is to say genuinely interesting.
There are domains of cognitive work where demand appears to be naturally bounded. The number of legal contracts a small business needs does not expand indefinitely as the cost of producing contracts approaches zero. The business needs a lease, an employment agreement, a vendor contract, and a handful of others. Making them cheaper to produce does not create demand for a thousand more. The same is true for many categories of operational cognitive work: financial reports, compliance documents, standard operating procedures, routine correspondence. These are goods with natural demand ceilings, and AI efficiency in producing them may produce genuine conservation of cognitive labor — not full Jevons backfire, but a real reduction in the human hours required for tasks whose demand is bounded.
*The Orange Pill* itself provides evidence of both bounded and unbounded domains. The non-technical founder who uses AI to build a prototype over a weekend is operating in a domain of bounded demand: she needs one prototype, not a hundred. The AI enabled her to produce it, but the demand was finite and is now satisfied. By contrast, the professional developer who uses AI to accelerate feature development is operating in a domain of potentially unbounded demand: every feature shipped reveals two more that could be built, and the product roadmap expands to fill the time that efficiency saves. The paradox operates differently in these two domains. The founder experienced genuine liberation — a task that was previously impossible became possible, was completed, and was finished. The developer experienced the paradox — efficiency produced more work, not less.
The distinction between bounded and unbounded cognitive demand may be the most important variable determining whether the AI revolution resembles the LED transition (partial rebound, net conservation) or the coal transition (full backfire, accelerating consumption). And Jevons's framework, characteristically, does not offer a reassuring answer. It offers a conditional one: the outcome depends on whether humans and organizations treat the freed cognitive capacity as a gift to be enjoyed or a resource to be reinvested.
Here the paradox encounters a factor that Jevons, writing in the Victorian era, could not have anticipated: the possibility of conscious, deliberate resistance to the rebound effect. Jevons's coal consumers were price-taking agents in competitive markets. They had no choice but to use cheaper coal more intensively, because their competitors would. The structural incentives of the market enforced the paradox. But human cognitive labor is not coal. Humans are not price-taking commodities in competitive extraction markets. They are — or can be — agents capable of choosing not to maximize output, of deliberately leaving cognitive capacity undeployed, of valuing rest, relationship, and reflection as goods worth producing even when the marginal cost of producing more cognitive output is zero.
This possibility — the possibility that the Jevons paradox can be interrupted by human choice — is the most important departure from the coal analogy. Coal cannot choose not to be burned. Humans can choose not to work. The question is whether the economic and social structures of the AI era will permit that choice or will make it as practically impossible as it was for the Victorian factory owner who could not afford to leave his efficient engine idle while his competitor's ran.
The evidence, at this early stage, is mixed. Some organizations have experimented with using AI efficiency gains to reduce working hours rather than increase output — adopting four-day work weeks, capping the number of projects per team, deliberately choosing not to fill the space that efficiency creates. These experiments represent a conscious choice to defeat the Jevons paradox — to accept the efficiency gain as a reduction in total cognitive consumption rather than permitting the rebound to convert it into an increase. The experiments are small and their long-term viability is unproven, but they demonstrate that the paradox is not a physical law. It is an economic tendency, and economic tendencies can be counteracted by institutional design, cultural norms, and individual choices.
Jevons would not have been surprised by this possibility, though he might have been skeptical of its scalability. His analysis of the coal economy acknowledged that government intervention — in the form of taxes, export restrictions, and conservation mandates — could in principle slow consumption. He doubted that any government would impose such restrictions as long as the economic incentive to consume remained strong. The same skepticism applies to organizational and individual resistance to the cognitive rebound effect. As long as organizations compete on output, and as long as individuals are rewarded for productivity, the structural incentives favor the paradox. The choice not to maximize output is, in competitive markets, a choice to fall behind.
But the AI economy may differ from the coal economy in one structural respect that creates space for the paradox to be reversed. Coal was an input to physical production, and physical production faced few limits on demand in the Victorian era — there was always more iron to smelt, more cotton to weave, more goods to transport. Cognitive output, by contrast, must ultimately be consumed by human minds, and human minds — unlike markets for physical goods — have genuine saturation points. There is a limit to how many reports a manager can read, how many features a user can learn, how many designs a client can evaluate, how many decisions an executive can make. When cognitive production exceeds cognitive consumption — when more is produced than can be meaningfully absorbed — the excess production has zero utility regardless of its cost. The Jevons paradox, in this scenario, breaks against the hard constraint of human absorptive capacity.
This is not a comfortable resolution. A world in which cognitive production outstrips cognitive consumption is a world of waste — vast quantities of competent output produced at near-zero cost and consumed by no one. But it is a resolution of sorts, because it suggests that the acceleration of cognitive consumption cannot continue indefinitely. The resource being consumed — human attention, human time, human absorptive capacity — is finite not merely in the biological sense of individual burnout but in the systemic sense of market saturation. At some point, there are more reports than readers, more features than users, more analysis than decisions to be informed. At that point, additional efficiency in production produces not rebound but redundancy.
Jevons's paradox, honestly applied, contains within itself the seed of its own limitation. The paradox depends on elastic demand, and demand for cognitive consumption, unlike demand for coal-derived energy in the nineteenth century, may ultimately prove inelastic at the systemic level. The mine may run dry not because the fuel is exhausted but because the furnaces have no more work to do. Whether humanity reaches that point through deliberate choice or through the sheer inability of human minds to absorb what the machines produce, the outcome is the same: the paradox encounters its boundary, and the question becomes not how much more can be produced but what is worth producing at all.
In the autumn of 1871, six years after *The Coal Question* had delivered its unwelcome news about efficiency and consumption, William Stanley Jevons published the work that would secure his place in the permanent architecture of economic thought. *The Theory of Political Economy* proposed something that sounds, at first hearing, almost trivially obvious: that the value of a good is determined not by the total satisfaction it provides but by the satisfaction provided by the last unit consumed. The tenth glass of water on a hot day is worth less than the first. The hundredth pair of shoes in a closet provides less pleasure than the pair that replaced bare feet. Jevons called this the "final degree of utility" — the increment of satisfaction derived from the final increment of consumption — and he argued, with the mathematical formalism that was his signature, that this marginal quantity, not any aggregate measure, governs all rational economic behavior.
The idea was not entirely original. Hermann Heinrich Gossen had articulated a version of it in 1854. Carl Menger and Léon Walras were developing parallel formulations in Vienna and Lausanne. But Jevons's contribution was distinctive in two respects. First, he insisted on expressing the theory in the language of calculus — as a continuous function whose derivative could be analyzed at any point — which gave marginal utility the mathematical precision that previous formulations had lacked. Second, and more consequentially for the present analysis, he connected marginal utility to the paradox he had already identified in resource consumption. The two ideas — the paradox of efficiency and the theory of marginal value — were not separate insights housed in different books. They were two faces of a single understanding of how economic systems behave when the cost of a resource changes.
The connection operates as follows. When efficiency improvements reduce the cost of a resource, they also reduce the cost of the services derived from that resource. The marginal cost of the next unit of coal-derived power falls. The marginal cost of the next unit of AI-derived cognitive output falls. Rational economic actors make decisions at the margin — they compare the marginal cost of an action with the marginal benefit, and they act whenever the benefit exceeds the cost. When the marginal cost falls, actions that were previously uneconomical become worthwhile. The factory owner who would not have run an extra shift at the old fuel cost will run it at the new fuel cost. The developer who would not have built an extra feature at the old development cost will build it when AI reduces the marginal effort to near zero. Each marginal decision is small. The aggregate effect is the Jevons paradox in full operation.
This marginal logic illuminates one of the most counterintuitive phenomena documented in *The Orange Pill*: the simultaneous explosion of cognitive output and collapse of perceived value per unit of output. Jevons's marginal utility theory predicts this dual movement with mathematical inevitability. As the quantity of any good increases, the marginal utility of each additional unit declines. This is not a cultural observation or a psychological speculation. It is a structural property of utility functions. When AI makes the production of cognitive artifacts — code, text, images, analysis, designs — cheap enough to produce in essentially unlimited quantities, the marginal utility of each additional artifact approaches zero. The ten thousandth AI-generated marketing image in a brand's asset library provides negligible incremental value. The fiftieth AI-drafted variation of a product description adds almost nothing to the forty-ninth.
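The marginal arithmetic can be made concrete with a toy model. The sketch below assumes an arbitrary logarithmic utility function — a standard textbook choice for diminishing returns, not anything Jevons specified — and shows both movements at once: as the marginal cost of production falls, the quantity a rational producer makes explodes, while the worth of the last unit produced collapses toward the cost floor.

```python
def marginal_utility(q, a=100.0):
    """Diminishing marginal utility of the q-th unit.

    Illustrative only: total utility u(q) = a * ln(1 + q), so the
    marginal utility of one more unit is roughly a / (1 + q).
    """
    return a / (1.0 + q)

def units_produced(marginal_cost, a=100.0):
    """A rational actor adds units while marginal benefit exceeds marginal cost."""
    q = 0
    while marginal_utility(q, a) > marginal_cost:
        q += 1
    return q

for cost in (50.0, 5.0, 0.5):  # efficiency drives the marginal cost down
    q = units_produced(cost)
    print(f"cost {cost:>5}: {q:>4} units produced, "
          f"last unit worth {marginal_utility(q - 1):.2f}")
```

With these particular numbers, dropping the marginal cost from 50 to 0.5 raises output from 1 unit to 199, while the value of the final unit falls from 100 to roughly 0.5 — total output and per-unit value moving in opposite directions, exactly as the two theories jointly predict.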
The implications bifurcate sharply. On one side, the total volume of cognitive output increases — this is the Jevons paradox at work. On the other side, the marginal value of each unit of output decreases — this is marginal utility theory at work. The two effects operate simultaneously, and their interaction produces the specific economic landscape that the Orange Pill documents: a world drowning in cognitive artifacts of declining individual value, produced at accelerating rates by systems that cannot stop producing because the marginal cost of production has fallen below the marginal cost of deciding not to produce.
This last point deserves emphasis, because it captures something that Jevons understood about economic behavior that many contemporary analyses of AI miss. The decision not to act is itself an economic decision with costs. When AI reduces the marginal cost of building a feature to near zero, the opportunity cost of not building that feature becomes highly visible. The developer who could build it in two hours but chooses not to is, in economic terms, forgoing a nearly free benefit. The pressure to produce is not external coercion. It is the rational response to a cost structure in which production is cheap and abstention is expensive. Jevons saw this dynamic in Victorian coal economics: when running the engine cost almost nothing per hour, shutting it down cost the factory owner all the output he was forgoing. The engine ran around the clock not because the owner was greedy but because the economics of efficiency made not running it the irrational choice.
The Orange Pill captures this dynamic in human terms through the phenomenon of what might be called productive compulsion. The developer who cannot stop building. The entrepreneur who prototypes through the night. The writer who generates variation after variation because each additional draft costs so little effort that stopping feels like waste. Byung-Chul Han's diagnosis of the "burnout society" — a society exhausted not by external exploitation but by self-imposed productivity — finds its economic mechanism in Jevons's marginal analysis. The burnout is not irrational. It is the perfectly rational response to a cost structure in which the marginal cost of the next unit of work has been driven below the psychological cost of choosing rest. Jevons's framework does not merely describe this phenomenon. It predicts it, with the cold precision of a differential equation.
But marginal utility theory also predicts something more hopeful, or at least more complex, than simple exhaustion. As the marginal utility of abundant goods approaches zero, the marginal utility of scarce goods rises in relative terms. When AI-generated content is ubiquitous, the scarce resource is not content but attention — the finite human capacity to notice, to care, to be moved. When AI-generated code is ubiquitous, the scarce resource is not code but judgment — the capacity to determine what should be built in the first place. When AI-generated analysis is ubiquitous, the scarce resource is not analysis but wisdom — the capacity to know which analyses matter and which are noise.
Jevons's own marginal framework, applied rigorously, thus predicts a revaluation of human capacities that cannot be replicated by AI — not because those capacities are mystical or immeasurable, but because their scarcity gives them high marginal utility in an economy saturated with machine-generated cognitive output. The "final degree of utility" of authentic human creative expression increases precisely because the supply of machine-generated creative expression has driven the marginal utility of generic creative output toward zero. This is not a sentimental argument about the irreplaceable beauty of human creativity. It is a mathematical consequence of the utility functions Jevons described.
The data supports this prediction. *The Orange Pill* documents a consistent pattern in which the most economically valued cognitive work in the AI era is not the most technically proficient but the most distinctively human. The developer who commands premium compensation is not the one who writes the most code — AI does that — but the one who determines what code should be written. The designer whose work retains value is not the one who produces the most variations but the one whose aesthetic judgment filters the variations into something meaningful. The writer who survives the flood of AI-generated text is not the most prolific but the one whose voice, whose specificity, whose hard-won perspective on lived experience cannot be derived from a training corpus.
Jevons's framework thus produces a specific prediction about the AI-era labor market that contradicts the simple automation narrative. The automation narrative holds that AI will replace human cognitive labor in a linear process: first the routine tasks, then the complex ones, until humans are unemployed. Jevons's marginal utility theory predicts something different and stranger: AI will increase total cognitive output (the paradox), while simultaneously decreasing the marginal value of generic cognitive output (marginal utility decline), while simultaneously increasing the marginal value of distinctly human cognitive capacities (scarcity premium). The net effect on human workers depends on which effect dominates — and that depends on whether humans can identify and cultivate the capacities that remain scarce.
The question that Jevons leaves unanswered — because it was not his question, though his framework generates it — is whether the revaluation will happen quickly enough. Markets are efficient in the long run and brutal in the short run. The factory workers displaced by efficient steam engines in the 1840s did not benefit from the new jobs that steam-powered industrialization created in the 1870s. They suffered through a generation of dislocation. The coal miners whose livelihoods depended on inefficient extraction methods did not celebrate the economic growth that efficient extraction enabled. They lost their jobs. Jevons's paradox is a statement about aggregate systems, not about individual welfare, and the gap between systemic efficiency and individual suffering is one of the oldest and least resolved problems in economics.
*The Orange Pill* sits in this gap. Its documentation of the AI transition is simultaneously a story of unprecedented capability expansion — individuals building what previously required teams, non-technical creators producing functional software, imagination collapsing into artifact at the speed of thought — and a story of unprecedented displacement and anxiety. Jevons's framework holds both truths without resolving the tension between them. The paradox produces more total output. Marginal utility redistributes value toward scarcity. But the transition is not instantaneous, and the human beings caught in the transition are not mathematical abstractions. They are the coal workers of the cognitive revolution, and the question of what happens to them in the interval between the old equilibrium and the new one is not answered by any paradox, however elegant.
What Jevons contributed, ultimately, was not a solution but a diagnostic instrument of extraordinary precision. The paradox tells those who study it where to look: not at the efficiency gains themselves, which are real, but at the systemic responses to those gains, which are counterintuitive. The marginal utility theory tells them what to measure: not the total output, which will always increase, but the distribution of value within that output, which will shift toward scarcity. Together, the paradox and the theory form a framework for understanding the AI transition that is more rigorous and more honest than either the utopian or the dystopian narratives that dominate contemporary discourse.
The final degree of utility, in the end, is a concept that applies not only to goods and services but to frameworks of understanding themselves. In an era drowning in theories of AI — optimistic, pessimistic, technical, philosophical, self-serving, catastrophist — the marginal utility of one more theory is low. But a theory that was formulated a hundred and sixty years ago, tested across every major technological transition since, and confirmed by every data set the twenty-first century has produced carries a different kind of value. Its utility is not novelty. Its utility is truth — the specific, uncomfortable, empirically validated truth that efficiency does not liberate, scarcity does not vanish, and the market's response to a cheaper resource is always, without exception, to consume more of it.
William Stanley Jevons knew this. He documented it with mathematical precision and empirical rigor. He published it and watched the world ignore it. One hundred and sixty years later, the world is ignoring it again — this time not with coal but with the resource that makes human beings human: the capacity to think, to create, to attend, to care. The paradox is operating. The final degree of utility is shifting. And the question that Jevons left behind — whether humanity can learn to manage what its own efficiency unleashes — has become, in the age of artificial intelligence, the question upon which everything depends.
There is a passage in *The Coal Question* that Jevons's contemporaries read as hyperbole and that the twenty-first century must read as prophecy. "Coal in truth stands not beside but entirely above all other commodities," Jevons wrote. "It is the material energy of the country — the universal aid — the factor in everything we do. With coal almost any feat is possible or easy; without it we are thrown back into the laborious poverty of early times." Substitute "artificial intelligence" for "coal" and the sentence requires no other modification. AI in truth stands not beside but entirely above all other cognitive technologies. It is the material intelligence of the civilization — the universal aid — the factor in everything we think. With AI almost any cognitive feat is possible or easy; without it we are thrown back into the laborious limitations of the pre-digital age.
The substitution is not merely rhetorical. It identifies a structural homology between the role coal played in the industrial economy and the role AI plays in the cognitive economy. Both are general-purpose inputs — resources that do not serve a single industry or a single function but amplify productive capacity across every domain of human activity. Coal powered the railways and the factories and the steamships and the domestic hearths and the gasworks and the iron foundries. AI powers the code generation and the content creation and the scientific analysis and the medical diagnosis and the legal research and the financial modeling and the educational tutoring and the creative production. In Jevons's terminology, both are "universal aids" — resources whose efficiency-adjusted cost determines the productive capacity of the entire civilization.
The Jevons paradox, applied to a general-purpose input, produces consequences of a different order than when applied to a specialized resource. When a specialized resource becomes more efficient to use — a particular chemical reagent, a specific manufacturing tool — the rebound effects are contained within the industries that use that resource. But when a general-purpose input becomes more efficient, the rebound effects propagate through the entire economy. Every industry, every activity, every human endeavor that uses the input experiences the paradox simultaneously. Coal efficiency did not merely increase coal consumption in mining or manufacturing. It increased consumption in transportation, heating, lighting, metallurgy, agriculture, and warfare. AI efficiency will not merely increase cognitive output in software engineering. It will increase cognitive output in every domain where cognition is an input — which is to say, in every domain of human activity without exception.
This universality is what gives the current transition its vertiginous quality. Previous technological transitions — electrification, digitization, the internet — each amplified a specific dimension of human capability. Electrification amplified physical power and extended the productive day. Digitization amplified information storage and retrieval. The internet amplified communication and distribution. Each triggered its own version of the Jevons paradox within its domain: more electricity led to more consumption of electricity, more digital storage led to more data, more connectivity led to more communication. But AI amplifies the general capacity to think and create, and there is no domain of human activity in which thinking and creating are not inputs. The rebound effects have no natural boundary. They propagate everywhere.
Jevons understood this property of general-purpose resources, and it was the source of his deepest anxiety. His concern in *The Coal Question* was not that Britain would run out of coal tomorrow. It was that the exponential growth in consumption, driven by the very efficiency improvements that seemed to promise conservation, would eventually exhaust a finite resource. The question he posed to the nation was not technical but civilizational: What happens to a civilization that has built itself entirely around a resource it is consuming at an accelerating rate?
The twenty-first century faces a structurally identical question, but with a twist that Jevons could not have anticipated. The resource being consumed at an accelerating rate is not a fossil fuel buried in finite seams beneath the earth. It is human cognitive capacity — attention, creativity, judgment, emotional engagement — which is finite not in geological terms but in biological ones. There are only so many hours in a day. There are only so many thoughts a mind can sustain. There is only so much attention a human nervous system can allocate before it degrades. The Jevons paradox, applied to human cognition, does not merely predict increased consumption of a resource. It predicts the accelerating consumption of the resource that constitutes the self.
The data that *The Orange Pill* assembles is, from this perspective, a resource-consumption audit of the human mind under conditions of AI-enhanced efficiency. The findings are consistent with what Jevons would have predicted. Total cognitive output has increased. Work intensity has increased. The subjective experience of workers using AI tools is not one of liberation but of acceleration — more is possible, more is expected, more is produced, and the interval between cognitive exertions has compressed toward zero. The imagination-to-artifact ratio has collapsed, and what has expanded to fill the space is not leisure or contemplation but production. The mine is being worked around the clock.
But here the structural analogy between coal and cognition reaches its most important divergence, and it is a divergence that Jevons's framework, honestly applied, must acknowledge. Coal is consumed. Once burned, it is gone. The seams beneath Newcastle grew thinner with every ton extracted. Human cognitive capacity, by contrast, is not strictly consumed in the thermodynamic sense. It is depleted — fatigued, degraded, stretched thin — but it regenerates. Sleep restores it. Rest renews it. Meaning replenishes it, at least partially. The mine of human cognition is not inexhaustible, but neither is it a fixed deposit that can only diminish. It is something more complex: a renewable resource with a finite rate of renewal, being consumed at a rate that may or may not exceed its capacity to regenerate.
This distinction matters enormously for the policy implications of the Jevons paradox in the AI era. Jevons's original conclusion about coal was bleak: efficiency could not save Britain from eventual exhaustion of a finite resource. The best that could be done was to use the period of abundance wisely — to invest coal wealth in assets that would survive the coal — and to resist the comforting illusion that efficiency meant sustainability. The conclusion about human cognition may be somewhat less bleak, but only if the renewable character of cognitive capacity is actively protected. If the Jevons paradox drives cognitive consumption past the rate of cognitive renewal — past the point where sleep and rest and meaning can replenish what the accelerating demands of AI-enhanced production deplete — then the resource is exhausted, not geologically but psychologically. Burnout, in this framework, is not a personal failure. It is resource exhaustion in a system where the rate of extraction has exceeded the rate of regeneration.
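The stock-flow contrast between coal and cognition can be made concrete with a toy simulation. The numbers below are purely illustrative — arbitrary units, not a model of any real mind or any measured regeneration rate — but they show the knife's edge the argument describes: the same reserve and the same renewal rate are sustainable or exhausted depending entirely on whether extraction outpaces regeneration.

```python
def days_until_exhaustion(stock, regen_per_day, extraction_per_day, max_days=10_000):
    """Simulate a renewable stock with a finite rate of renewal.

    Returns the day the stock is exhausted, or None if extraction
    never outruns regeneration within the simulated horizon.
    """
    for day in range(1, max_days + 1):
        stock += regen_per_day - extraction_per_day
        if stock <= 0:
            return day
    return None

# Same reserve, same regeneration; only the extraction rate differs.
print(days_until_exhaustion(100, regen_per_day=8, extraction_per_day=8))   # sustainable: None
print(days_until_exhaustion(100, regen_per_day=8, extraction_per_day=10))  # exhausted on day 50
```

The point of the sketch is the asymmetry: extraction at or below the renewal rate can continue indefinitely, while even a modest excess — two units a day against a reserve of one hundred — exhausts the stock on a fixed, calculable schedule. Burnout, in this framing, is the second trajectory mistaken for the first.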
Byung-Chul Han's diagnosis of the "burnout society" converges here with Jevons's resource economics in a synthesis that neither thinker could have produced alone. Han identifies the cultural and psychological mechanism: a society that has internalized the imperative to produce, that has replaced external coercion with self-exploitation, that burns itself out through its own achievement drive. Jevons identifies the economic mechanism: efficiency improvements that reduce the cost of production, increase total consumption, and drive the rate of resource use past sustainable levels. Together, they describe a system in which AI-enhanced cognitive efficiency produces precisely the civilizational crisis that Jevons warned about with coal — not the exhaustion of a geological deposit but the exhaustion of the human beings whose cognitive capacity is the resource being mined.
The question Jevons left for the twenty-first century is therefore not whether the paradox applies to AI and cognition. The evidence is overwhelming that it does. The question is whether the partially renewable character of human cognitive capacity provides an escape route that coal, being non-renewable, could not offer. The answer depends on choices that are not economic but ethical — choices about how much cognitive extraction a civilization is willing to impose on its members, about whether the gains from AI-enhanced efficiency will be invested in human renewal or simply reinvested in further extraction, about whether the twenty-first century will treat human attention and creativity as inexhaustible mines to be worked ever harder or as renewable resources to be managed within the limits of their regeneration.
Jevons, characteristically, offered no easy answers. His method was diagnostic, not prescriptive. He told Britain what was happening with coal. He did not tell Britain what to do about it. The nation chose to continue consuming at accelerating rates, to build an empire on cheap energy, and to deal with the consequences when they arrived. The consequences arrived. The coal age ended not because the coal ran out but because a more efficient energy source — petroleum, then nuclear, then renewables — took its place, each triggering its own version of the paradox, each expanding total energy consumption even as per-unit efficiency improved.
The cognitive coal age may follow a similar trajectory. AI may not exhaust human cognitive capacity permanently. It may simply drive consumption past sustainable levels until some new arrangement emerges — perhaps a division of cognitive labor between humans and machines that stabilizes at a new equilibrium, perhaps a cultural transformation that redefines the relationship between efficiency and flourishing, perhaps a political intervention that imposes limits on cognitive extraction the way environmental regulations imposed limits on physical resource extraction. The paradox does not determine the outcome. It determines the dynamics. The outcome depends on what human beings choose to do with the knowledge that Jevons provided: that efficiency, left to its own devices, will always consume more than it saves, will always demand more than it liberates, and will always mistake acceleration for progress.
The mine of human cognition is not inexhaustible. But it is renewable, if the civilization that depends on it chooses renewal over extraction. Whether that choice will be made — whether it can be made, in an economic system optimized for the very consumption patterns the Jevons paradox describes — is the coal question of the twenty-first century. Jevons asked it first. The answer belongs to those who are living through it now, building with AI tools that make everything possible and nothing certain, amplifying their capacities and depleting their reserves, producing more than any previous generation has produced and resting less than any previous generation has rested.
The paradox is operating. The mine is open. The question — Jevons's question, the original question, the only question — is how long the extraction can continue before the miners understand that what is being extracted is themselves.
I first encountered the Jevons paradox the way most people encounter it — by living it before I had a name for it.
It was 2024, and I had just spent an entire weekend building something with Claude that would have taken a team of three engineers six weeks. I was euphoric. I was exhausted. And I was already planning the next build, because if I could do that in a weekend, imagine what I could do in a week. Imagine what I could do in a month. The efficiency was intoxicating. The appetite it created was insatiable. I told myself I was being productive. I told myself I was being liberated. I was being consumed.
When I read Jevons — really read him, not a summary but the original *Coal Question*, the Victorian prose and the meticulous production tables and the quiet, devastating logic — I felt the specific shock of recognition that comes from seeing your own behavior described by someone who died in 1882. He had never seen a computer. He had never imagined artificial intelligence. But he had identified the mechanism that was eating my weekends, my sleep, my capacity for the kind of unstructured thought that produces the ideas worth building in the first place. Efficiency does not create rest. Efficiency creates appetite. I was living proof.
The thing about Jevons that stays with me is not the paradox itself — though the paradox is among the most important insights in the history of economic thought. It is his honesty. He loved progress. He celebrated efficiency. He was a Victorian optimist who believed in the power of technology and industry to improve human life. And he looked at the data and told his country the truth: that the thing they believed would save them was the thing consuming them. He did not flinch. He did not hedge. He published the finding and accepted the loneliness that comes from telling people something they do not want to hear.
I think about that loneliness when I write about AI. *The Orange Pill* is, in many ways, a book about the exhilarating and terrifying experience of watching a Jevons paradox unfold in real time — not on coal, not on electricity, but on human thought itself. Everything I have documented in this project — the collapsing imagination-to-artifact ratio, the productive addiction, the democratization of capability, the burnout, the sense that we are building faster than we can understand what we are building — all of it fits the pattern Jevons identified a hundred and sixty years ago with the precision of a key in a lock.
But Jevons also gives me something I did not expect: hope. Not easy hope. Not the hope that says everything will work out. The rigorous hope that comes from understanding a mechanism well enough to intervene in it. If you know that efficiency creates appetite, you can choose to manage the appetite rather than be managed by it. If you know that the marginal cost of the next build approaching zero makes not-building feel expensive, you can recognize that feeling for what it is — a price signal, not a moral imperative — and choose rest anyway. If you know that the resource being consumed is your own cognition, your own attention, your own finite and precious capacity to think thoughts that are not optimized for output, you can protect that resource the way a wise nation might protect its coal reserves: not by refusing to use them, but by using them with the knowledge that they are not infinite.
Jevons could not save Victorian England from the coal question. But he gave England the language to understand what was happening to it. That is what I have tried to do with this book — to give us the language to understand what is happening to us. Not to stop it. Not to reverse it. To understand it well enough that we can make choices rather than simply be carried along by the current of our own efficiency.
The mine is open. The paradox is operating. The resource being extracted is us.
The question is what we choose to do now that we know.
-- Edo Segal