By Edo Segal
The calculation I never ran was the one about myself.
I have spent my entire career investing. Not in stocks — in skills, in teams, in the accumulated intuition that comes from decades of building technology products. I never thought of it as investment. I thought of it as becoming. You learn a language, a framework, a way of seeing systems. You debug for ten thousand hours and something deposits in your nervous system that no documentation can transmit. You call it experience. You call it expertise. You call it who you are.
Gary Becker would call it capital. And capital depreciates.
That single reframing hit me harder than any philosophical argument about smoothness or friction or the nature of consciousness. Because depreciation is not a metaphor. It is a rate. It has a schedule. And in the winter of 2025, the schedule accelerated so violently that millions of knowledge workers felt the ground shift — not as an abstraction but as a recalculation happening in real time inside their own careers.
Becker was an economist who applied the logic of investment and return to domains where no economist had looked before: families, crime, discrimination, addiction. He treated human beings not as bundles of sentiment but as agents making calculations — often unconsciously, often imperfectly, but always responding to incentives. The framework sounds cold until you realize it explains, with uncomfortable precision, why some engineers ran for the woods while others leaned into the fight. Both responses are rational. They just reflect different portfolios.
This book takes Becker's framework and aims it directly at the AI transition. What happens to the return on deep expertise when a machine can replicate it in seconds? What happens to the training pipeline when the entry-level work that produced experienced judgment is automated away? What happens inside a household when the shadow price of productive work drops to near zero and every minute of rest feels like waste?
These are not abstract questions. They are the questions parents ask at kitchen tables. They are the questions engineers ask at three in the morning. They are the questions I asked myself in Trivandrum, watching twenty people recalculate everything they thought they knew about their own value.
Becker gives you a calculator for a moment most people are navigating by feel. The calculator does not tell you what to care about. It tells you what the caring costs, what it returns, and where to invest when the old returns have collapsed. That clarity is a gift — an uncomfortable one, but a gift.
The count matters. Here is how it works.
-- Edo Segal & Opus 4.6
Gary Becker (1930–2014) was an American economist who fundamentally expanded the boundaries of economic analysis by applying the rational-choice framework to domains traditionally considered outside the discipline's reach. Born in Pottsville, Pennsylvania, he earned his doctorate at the University of Chicago under Milton Friedman and spent the majority of his career at that institution, where he held joint appointments in economics and sociology. His landmark works include *Human Capital* (1964), which formalized education and training as capital investment with measurable rates of return and depreciation; *The Economics of Discrimination* (1957), which demonstrated that prejudice imposes quantifiable costs on discriminators; and *A Treatise on the Family* (1981), which modeled household decisions — marriage, fertility, the allocation of time — through the lens of production theory. With Kevin Murphy, he developed the theory of rational addiction, arguing that even self-destructive consumption patterns can be understood as forward-looking optimization under specific discount rates. Becker was awarded the Nobel Memorial Prize in Economic Sciences in 1992 and the Presidential Medal of Freedom in 2007. His insistence that economic reasoning illuminates all human behavior — not just market transactions — made him one of the most influential and controversial social scientists of the twentieth century.
The most expensive thing a knowledge worker owns is not her house. It is not her retirement account, her car, or her degree — though the degree is closer to the truth than most people realize. The most expensive thing she owns is invisible, uninsurable, and utterly nontransferable. It is the accumulated human capital inside her skull: the years of training, practice, trial, error, and slow-deposited intuition that constitute the single largest investment she will ever make, and whose returns she will collect — or fail to collect — for the rest of her working life.
Gary Becker did not invent the idea that education is an investment. But he formalized it with a rigor that transformed how governments, firms, and individuals understand the economics of skill. His 1964 book *Human Capital* took what had been a loose intuition — that learning pays off — and rebuilt it as a precise analytical framework, complete with rates of return, depreciation schedules, and the brutal logic of opportunity cost. Before Becker, economics had largely treated education as consumption: a good that people purchased because they enjoyed it, the way they purchased theater tickets or vacations. Becker demonstrated that education is capital formation. The student sitting in a lecture hall is not consuming knowledge. She is building an asset — one that will generate returns, in the form of higher wages and greater productivity, compounded over decades.
The distinction matters because capital has properties that consumption does not. Capital depreciates. Capital can be rendered obsolete. Capital requires maintenance. And capital has a rate of return that the investor compares, consciously or not, against every alternative use of the same resources. The years a medical student spends in residency are not just years of learning. They are years of forgone earnings — the salary she could have collected if she had entered the workforce with her undergraduate degree. The tuition is the visible cost. The forgone earnings are the invisible cost, and they are usually larger. Becker's framework insists that both be counted, because the rational agent — the agent who maximizes utility subject to constraints — counts both whether she knows she is counting or not.
This framework, developed in the relative stability of the postwar American economy, contained a prediction that Becker never had to confront. He died on May 3, 2014, eighteen months before AlphaGo defeated a human champion at Go, four years before GPT-2 demonstrated that language models could generate coherent prose, and eight years before ChatGPT reached one hundred million users in two months and initiated the period of technological vertigo that Edo Segal calls the orange pill moment. Becker never saw the machines learn to speak human language. He never watched a senior engineer recalculate the value of twenty years of expertise in real time. He never heard a twelve-year-old ask her mother, "What am I for?"
But his framework predicts, with uncomfortable precision, every one of those behaviors. Because the framework does not require that the investor be aware of the calculation. It requires only that the incentives operate. And the incentives are operating now, at a speed and scale that would have fascinated Becker and might have alarmed even him.
The prediction is embedded in the logic like a delayed-action charge. Human capital theory explains not only why people invest in skills. It explains when they stop. The rational individual invests in human capital when the expected return exceeds the expected cost. The expected return is the wage premium the skill commands in the marketplace, discounted over the remaining years of a working life. The expected cost is the sum of direct expenses (tuition, training materials, tools) and opportunity costs (the earnings and experiences forgone during the investment period). When those two quantities are in balance, the investment is made. When the expected return falls below the expected cost, the investment is not made. Not out of laziness. Not out of despair. Out of the same maximizing logic that drove the investment in the first place.
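The condition can be written compactly. What follows is a paraphrase of the standard human-capital investment rule, not a formula lifted from Becker's text:

$$\text{invest} \iff \sum_{t=1}^{T} \frac{\Delta w_t}{(1+r)^t} \;>\; C_{\text{direct}} + C_{\text{forgone}}$$

Here $\Delta w_t$ is the wage premium the skill commands in year $t$, $T$ is the remaining working life, $r$ is the discount rate, and the right-hand side sums the direct and opportunity costs. Every argument in this chapter is a perturbation of this inequality: AI compresses $\Delta w_t$, shortens the effective $T$ over which any skill pays, and leaves the costs largely where they were.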
Consider what happens when a technology arrives that compresses the return on a particular form of human capital. The compression does not need to be total. It does not need to eliminate the skill. It needs only to reduce the wage premium that the skill commands — to make the skill less scarce, or the output of the skill less valuable, or both. When this happens, the rational agent at the margin — the person deciding whether to invest the next year of her life in building that capital — recalculates. If the recalculation shows a lower return, she invests less. If it shows a substantially lower return, she invests elsewhere. And if millions of agents perform the same recalculation simultaneously, the supply of that particular form of human capital contracts.
This is not a hypothetical scenario. It is a description of what began happening in the winter of 2025.
The engineers that Segal describes in *The Orange Pill* — the ones running for the woods, lowering their cost of living in anticipation of lost income — are performing exactly the calculation Becker's framework predicts. They have spent years, sometimes decades, building domain-specific human capital: mastery of particular programming languages, frameworks, deployment architectures, debugging intuitions that live in the body as much as in the mind. This capital was expensive to build. It required the years of tedious, formative struggle that Segal, following Byung-Chul Han, calls friction — the hours of debugging, the failed deployments, the slow accretion of understanding that no documentation can transmit. The investment was rational at the time it was made, because the market rewarded it handsomely. A senior software engineer in 2020 commanded a salary premium that reflected the scarcity of her accumulated expertise.
Then the return collapsed. Not because the expertise became wrong, but because the expertise became abundant. When Claude Code can produce competent implementations in minutes, the scarcity premium on the human capacity to produce those same implementations erodes. The skill has not changed. Its market price has. And Becker's framework is clear about what happens next: the rational agent reduces her investment in the depreciating asset and redirects it toward assets whose returns have not yet been compressed.
The engineers leaning into AI tools — the ones Segal describes as choosing fight over flight — are performing the complementary calculation. They are not doubling down on the old capital. They are reallocating. They are investing in a different form of human capital: the capacity to direct AI, to exercise judgment about what should be built, to integrate across domains that were previously separated by the cost of translation. This capital is new, its returns are uncertain, and the investment is risky. But the rational agent does not require certainty. She requires only that the expected return on the new investment exceed the expected return on maintaining the old one. And for a growing number of agents, the comparison is not close.
Becker's distinction between general and specific human capital illuminates why this transition is so painful for some and so liberating for others. General human capital consists of skills and knowledge that are portable across employers and contexts: communication, reasoning, the capacity to learn new domains quickly, the judgment that Segal calls taste. Specific human capital consists of skills and knowledge that are valuable only within a particular firm, industry, or technological context: mastery of a proprietary codebase, expertise in a framework that may be superseded, deep familiarity with the authority structures and institutional knowledge of a specific organization.
Becker demonstrated that specific capital commands a premium precisely because it is not portable. The firm values it because it cannot be replicated by hiring someone off the street. The worker values it because it represents a relationship — an accumulation of knowledge about this particular system, this particular team, this particular way of doing things — that took years to build and cannot be transferred. But the non-portability that makes specific capital valuable in a stable environment makes it catastrophically vulnerable in a transition. When the firm changes, the codebase is rewritten, or the framework is superseded, the specific capital evaporates. The years of investment yield no return because the market for that particular form of expertise no longer exists.
Artificial intelligence is depreciating specific human capital at a rate that has no precedent in the history of technology. The senior engineer's debugging intuition — her ability to feel that something is wrong in a codebase she has maintained for years, the embodied knowledge that Segal compares to a doctor feeling a pulse — is the purest form of specific capital. It is valuable in one context and worthless in every other. When AI can debug the same codebase in seconds, the market price of that intuition drops to zero. Not because the intuition is wrong. Because the intuition is no longer scarce.
Meanwhile, general human capital — the capacity to think clearly about what should be built, to evaluate whether a product serves its users well, to make judgment calls that integrate technical, ethical, and commercial considerations — is appreciating. Its return is rising precisely because everything around it is becoming cheaper. When execution is abundant, judgment becomes the bottleneck. And judgment, in Becker's taxonomy, is the most general form of human capital there is: portable across industries, applicable to any domain, and resistant to automation precisely because it requires the integration of context, values, and stakes that no current machine possesses.
The twelve-year-old who asks "What am I for?" is, in Becker's framework, a prospective investor surveying the market. She is asking which forms of human capital will generate returns over the forty or fifty years of her working life. The traditional answer — invest in a specific skill, build depth in a domain, become the person who can do the difficult technical thing — is the answer that the market rewarded for the entire industrial and information ages. Becker's own career was built on formalizing why this answer made economic sense. The skill was expensive to acquire, which made it scarce. The scarcity generated a premium. The premium justified the investment. The cycle was self-reinforcing.
The cycle is breaking. Not everywhere, not all at once, but at the margins where the future is being priced. The twelve-year-old's question is not existential in the way a philosopher might use the word. It is economic. It is the most important investment question of her lifetime: Where should she put the next decade of her cognitive capital?
Becker's framework does not provide the answer. It provides something more useful: the structure within which the answer must be sought. The answer will come from watching what the market rewards, which forms of capital appreciate and which depreciate, and adjusting the investment accordingly. But the framework also reveals something the market alone cannot show: the collective consequence of millions of individuals performing the same rational calculation.
If the return on deep, domain-specific expertise falls below the return on broad, integrative judgment, rational agents will redirect their investment. The supply of deep expertise will contract. And the contraction will be invisible at first, because the people who already possess that expertise will continue to exercise it, drawing on the capital they built before the returns shifted. The contraction shows up in the pipeline — in the twenty-two-year-olds who look at the arithmetic and decide not to spend seven years building the expertise their predecessors built, because the arithmetic no longer supports it.
Luis Garicano, an economist at the London School of Economics, has formalized this as the "AI-Becker problem." In Becker's original framework, firms underinvest in general training because of poaching: if a firm trains a worker in portable skills, a competitor can hire the trained worker away, capturing the return without bearing the cost. The market solves this through bundling — entry-level workers are paid less than their productivity warrants, and the difference constitutes an implicit tuition payment that the firm recoups before the worker moves on. The apprentice generates value while learning. The firm profits from the bundled arrangement. The worker acquires capital.
AI shatters the bundle. When entry-level tasks can be performed by machines, the junior worker no longer generates the revenue that subsidized her training. The firm still needs experienced workers — needs them more than ever, because judgment has become the bottleneck. But the firm has lost the mechanism by which experienced workers were produced. The pipeline that converted junior labor into senior expertise has been disrupted at its source.
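The arithmetic of the bundle, and of its breaking, fits in a few lines. A toy sketch in Python follows; every figure in it is an invented illustration, not an estimate from Garicano or from any firm's books.

```python
# Toy model of Becker's training bundle and its failure under AI.
# All figures are illustrative assumptions, not estimates.

def firm_npv_of_junior(output_by_year, wage_by_year, discount_rate=0.05):
    """Present value, to the firm, of employing one junior worker."""
    return sum(
        (out - wage) / (1 + discount_rate) ** t
        for t, (out, wage) in enumerate(zip(output_by_year, wage_by_year), start=1)
    )

# Old regime: the junior produces revenue while learning and is paid
# below her output -- the gap is the implicit tuition the firm recoups.
output_old = [90_000, 110_000, 130_000]   # annual revenue product
wage_old   = [70_000,  80_000,  95_000]

# AI regime: a tool performs the entry-level tasks for a nominal cost,
# so the junior's incremental revenue product collapses in early years.
output_ai = [15_000, 40_000, 130_000]     # value net of the AI alternative

print(f"old bundle NPV: {firm_npv_of_junior(output_old, wage_old):>11,.0f}")
print(f"AI-era NPV:     {firm_npv_of_junior(output_ai,  wage_old):>11,.0f}")
# Positive NPV sustains the apprenticeship; negative NPV dissolves it --
# even though the firm still needs the senior this junior would become.
```

The sign of that NPV is the whole story: positive, and the apprenticeship funds itself; negative, and no locally rational firm will run one.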
This is not a small adjustment within an otherwise stable system. It is a structural failure in the mechanism by which human capital is produced. And it is perfectly rational. Every firm that replaces an entry-level worker with an AI tool is making the locally optimal decision: the tool is cheaper, faster, more reliable. The collectively irrational outcome — a society that needs experienced judgment and has stopped producing the conditions under which experience is acquired — emerges from a million individually rational choices.
Becker would recognize the structure. It is a coordination failure, the kind of problem that markets alone cannot solve because the costs are external to the decision-maker. The firm that eliminates its training pipeline does not bear the cost of the expertise that will not exist in ten years. That cost is borne by the economy as a whole, by the future workers who will lack the mentors who were never trained, and by the organizations that will one day discover they need the very capacity they stopped cultivating.
The most expensive thing a knowledge worker owns is the accumulated capital inside her skull. Becker's framework explains why she built it, how she built it, and what it was worth.
Now the framework explains something else: why the value is falling, why the response is rational, and why the rational response may lead somewhere the rationality cannot see.
---
In 1965, Gary Becker published a paper in The Economic Journal that did something no economist had done before: he put a price on time. Not a wage. A shadow price — the true cost of producing something when every input is accounted for, including the one input that economics had treated as invisible: the hours of a human life.
The paper, "A Theory of the Allocation of Time," began with an observation that sounds obvious and is not. Consumption takes time. Reading a book requires not just the purchase price of the book but the hours spent reading it. Cooking a meal requires not just the cost of ingredients but the time to prepare them. Attending a concert requires not just the ticket but the evening. Every act of consumption is simultaneously an act of time expenditure, and time, unlike money, cannot be earned, saved, or borrowed. It can only be spent.
Becker's insight was that the true cost of any activity — its shadow price — is the sum of its market cost and the opportunity cost of the time it consumes. A home-cooked meal costs the ingredients plus the hours of preparation valued at whatever the cook could have earned in those hours. When a surgeon's time is worth five hundred dollars an hour, the shadow price of her home-cooked dinner is astronomical, regardless of how cheap the ingredients are. This is why surgeons eat out. Not because they cannot cook, but because the shadow price of cooking is, for them, irrational. The model predicts that as wages rise, individuals substitute market goods for time-intensive home production. They eat at restaurants. They hire cleaners. They purchase convenience. The prediction matches the data with the quiet consistency that characterized all of Becker's best work.
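In the paper's terms, loosely rendered, the shadow price $\pi_i$ of an activity combines its goods cost with its time cost:

$$\pi_i = p_i x_i + w\, t_i$$

where $x_i$ is the quantity of market goods the activity consumes, $p_i$ their price, $t_i$ the hours it absorbs, and $w$ the wage those hours could have earned. For the surgeon's dinner, taking twenty dollars of ingredients and ninety minutes of cooking as illustrative figures, the shadow price is $20 + 1.5 \times 500 = \$770$. The ingredients are a rounding error.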
The shadow price framework transforms how we understand what Segal calls the imagination-to-artifact ratio — the distance between a human idea and its realization. Before AI, the shadow price of a working software prototype was staggering. Consider the inputs: months of a developer's time, valued at the market wage for that skill level, plus the opportunity cost of everything the developer could have built instead, plus the cognitive overhead of translation — the hours spent converting a human intention into a language the machine could parse. The shadow price included not just the direct cost of the labor but the indirect cost of every failed attempt, every debugging session, every moment of friction between the builder's vision and the tool's requirements.
Becker's framework reveals that the friction Segal and Han discuss is, in economic terms, a component of the shadow price. The hours of debugging that deposited layers of understanding in the developer's nervous system were not just formative. They were expensive. They consumed time that could have been allocated to other activities — other products, other problems, other forms of capital accumulation. The friction had a cost, and the cost was measured not in frustration but in forgone alternatives.
When AI collapsed the shadow price of cognitive output — when Claude Code reduced the time cost of producing a working prototype from months to hours — the effect rippled through the entire allocation system. The hours that were previously consumed by translation and implementation were released. The developer's time, which had been bound to the mechanics of code production, became available for other uses. The shadow price of a prototype dropped so sharply that the quantity demanded — the number of people attempting to build, the range of projects undertaken, the sheer volume of things attempted — exploded. This is what adoption at the speed of recognition looks like through Becker's lens. The pent-up demand was always there, suppressed by a shadow price that rationed who could build. When the price fell, the demand surfaced.
But a shadow price reduction does not merely increase the quantity of the thing whose price has fallen. It restructures the entire allocation of time. Becker's model treats the individual as a household production unit — a small factory that combines time and market goods to produce the commodities the individual actually values: health, prestige, pleasure, accomplishment, connection. When the price of one input falls, the production function shifts. Activities that were previously too expensive in time become feasible. Activities that depended on the expensive input become cheaper to produce. And — this is the critical mechanism — the freed time does not sit idle. It is reallocated to whatever activity now offers the highest marginal return.
This is precisely what the Berkeley researchers found. When AI tools reduced the time cost of specific tasks, workers did not use the freed time for leisure, reflection, or the kind of deep rest that is itself a form of capital maintenance. They used it for more work. Not because anyone told them to. Because the rational calculus, operating beneath conscious awareness, identified the next-highest-return activity and directed the freed time toward it. The tool had not freed them. It had changed the price structure, and they had responded to the new prices exactly as Becker's model predicted: by reallocating time toward whatever the market (or their own internalized valuation system) rewarded most.
Segal describes this as productive addiction — the compulsion to build that the AI tools both enable and intensify. Becker's framework strips the pathology from the description without diminishing its accuracy. The workers are not irrational. They are responding to a shadow price collapse that has made productive activity absurdly cheap in time terms. When producing something valuable takes minutes instead of hours, the opportunity cost of not producing it — of sitting idle, or resting, or staring out the window — feels enormous. Not because rest has become less valuable in absolute terms, but because the marginal return on the next productive act has risen so sharply that rest, by comparison, looks like waste.
The phenomenon extends beyond the workplace into the household, where Becker's production function framework takes on its most revealing dimensions. Becker treated the household not as a passive consumption unit but as an active production facility. Families do not consume food, shelter, education, and entertainment in the way economists traditionally modeled consumption — as the acquisition of market goods. They produce what Becker called "commodities": the complex goods that people actually want, which are produced by combining market goods with household time. A family dinner is not a meal. It is the combination of ingredients, cooking time, the time of every family member present, and the social interaction that occurs during the meal. The commodity is the dinner experience. The inputs are goods and time.
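Becker's 1965 setup, in schematic form: each commodity $Z_i$ is produced from goods and time, $Z_i = f_i(x_i, t_i)$, and the household maximizes utility over the $Z_i$ subject to a full-income constraint in which money and hours are fungible at the wage:

$$\sum_i \left( p_i x_i + w\, t_i \right) = V + wT$$

with $V$ non-labor income and $T$ the total endowment of time. The family dinner is a $Z_i$. The groceries are $x_i$. The evening is $t_i$.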
When AI enters the household production function, the relative prices shift. A parent who can use Claude to draft a legal document, design a lesson plan for her child, or research medical symptoms is substituting AI-assisted production for market-purchased specialization. The shadow price of producing these commodities falls. Activities that previously required hiring a professional — or spending hours developing the expertise to do them yourself — become feasible as household production. The household's production possibilities frontier expands, sometimes dramatically.
But the expansion is asymmetric. It flows toward activities that can be described in language and away from activities that cannot. AI excels at tasks that can be specified through natural language: research, drafting, analysis, code, design. It has little to contribute to the activities Becker identified as the core of household production: the relational work of maintaining bonds, the emotional labor of caregiving, the embodied presence that constitutes what a family actually is. These activities have always been the hardest to measure and the most essential to human flourishing. They are also the activities with the highest shadow price, because they cannot be time-compressed, outsourced, or automated.
Becker's framework generates a prediction that is simultaneously hopeful and alarming. The hopeful version: AI-driven reductions in the shadow price of cognitive tasks should free household time for relational work. If the model holds, parents should spend less time on tasks that AI can perform and more time on the irreducibly human work of being present with their families. The alarming version: the freed time will not stay freed. It will flow toward the next-highest-return activity, and in a culture that has internalized the achievement imperative — the psychopolitical condition Han describes as auto-exploitation — the next-highest-return activity is always more production.
The Berkeley data confirms the alarming version. Households in which AI tools were adopted did not report increases in family time or relational engagement. They reported increases in work — task seepage into evenings, weekends, the micro-gaps of waiting rooms and commutes. The shadow price of productive work had fallen so far below the shadow price of relational presence that the rational allocation was always, at the margin, to produce rather than to be.
This raises a question that Becker's framework poses but cannot answer on its own: Is there a market failure in the allocation of time to relational activities? Becker's model assumes that individuals maximize utility subject to constraints, and that the allocation that emerges from this maximization is, by definition, optimal for the individual performing it. If a parent chooses to spend an additional hour building with AI rather than reading to her child, the revealed preference is that the additional hour of building generates more utility than the additional hour of reading.
But revealed preference is a treacherous guide when the activity in question alters the preference structure itself. This is the mechanism Becker and Kevin Murphy identified in their theory of rational addiction: current consumption of an addictive good increases the marginal utility of future consumption, creating a feedback loop in which the rational agent finds herself consuming more and more, not because the consumption is making her happier in any absolute sense, but because each unit of consumption raises the baseline from which the next unit is evaluated. The addict is not irrational. The addict is maximizing on a path that the addiction itself has shaped.
The productive compulsion Segal documents — the developer who cannot stop, the spouse writing helplessly about a partner who has disappeared into the tool — is rational addiction in Becker's precise sense. The current building session produces real output, which raises the expected return on the next session, which makes the next session's opportunity cost (the cost of doing anything else) feel higher, which makes the decision to continue feel like the only rational choice. Each session reinforces the next. The feedback loop is self-sustaining and self-escalating.
And Becker's framework reveals the precise failure mode: the rational addict discounts future costs too heavily relative to present returns. The present return is visible, tangible, measurable — working code, a shipped feature, a client deliverable. The future cost is invisible, diffuse, and cumulative — the erosion of the relationship with the child who wanted a bedtime story, the atrophy of the attentional capacity that can only be maintained through rest, the slow degradation of the judgment that requires distance from the work to function properly.
The shadow price of everything has changed. AI has repriced the time cost of cognitive production so drastically that the old allocation patterns are no longer in equilibrium. The system is adjusting, and the adjustment is being performed by millions of rational agents simultaneously, each responding to the new prices in ways that Becker's framework predicts with clinical accuracy.
What the framework cannot do is tell those agents whether the new equilibrium will be one worth inhabiting. The shadow price of building has fallen. The shadow price of being present has not. The question is whether any structure — any institution, any cultural norm, any deliberate intervention — can redirect the freed time toward the activities whose value the market does not price but whose absence the market cannot survive. Becker would formulate this as a problem of externalities and public goods. He would build the model, state the assumptions, derive the predictions.
The predictions would be correct. Whether they would be comforting is another matter entirely.
---
Every asset depreciates. Physical capital rusts, corrodes, and is eventually scrapped. Financial capital can be inflated away or devalued by a market that has changed its mind about what it is willing to pay. Human capital depreciates too, though the mechanism is less visible and, for the person experiencing it, more devastating — because human capital is not something you own. It is something you are.
Becker understood depreciation as a structural feature of human capital, not an accident. Skills become obsolete. Knowledge is superseded. The market's demand for a particular capacity shifts, and the person who built their identity around that capacity discovers that the market no longer values what they spent a decade becoming. The depreciation is not personal. It is not a judgment on the quality of the investment or the intelligence of the investor. It is the impersonal operation of a price system that adjusts to new information about what is scarce and what is abundant.
AI is depreciating human capital at a rate that has no precedent in the modern economy. Not because AI is smarter than humans — a claim that obscures more than it reveals — but because AI is rendering specific forms of cognitive labor abundant. And abundance, in an economy organized around scarcity pricing, is the one thing that destroys value.
The mechanism is straightforward, even if its consequences are not. A senior software engineer commands a salary of two hundred thousand dollars or more, not because her time is intrinsically worth that amount, but because her accumulated expertise — the debugging intuition, the architectural judgment, the capacity to navigate complex codebases built up over years of patient practice — is scarce. Other people want what she can do and cannot easily do it themselves. The scarcity creates a premium. The premium justifies the investment she made in building the capital. The system is internally consistent and, within its own logic, fair: invest, build scarcity, command a premium, recoup the cost, repeat.
When AI tools can produce competent code in seconds — not expert code, not elegant code, but competent, functional, deployable code — the scarcity of competent code production evaporates. The premium that competence commanded disappears. And the engineer's human capital, measured by its market return, depreciates. Not because her knowledge has become less real. Because her knowledge has become less scarce.
Becker's distinction between general and specific human capital provides the sharpest analytical tool for understanding who is devastated by this shift and who is liberated by it.
Specific human capital, in Becker's taxonomy, is knowledge and skill that is valuable only within a particular context: a particular firm, a particular technology stack, a particular industry configuration. The engineer who has spent five years mastering a proprietary codebase possesses specific capital. So does the lawyer who knows every precedent in a narrow subspecialty. So does the accountant who has internalized the idiosyncratic reporting requirements of a single industry. This capital commands a premium because it is costly to reproduce — hiring someone new and waiting for them to accumulate the same contextual knowledge takes years. The specificity is the source of the value.
But the specificity is also the source of the vulnerability. Specific capital is not portable. When the context changes — when the firm is acquired, the technology stack is replaced, or, as is happening now, the entire category of work is automated — the capital evaporates. The years of investment yield no return because the market for that particular form of expertise no longer exists. The engineer who could feel the pulse of her codebase discovers that the codebase is being maintained by an AI that does not need to feel anything. Her intuition was not wrong. It was rendered unnecessary.
General human capital, by contrast, consists of skills and knowledge that are portable across employers, industries, and technological contexts. Communication. Reasoning. The capacity to learn new domains quickly. The judgment that allows a person to evaluate whether a project is worth pursuing, whether a team is functioning well, whether a product serves its users. Becker demonstrated that general capital is harder for any single firm to capture — a worker trained in general skills can take those skills to a competitor — but it is also more resilient to context shifts, precisely because it does not depend on any particular context for its value.
The AI transition is depreciating specific capital and appreciating general capital simultaneously. The debugging intuition, the framework mastery, the syntax expertise — these are specific. Their value was tied to a particular way of building software that is being superseded. The judgment about what to build, for whom, and why — this is general. Its value is not tied to any particular technology. It is tied to the capacity to make good decisions under uncertainty, a capacity that becomes more valuable, not less, as the range of possible decisions expands.
This creates a distributional pattern that is as predictable as it is painful. The workers most devastated by AI are the ones who invested most heavily in specific capital: the experts, the specialists, the deep practitioners who spent decades building knowledge that was valuable in one context and one context only. The workers most liberated are the ones whose capital was always general: the integrators, the generalists, the people who could move across domains because their value lay not in what they knew about any particular system but in how they thought about systems in general.
The cruelty of this distribution is that the experts did nothing wrong. Their investment was rational at the time it was made. The market rewarded specificity for decades. Deep expertise commanded premium salaries, prestigious positions, the respect of peers who understood how hard it was to build. The signals all pointed in the same direction: go deep. The deeper you go, the more you are worth.
The signals have reversed. And the reversal is not gradual. Becker's framework models depreciation as a continuous process — a slow erosion of value over time, like the physical depreciation of a machine. But AI-driven depreciation is discontinuous. It arrives as a step function: one month the skill is valuable, the next month a tool can replicate it. The senior engineer does not watch her expertise slowly become less relevant over a period of years. She watches it become less relevant over a period of weeks, as Claude Code ships an update that handles the class of problems she spent a decade learning to solve.
The step-function character of the depreciation makes rational adjustment extraordinarily difficult. Becker's model assumes that agents can observe the change in returns and adjust their investment accordingly. But when the change arrives as a discontinuity, the observation and the adjustment collapse into the same moment. There is no time to retrain, no period of gradual adaptation, no long curve of declining returns that provides advance warning. There is only the before and the after, and the after arrives with the casual speed of a software release.
The historical precedents that Segal traces — the Luddites, the scribes after Gutenberg, the accountants after VisiCalc — all exhibited the same pattern, but at slower timescales. The framework knitters of Nottingham had years, arguably decades, to observe the new machinery's ascent and adjust. Many did not adjust, for reasons Becker's framework explains (the sunk cost of existing capital, the high cost of retraining, the uncertainty about whether new investments would pay off). But the option existed. The timescale permitted it.
AI has compressed the timescale so far that the option barely exists for many workers. The accountant displaced by VisiCalc in 1979 could spend two years learning to use the spreadsheet and emerge as a more valuable professional — one who could now focus on judgment rather than calculation. The trajectory from displacement to redeployment took years, but the path was navigable. The developer displaced by Claude Code in 2026 faces a different calculus: the tool that displaced her is also improving at a rate that may compress the value of whatever she retrains into before the retraining is complete.
This is the distinctive economic feature of the AI transition. Previous technologies depreciated specific capital and created a window for reallocation. AI depreciates specific capital and simultaneously accelerates the rate at which any new specific capital will itself be depreciated. The rational agent, performing the expected-return calculation that Becker's framework requires, faces a moving target. She must invest in new capital whose value is uncertain, knowing that the same technology that destroyed her previous investment may destroy the new one before it generates a return.
Becker's framework predicts a rational response to this condition, and the response is troubling: the agent reduces her overall investment in human capital. If the expected lifespan of any particular skill is shrinking — if the depreciation rate is accelerating — the rational investment per skill decreases. The agent builds less depth, acquires knowledge more superficially, and holds it more loosely. This is not laziness. It is optimization under a regime of accelerating obsolescence. It is the same logic that leads a firm to prefer leasing equipment over purchasing it when technological change is rapid: do not invest heavily in an asset that will be worthless before the investment is recouped.
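A few lines of Python make the logic concrete. The figures are stylized assumptions; only the shape of the result matters.

```python
# How a skill's expected lifespan caps rational investment in it.
# Stylized numbers: the point is the shape, not the levels.

def skill_npv(annual_premium, useful_life_years, training_cost,
              discount_rate=0.05):
    """NPV of a skill that pays a wage premium until it obsolesces."""
    returns = sum(
        annual_premium / (1 + discount_rate) ** t
        for t in range(1, useful_life_years + 1)
    )
    return returns - training_cost

training_cost = 150_000    # tuition plus forgone earnings
annual_premium = 30_000    # yearly wage premium the skill commands

for life in (20, 10, 5, 3):
    npv = skill_npv(annual_premium, life, training_cost)
    verdict = "invest" if npv > 0 else "walk away"
    print(f"useful life {life:>2} yrs -> NPV {npv:>9,.0f}  ({verdict})")
# The same skill, at the same cost, flips from obvious to indefensible
# as the expected life of the capital shortens.
```

Nothing about the skill or the student changes across those four rows. Only the obsolescence clock does.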
Applied to human beings, this logic has consequences that Becker's framework can describe but that extend beyond the framework's emotional vocabulary. A society of individuals who rationally invest less in deep expertise because the returns are falling is a society that produces less deep expertise. The expertise does not exist in the abstract. It exists in people — in their nervous systems, their accumulated judgment, their capacity to recognize patterns that only years of immersion can teach. When the investment declines, the capacity declines. When the capacity declines, the society's ability to exercise the very judgment that the AI transition has made more valuable — the general capital of knowing what to build and why — is undermined, because general judgment is not built in a vacuum. It is built on a foundation of deep engagement with specific domains. The generalist who can integrate across fields is valuable precisely because she once went deep enough in at least one field to understand what depth feels like.
If the rational response to accelerating depreciation is to stop going deep, the foundation on which valuable generalism rests begins to erode. And the erosion is invisible, because the general capital continues to perform adequately — for now. The generalist draws on reserves of understanding built before the depreciation accelerated. But the reserves are not being replenished. The pipeline is drying at the source.
Becker's framework describes this as a market failure: the individual agent's rational response to depreciating returns produces a collective outcome — the undersupply of deep expertise — that no individual agent intended or desired. The cost is externalized across the economy, across time, across the future workers who will inherit a world in which the knowledge they need was not built because the incentives pointed elsewhere.
The returns have collapsed. The rational response is underway. And the rational response, left unmodified by institutional intervention, leads to a place that rationality alone cannot repair.
---
In the months following December 2025, a pattern emerged among experienced software engineers that Segal documented with the precision of a field observer watching a species respond to a sudden environmental change. Some engineers, often the most senior and most accomplished, began reducing their cost of living. They moved out of expensive cities. They paid off debts. They downsized. They talked about homesteading, about self-sufficiency, about building a life that could sustain itself on a fraction of their current income. They were, in the language of the moment, running for the woods.
Other engineers, equally senior and equally aware of what was happening, did the opposite. They leaned into the AI tools with an intensity that bordered on compulsion. They built faster, took on more ambitious projects, expanded the scope of their work to encompass domains they had never touched before. They were choosing fight over flight, and their output was extraordinary — often ten or twenty times what they had produced before. The work was exhilarating and, as Segal acknowledges, sometimes unsustainable.
The two groups looked at the same data and arrived at opposite conclusions. The runners saw a world in which their skills were being commoditized and rationally reduced their exposure. The fighters saw a world in which their skills, augmented by AI, were more powerful than ever, and rationally doubled down.
Both responses are rational. Becker's framework does not predict that all agents will respond the same way to a shift in returns. It predicts that all agents will respond in ways consistent with their individual constraints, risk preferences, and assessments of the probability distribution. The difference between the runner and the fighter is not a difference in rationality. It is a difference in the parameters of the optimization.
Consider the runner. She is fifty-two years old, with a mortgage, two children approaching college age, and twenty-five years of accumulated expertise in backend systems architecture. Her human capital is overwhelmingly specific — tied to a particular set of technologies, a particular way of building, a particular market that valued the kind of deep, narrow expertise she had spent her career developing. The AI transition has reduced the expected return on that capital. She performs the calculation that Becker's framework describes: the present value of her remaining career earnings under the old regime versus the present value under the new one. The difference is stark. Under the old regime, her expertise commanded a premium that would have sustained her family through retirement. Under the new one, the premium is eroding, and the rate of erosion is accelerating.
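Sketched in Python with invented parameters (the salary, the erosion rate, and the floor are assumptions chosen for illustration, not data), her calculation looks something like this:

```python
# The runner's recalculation: remaining career earnings, old regime
# versus new. All parameters invented for illustration.

def present_value(cash_flows, discount_rate=0.04):
    return sum(c / (1 + discount_rate) ** t
               for t, c in enumerate(cash_flows, start=1))

years_left = 13          # age 52 to a planned retirement at 65
salary = 220_000         # current compensation, premium included

# Old regime: the expertise premium holds through retirement.
old_path = [salary] * years_left

# New regime: the specific-capital premium erodes (say 12% a year)
# toward the floor that general skills alone still command.
floor, erosion = 90_000, 0.12
new_path = [max(floor, salary * (1 - erosion) ** t)
            for t in range(1, years_left + 1)]

print(f"PV, old regime: {present_value(old_path):>12,.0f}")
print(f"PV, new regime: {present_value(new_path):>12,.0f}")
# The gap between those two numbers is what turns 'lower the fixed
# costs' from panic into portfolio rebalancing.
```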
Her rational response is risk reduction. Lower the fixed costs. Eliminate the mortgage. Move to a place where the cost of living is a fraction of what it was. Build a life that can be sustained on a lower income, because the income trajectory has changed. This is not panic. It is portfolio rebalancing — the same logic that drives an investor to shift from equities to bonds when volatility increases. The runner is not fleeing from reality. She is adjusting to it.
Now consider the fighter. He is thirty-eight, unencumbered by a mortgage, with a general capital portfolio that includes not just technical skills but product judgment, cross-domain integration, and the capacity to communicate complex ideas to non-technical audiences. His human capital is disproportionately general — the kind that appreciates when specific capital around it depreciates. The AI transition has not reduced the expected return on his capital. It has increased it, because the tools have removed the friction that previously consumed eighty percent of his working hours and prevented him from applying his judgment to the full range of problems he was capable of addressing.
His rational response is investment amplification. Use the tools. Build faster. Take on more. Expand into domains that were previously inaccessible because the translation cost was prohibitive. The expected return on each hour of work has multiplied, and the rational agent responds by working more, not less, because the opportunity cost of leisure has skyrocketed. When every hour of work produces what ten hours produced before, the implicit price of not working — of resting, reflecting, reading, doing nothing — has risen by a factor of ten.
Becker's time allocation model captures this with uncomfortable precision. The shadow price of leisure rises when the productivity of work increases. The model predicts that highly productive workers will choose less leisure, not more, because the opportunity cost of each leisure hour — measured in forgone output — has risen. This is the economic logic beneath the productive compulsion that the Berkeley researchers documented and that Segal experiences firsthand: not a pathology of will, but a rational response to a price change.
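The mechanics are almost embarrassingly simple. If an hour of work once produced output worth $w$ and now produces $m \times w$, the implicit price of an hour of leisure, which is the output it displaces, rises from $w$ to $m\,w$. Nothing about leisure has changed. Its price has been multiplied by $m$.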
Yet the rationality of the response does not validate the outcome. This is where Becker's framework reaches its most important and most uncomfortable limit. Rational agents responding optimally to local price signals can produce collective outcomes that are globally suboptimal. The runner who retreats to the woods is making the best decision for herself and her family given her assessment of the landscape. But if tens of thousands of senior engineers make the same decision simultaneously, the collective result is a mass withdrawal of experienced judgment from the technology sector — precisely at the moment when experienced judgment is most needed to direct the AI tools wisely.
The fighter who works around the clock is making the best decision for himself given the extraordinary return on each marginal hour. But if tens of thousands of knowledge workers make the same decision simultaneously, the collective result is a workforce running at unsustainable intensity, eroding the cognitive and relational capital that sustains judgment over the long term. The individual optimization, performed correctly by each agent, produces a systemic failure that no individual agent intended.
This is a coordination problem, and Becker spent his career studying coordination problems in domains where economists had not previously looked. His work on crime demonstrated that criminal behavior responds to incentives — increase the expected cost of crime (through higher detection rates or stiffer penalties), and crime decreases. His work on discrimination demonstrated that discriminatory preferences carry economic costs — firms that discriminate against productive workers operate at a competitive disadvantage. In each case, the insight was that individual behavior, even behavior that appears irrational or immoral from the outside, responds to the incentive structure in which it is embedded. Change the structure, and you change the behavior.
The historical precedents teach the same lesson. The framework knitters of Nottingham were rational agents facing a collapse in the return on their specific human capital. Their response — breaking machines — was, in Becker's terms, an attempt to restore the old return structure by destroying the technology that had changed it. The response was rational in the narrow sense that it addressed the immediate cause of the return collapse. It was catastrophic in the broader sense that it accelerated the social and political forces arrayed against them. The machines were not stopped. The knitters were criminalized. Parliament passed the Frame Breaking Act of 1812, making machine destruction a capital offense. Fourteen Luddites were executed at York in 1813.
The knitters lost not because they chose wrong in the abstract. They lost because no institutional structure existed to redirect them toward the new returns. There was no retraining program. There was no transitional support. There was no mechanism by which the gains from mechanization — which were real and eventually enormous — could be redistributed to the people who bore the cost of the transition. The gains flowed to factory owners. The costs were borne by displaced craftsmen and their families. The redistribution that eventually occurred — the eight-hour day, the weekend, child labor laws, the entire apparatus of labor protection — took decades to build and required sustained political struggle.
Becker would frame this as a problem of institutional design. The market, left to itself, produces an efficient allocation of resources given the existing price structure. But efficiency and justice are not the same thing. The efficient outcome in Nottingham was the replacement of expensive hand labor with cheap machine labor. The just outcome — which took a century to approximate — required institutions that the market alone would never have produced.
The parallel to the present moment is precise. The market is producing an efficient reallocation of human capital away from AI-depreciating domains and toward AI-complementary ones. The reallocation is rational at the individual level. But the distribution of gains and losses is radically unequal, and the speed of the transition is outpacing the institutional structures that could redirect it.
The runners are performing a rational retreat from a market that no longer rewards their investment. The fighters are performing a rational intensification in response to a market that rewards their investment more than ever. Both responses are locally optimal. Neither is globally sufficient. And the difference between an outcome that serves human flourishing and one that produces a generation of displaced expertise and exhausted intensity is not a difference in individual rationality. It is a difference in institutional architecture — in the structures that redirect the river's flow after the terrain has shifted.
Segal's observation that the flight-or-fight dichotomy maps onto the most primal human stress response is more than a metaphor. Becker's framework adds the economic substrate: the amygdala fires, but the direction of the response is shaped by the agent's capital portfolio, risk tolerance, and assessment of future returns. The runner has more specific capital at risk and a longer time horizon of obligation. The fighter has more general capital to deploy and a shorter time horizon of debt. Both are mammals calculating under stress. Both are right about the facts. Both are wrong about the sufficiency of their response, because neither flight nor fight alone produces the institutional structures that the transition requires.
The VisiCalc transition is instructive here, not because it was painless — it was not — but because the institutional response was, by historical standards, relatively effective. The accountants displaced by spreadsheets did not disappear. They were retrained, often by the same firms that had adopted the technology, because the firms discovered that an accountant who understood both the old system and the new tool was more valuable than either a pure accountant or a pure spreadsheet. The retraining was not charity. It was rational investment by firms that recognized the complementarity between existing domain knowledge and new technological capability.
The firms were able to make this investment because the old system of entry-level training — the apprenticeship model in which junior accountants performed calculations manually while learning the judgment that would make them senior accountants — still functioned. The spreadsheet automated calculation, but it did not eliminate the entry-level role. Junior accountants still needed to understand the numbers in order to interpret the spreadsheet's output. The training pipeline survived the transition because the technology automated the routine while preserving the educational function of the routine.
Garicano's AI-Becker problem reveals why the current transition is structurally different. AI does not merely automate the routine component of entry-level work. It automates the entire entry-level task, eliminating the role that served as the training ground. The junior developer who would have spent two years writing basic code — learning, through the friction of that process, the architectural intuition that would eventually make her a senior developer — no longer has a role. The basic code writes itself. The firm has no economic reason to employ a junior developer for training purposes, because the junior developer is not generating the revenue that subsidized her training in the old model.
SignalFire's analysis of 650 million LinkedIn profiles confirms the pattern: new graduate hiring in technology roles declined by twenty-five percent between 2023 and 2025. The New York Federal Reserve reports that unemployment among recent college graduates has risen thirty percent since the pandemic, compared to eighteen percent for workers overall. The pipeline is thinning, and the thinning is rational — firms are making locally optimal decisions that produce a collectively suboptimal outcome.
Becker's framework identifies the structural remedy: change the incentives. If firms will not invest in training because the old bundling mechanism has broken, create new mechanisms. Subsidize apprenticeships. Create tax incentives for firms that maintain training pipelines. Build educational institutions that produce the general capital — judgment, integration, the capacity to ask the right question — that the market increasingly rewards but that no individual firm has the incentive to cultivate on its own.
These are dams. Not in the river of intelligence, but in the market itself. Structures that redirect the rational flow of individual decisions toward collectively beneficial outcomes. Becker would not use the language of dams or beavers. He would model the externality, specify the incentive structure, and derive the optimal intervention. But the conclusion would be the same: when rational agents responding to price signals produce a coordination failure, the remedy is not to override the rationality but to change the prices.
The runners are rational. The fighters are rational. The market that produced both responses is functioning exactly as Becker's framework predicts. What is missing is the institutional architecture that would make the rational individual response produce a rational collective outcome. Building that architecture is the most important economic task of this decade, and it is a task that neither the runners nor the fighters, acting alone and responding to local price signals, will perform.
It requires someone to build the dam. And building a dam is not a market transaction. It is a collective decision, made by institutions that are themselves the product of human capital — the accumulated judgment, political skill, and willingness to think beyond the next quarter that constitutes the rarest and most valuable form of general capital there is.
In 1988, Gary Becker and Kevin Murphy published a paper in the *Journal of Political Economy* that offended nearly everyone who read it. The paper was called "A Theory of Rational Addiction," and its central claim was that addicts are rational.
Not rational in the colloquial sense — not sensible, not wise, not making choices that a dispassionate observer would endorse. Rational in the precise economic sense: forward-looking agents whose current consumption decisions reflect an assessment, however distorted, of future costs and benefits. The heroin addict who injects today is not failing to consider tomorrow. He is considering tomorrow and discounting it — weighting the present benefit more heavily than the future cost, in a calculation that is internally consistent even if its outcome is self-destructive.
The outrage was immediate and sustained. Psychologists objected that addiction is a disease, not a choice. Sociologists objected that the model ignored the structural conditions — poverty, trauma, lack of opportunity — that make addiction more likely. Philosophers objected that calling self-destruction rational was an abuse of the word. Becker's response, as always, was not to argue but to test. The model generated predictions. Addicts should respond to expected future price increases by reducing current consumption — if they are truly forward-looking, the announcement of a future cigarette tax should reduce smoking today, before the tax takes effect. The data confirmed the prediction. The argument, as Becker preferred, ended not with persuasion but with accuracy.
The mechanism that makes the model work is what Becker and Murphy called adjacent complementarity. The current consumption of an addictive good increases the marginal utility of future consumption of the same good. Each cigarette makes the next cigarette more desirable. Each drink raises the baseline from which the next drink is evaluated. The complementarity is adjacent in time — today's consumption affects tomorrow's — and it creates a feedback loop that is self-reinforcing. The agent does not drift into addiction. The agent optimizes into it, following a path that the consumption itself has shaped.
The model also identifies the failure mode with clinical precision. The rational addict discounts future costs too heavily relative to present returns. The discount rate — the rate at which future consequences are weighted against present satisfactions — is the parameter that separates the addict from the moderate consumer. A high discount rate means the future matters less. The present benefit looms large. The future cost — health, relationships, the capacity for the non-addicted pleasures that constitute a full life — recedes into a haze of discounted insignificance.
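The structure of the model can be written down compactly. What follows is a minimal sketch in discrete time; Becker and Murphy work in continuous time, and the notation here is chosen for legibility rather than fidelity to their paper:

```latex
% Rational addiction, minimal discrete-time sketch.
% c_t: consumption of the addictive good; S_t: stock of past consumption
% ("consumption capital"); y_t: all other consumption; sigma: discount rate.
\begin{aligned}
&\max_{\{c_t\}}\ \sum_{t=0}^{\infty} \beta^{t}\, U(c_t,\, S_t,\, y_t),
  \qquad \beta = \frac{1}{1+\sigma} \\
&\text{subject to} \quad S_{t+1} = (1-\delta)\, S_t + c_t \\
&\text{adjacent complementarity:} \quad
  \frac{\partial^{2} U}{\partial c_t\, \partial S_t} > 0
\end{aligned}
```

Each claim in the surrounding paragraphs maps onto a parameter. "Each cigarette makes the next cigarette more desirable" is the positive cross-partial. "Discounting future costs too heavily" is a high sigma. And the addict's trajectory is the optimum of this program, not a deviation from it.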
This model, developed to explain heroin and cigarettes and alcohol, is the most precise analytical framework available for understanding what happened to knowledge workers in the winter of 2025.
The Substack post that went viral in January 2026 — "Help! My Husband Is Addicted to Claude Code" — was not a metaphor. The spouse was describing, with the accuracy of an unwitting economic diagnostician, a textbook case of Becker-Murphy rational addiction. Her husband was not wasting time. He was producing real output: code that worked, products that shipped, problems that were solved. The consumption was not destructive in the way heroin is destructive. It was productive. And the productivity was the trap.
Adjacent complementarity operates in productive addiction through the same mechanism by which it operates in substance addiction, but the reinforcement pathway is more insidious because it is socially validated. Each building session with AI produces tangible output — a feature, a prototype, a solution. The output generates satisfaction, professional recognition, and the specific pleasure of seeing an idea become real. This satisfaction increases the marginal utility of the next session, because the builder now knows what is possible and wants more of it. The baseline has shifted. The ordinary working day, the pre-AI rhythm of meetings and documentation and slow, collaborative iteration, now feels intolerably slow — not because it has become slower in absolute terms, but because the complementarity has recalibrated the builder's internal standard.
Segal describes this sensation with the honesty of a person who recognizes the pattern in himself. The exhilaration of the early sessions gives way to a grinding compulsion. The work continues not because it is satisfying but because stopping feels like a loss — a voluntary diminishment, a retreat from a capability that has become part of the self. The language is the language of addiction: unable to stop, lost track of time, confused productivity with aliveness.
Becker's framework strips the moral judgment from this description without diminishing its seriousness. The builder is not weak-willed. The builder is maximizing on a path that the consumption itself has shaped. Each session raises the expected return on the next session. Each shipped feature raises the stakes of the session after that. The opportunity cost of not building — of resting, reflecting, being present with family, doing nothing — rises with each unit of output, because the output keeps demonstrating how much more is possible.
The discount rate is the critical variable, and in productive addiction, the discount rate is pushed higher by two forces that do not operate in substance addiction. The first is social validation. The substance addict's consumption is stigmatized; the social environment pushes back against it, raising the perceived cost. The productive addict's consumption is celebrated. Colleagues admire the output. The industry rewards the intensity. The culture has internalized the equivalence between productivity and worth so thoroughly that the person who works fewer hours is not resting — she is falling behind. The social environment does not push back against productive addiction. It reinforces it, lowering the perceived cost of continuation and raising the perceived cost of stopping.
The second force is the visibility asymmetry between present returns and future costs. The present return on a building session with AI is immediately observable: working code, a visible feature, a measurable increment of progress. The future cost is not observable. It accumulates invisibly — the erosion of judgment that comes from operating without rest, the degradation of relationships that comes from chronic absence, the atrophy of the attentional capacity that requires fallow periods to maintain itself. The builder cannot see these costs until they have compounded beyond the point of easy reversal. And by the time they are visible, the complementarity has advanced so far that the cost of stopping — the withdrawal, the sense of diminishment, the terror of being left behind — exceeds the perceived cost of continuing.
Becker and Murphy's model predicts a specific behavioral signature for rational addiction that distinguishes it from mere habit: binge-and-crash cycles. The rational addict, operating under adjacent complementarity with a high discount rate, tends toward unstable equilibria. Small perturbations in consumption lead to large swings — periods of intense use followed by periods of collapse, because the feedback loop that drives escalation also drives the crash when the cost threshold is finally breached. The builder who works sixteen-hour days for three weeks and then spends a weekend unable to get out of bed is not exhibiting poor time management. She is exhibiting the characteristic dynamics of a system governed by adjacent complementarity with a high discount rate and a delayed cost function.
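The signature can be reproduced in a toy simulation. To be clear about what this is: not Becker and Murphy's model, just an illustrative dynamical sketch with invented parameters, in which consumption feeds a stock that makes the next session more attractive while a hidden cost accumulates unnoticed until it breaches a threshold:

```python
# Toy binge-and-crash dynamics. Illustrative only: all parameters invented.
# The stock S makes the next session more attractive (adjacent
# complementarity); the hidden cost H accumulates with use and stays
# invisible to the agent until it breaches a threshold, forcing a crash.

def simulate(days=120, crash_threshold=12.0):
    S, H = 0.0, 0.0        # consumption stock, hidden accumulated cost
    crash_days_left = 0
    history = []
    for _ in range(days):
        if crash_days_left > 0:
            c = 0.0                    # crash phase: no sessions
            crash_days_left -= 1
        else:
            c = 1.0 + 0.5 * S          # complementarity: stock raises today's use
        S = 0.8 * S + c                # stock depreciates, replenished by use
        H = max(H + 0.3 * c - 0.5, 0)  # cost grows with use, decays with rest
        if H > crash_threshold:
            crash_days_left = 7        # the cost becomes undeniable: collapse
            H = 0.0
        history.append(round(c, 2))
    return history

# Escalating sessions punctuated by week-long stretches of zero output:
print(simulate()[:40])
```

The point of the sketch is the shape, not the numbers: under adjacent complementarity with a delayed cost function, escalation followed by collapse is the equilibrium behavior, not an aberration from it.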
The Berkeley researchers documented precisely this pattern without naming it. Workers reported periods of intense AI-assisted productivity followed by episodes of burnout that were qualitatively different from ordinary fatigue. The burnout had the specific character of withdrawal: not merely tiredness but a flatness, a loss of motivation, a difficulty engaging with work that did not provide the immediate feedback loop the AI tools had conditioned them to expect. The ordinary working day — the meeting, the document review, the slow conversation with a colleague — felt not just tedious but aversive, because the adjacent complementarity had recalibrated what "normal" cognitive stimulation felt like.
The distinction between flow and addiction, which Segal frames through Csikszentmihalyi's psychology, maps onto Becker's framework with uncomfortable precision. Flow and addiction share the same observable behavior: intense engagement, loss of time awareness, inability or unwillingness to stop. The difference, in psychological terms, is volition — the sense of choosing to be in the state rather than being unable to leave it. In Becker's terms, the difference is the discount rate. The person in flow has a moderate discount rate: she values the present experience highly but has not discounted the future so steeply that she cannot disengage when the future demands it. The addict has a high discount rate: the present dominates so completely that the future, with all its accumulated costs, barely registers in the calculation.
The trouble is that the discount rate is not stable. It is itself affected by the consumption. This is the deep mechanism of Becker-Murphy addiction, and it is the feature that makes productive addiction so difficult to manage. The AI tool does not merely provide a pleasant experience. It reshapes the agent's temporal orientation. Each session of high-intensity, high-return building trains the nervous system to weight immediate returns more heavily and future costs more lightly. The discount rate drifts upward with use. The agent becomes progressively less capable of the long-horizon thinking that would reveal the accumulating costs — not because she is less intelligent, but because the very cognitive apparatus that performs long-horizon evaluation has been reshaped by a pattern of use that systematically favors the short horizon.
This is why Segal's self-report is so diagnostically valuable. When he describes catching himself at three in the morning, writing not because the work demanded it but because he could not stop, and recognizing the pattern as the same compulsive loop he had experienced with earlier technologies — he is describing a rational agent observing his own discount rate in real time and finding it dangerously high. The observation is itself a form of intervention: the agent who can see the discount rate is, at least momentarily, operating at a lower discount rate than the agent who cannot. But seeing is not the same as correcting. The complementarity continues to operate. The next session still promises more return than rest. The observation fades. The laptop opens again.
Becker's framework does not prescribe a solution, because Becker's framework does not prescribe. It describes. But the description implies a remedy that is consistent with Becker's broader approach to policy: change the price. If the rational addict discounts future costs because they are invisible, make them visible. If the social environment reinforces the addiction by celebrating the output, restructure the social environment to also price the cost. If the individual agent cannot maintain a moderate discount rate because the consumption itself pushes the rate upward, introduce external structures — time limits, mandatory breaks, protected non-productive periods — that function as commitment devices, mechanisms by which the agent binds her future self to a choice that her present self, operating under a high discount rate, would not make.
The Berkeley researchers' proposal for "AI Practice" — structured pauses, sequenced workflows, protected offline time — is, in Becker's terms, a set of commitment devices designed to counteract the adjacent complementarity of productive AI use. The commitment device works not by reducing the utility of the AI session but by raising the cost of extending it beyond the structured limit. The cost is social (the team norm is to stop), institutional (the schedule enforces the pause), and informational (the pause creates a space in which the accumulated costs of extended use become temporarily visible).
Whether these devices will prove sufficient is an empirical question. Becker's model is not optimistic about the prospects for commitment devices in the face of strong adjacent complementarity, because the same rationality that makes the agent adopt the device also makes the agent circumvent it when the present return is sufficiently high. The smoker who throws away the pack at night buys another in the morning. The builder who sets a timer for two hours disables the timer at one hour and fifty-nine minutes because the feature is almost done and stopping now would waste the context.
The productive addict faces a problem that the substance addict does not: the addiction produces genuine value. The cigarette provides a private pleasure whose cost is borne by the smoker alone. The building session produces code that other people use, products that generate revenue, solutions that serve real needs. The social cost of intervening — of telling the builder to stop — is visible and immediate: lost output, delayed shipment, reduced competitiveness. The social cost of not intervening — the long-term erosion of judgment, health, and relational capacity — is invisible and deferred.
This asymmetry is the deep structural reason why productive addiction is harder to address than substance addiction. Society has built extensive institutional architecture around substance addiction: treatment programs, support groups, medical interventions, legal restrictions on sale and use. Society has built almost nothing around productive addiction, because the output masks the cost, and a culture organized around achievement has no vocabulary for the pathology of too much achievement.
Becker's framework identifies the gap. The incentives favor continuation. The costs are externalized across time. The social environment reinforces rather than constrains. The commitment devices are weak relative to the complementarity. And the agent, performing the calculation that Becker's model describes, continues to build — not because she is failing to calculate, but because the calculation, given the prices she faces, yields a clear answer.
The answer is: keep going.
The model says she will keep going until the accumulated cost breaches a threshold — a health crisis, a relationship rupture, a collapse of the judgment capacity that made the building valuable in the first place. At that point, the agent enters the crash phase of the binge-crash cycle. The crash is not a failure of the model. It is the model's prediction, fulfilled with the same quiet accuracy that characterized Becker's work across every domain he entered.
The question is whether institutions can be built that change the prices before the crash. Becker would model the optimal intervention. He would specify the tax — the structure that raises the present cost of continuation to a level that reflects the future cost the agent is discounting. He would derive the welfare-maximizing policy. The math would be clean. The implementation would be hard, because the politics of taxing productivity in a culture that worships it are approximately as tractable as the politics of taxing religion in a theocracy.
But the framework points to the task. The addiction pays, and the payment conceals the cost. The work of this moment is making the cost visible before the crash makes it undeniable.
---
Every market prices scarcity. The price of a thing reflects not its intrinsic worth — a concept that economics has largely abandoned as incoherent — but the relationship between how much of it exists and how much of it people want. Water is essential to life and nearly free. Diamonds are useless for survival and extraordinarily expensive. The paradox resolves the moment scarcity enters the analysis: water is abundant, diamonds are not, and the price reflects the margin, not the essence.
Becker understood this principle with the discipline of a man who had spent his career applying it to domains where others thought it did not belong. He applied it to time, showing that leisure becomes more expensive as wages rise. He applied it to marriage, showing that the value of a spouse depends on the complementarity between what each partner produces. He applied it to crime, showing that criminal behavior responds to the expected cost of punishment adjusted for the probability of detection. In each case, the analysis was the same: find the scarcity, and you find the price.
The AI transition has restructured what is scarce in the knowledge economy, and the restructuring is so thorough that the price signals — the signals that every career decision, every hiring choice, every educational investment responds to — have shifted in ways that most institutions have not yet registered.
Before AI, execution was scarce. The capacity to write working code, draft a competent legal brief, build a financial model, design a functional interface — these were the skills that commanded premium salaries, because not everyone could do them, and the people who could were in persistent demand. The scarcity was maintained by the difficulty of acquisition: years of training, practice, and the specific friction that Segal and Han describe as the medium through which deep expertise is deposited. The investment was large, and the market rewarded it because the output was scarce.
AI has made execution abundant. Not universally — not yet — and not perfectly. But abundantly enough that the scarcity premium on competent execution is falling across every domain where the work can be described in natural language. Competent code is abundant. Competent prose is abundant. Competent analysis, competent design, competent summarization, competent translation — all abundant, available to anyone with a subscription and the capacity to describe what they want.
Becker's framework generates the prediction: when the supply of execution increases and the demand remains constant, the price of execution falls. The workers whose human capital consisted primarily of execution capacity — the ability to do the thing — experience a return reduction. The investment they made in building that capacity generates less income, less status, less of the market recognition that justified the investment.
But the scarcity has not disappeared. It has migrated. The question is: to where?
The historical pattern provides the answer, because the migration of scarcity has happened before, at every major technological transition, with a regularity that suggests a structural principle rather than a coincidence.
When VisiCalc made calculation cheap, the scarce resource became the judgment about what to calculate. The accountant who could add columns of numbers was abundant. The accountant who could look at the numbers and determine what they meant — whether the business was healthy, where the risks lay, which investments would generate returns — was scarce. The market repriced accordingly. Within a decade, more people worked in accounting than before the spreadsheet, and they earned more, because the work had migrated from execution to interpretation.
When LexisNexis made legal research cheap, the scarce resource became legal strategy. The associate who could find the relevant cases was abundant. The partner who could look at the cases and construct an argument that would persuade a judge — who could see, in the pattern of precedent, a line of reasoning that the research alone did not reveal — was scarce. The market repriced. Law firms restructured around strategic capacity rather than research volume.
When diagnostic imaging was automated, the scarce resource became clinical interpretation. The machine could identify the anomaly. The physician who could place the anomaly in the context of the patient's history, symptoms, and risk factors — who could determine whether the anomaly was a threat or an artifact, whether it required intervention or watchful waiting — was scarce. The machine improved detection. The physician's judgment determined what the detection meant.
In each case, the pattern is the same: automation of execution migrates scarcity to judgment. The executor becomes abundant. The judge becomes scarce. And the market, responding to the new scarcity with the mechanical reliability that Becker's framework predicts, reprices accordingly.
The AI transition follows this pattern at a scale and speed that dwarfs every previous instance. AI is automating execution not in one domain but across every domain simultaneously. The migration of scarcity is not sector-specific. It is economy-wide. And the judgment to which scarcity is migrating is not a single, well-defined skill. It is a constellation of capacities that Becker's human capital framework can describe but that the educational and institutional systems of the twentieth century were not designed to produce.
The constellation includes: the capacity to identify which problems are worth solving — a function of values, empathy, and market understanding that no current AI can originate. The capacity to evaluate whether a solution serves its intended users well — a function of taste, which is itself the product of deep engagement with the domain and with the people the solution is meant to serve. The capacity to integrate across domains — to see that a technical decision has ethical implications, that a design choice has business consequences, that an architectural pattern has organizational effects. And the capacity to make decisions under genuine uncertainty — not the uncertainty that can be resolved by gathering more data, but the irreducible uncertainty that characterizes every meaningful choice: whether this product should exist, whether this market is worth entering, whether this team is capable of executing this vision.
These capacities are not new. They have always been valuable. But they were, in the old economy, embedded in roles that also required execution. The product leader made judgment calls, but she also reviewed code, attended standups, managed timelines, and performed dozens of execution tasks that consumed the majority of her time. The judgment was the valuable part. The execution was the costly wrapper in which the judgment was packaged.
AI strips the wrapper. What remains is the judgment, exposed and unadorned, and the market is discovering — with the disorientation of a collector who has been buying frames and suddenly realizes the painting is what matters — that judgment was the valuable thing all along.
Becker's framework provides the vocabulary for what Segal calls the inversion: the moment when the return on breadth exceeds the return on depth, and the rational investment calculus flips from specialization to integration. The specialist built value by going deep into a single domain, accumulating the specific capital that commanded a scarcity premium. The integrator builds value by connecting across domains, exercising the general capital that becomes the bottleneck when execution is cheap and the range of possible actions expands.
This is not a comfortable conclusion for the educational institutions that have spent a century organizing themselves around specialization. The university department, the professional school, the certification program — all are built on the assumption that the market rewards depth in a defined field. Becker's own institution, the University of Chicago, is organized around departments whose boundaries reflect a theory of knowledge production that presupposes the value of specialization. The tenure system rewards depth. The publication system rewards narrow expertise. The entire incentive structure of academic life points toward the bottom of a single well.
The market is pointing toward the surface. Toward the integrator who can see across wells. Toward the general capital that has always been harder to build, harder to measure, and harder to credential, because it does not produce the legible outputs — the publications, the certifications, the narrow expertise — that the existing institutional system is designed to evaluate.
Becker would not mourn the institutions. Becker would model their response to the new incentive structure and predict, correctly, that they will adapt — slowly, reluctantly, and under competitive pressure from institutions that adapt faster. The university that reorganizes around integrative capacity will attract the students whose rational investment calculus now favors breadth. The professional school that teaches judgment rather than execution will produce graduates whose market return reflects the new scarcity. The adaptation will happen because the incentives demand it. But the speed of adaptation will be slower than the speed of the market shift, because institutions have their own form of specific capital — accumulated routines, tenured faculty, physical infrastructure, accreditation frameworks — that depreciates when the environment changes and that creates resistance to the very adaptation the environment demands.
The gap between the speed of market repricing and the speed of institutional adaptation is, in Becker's framework, a source of deadweight loss — a welfare reduction that benefits no one and harms everyone. The students who invest in skills the market no longer rewards because the institution has not yet updated its curriculum. The firms that cannot find workers with the general capital they need because the educational system is still producing specialists. The workers caught between the old credential and the new requirement, holding a degree that certifies expertise in a domain the market has already repriced.
This is not an abstract concern. It is the lived experience of millions of people in the winter of 2025 and the spring of 2026, performing in real time the rational recalculation that Becker's framework describes. The senior engineer looking at the arithmetic. The parent lying awake. The student wondering whether the degree is worth the debt. The twelve-year-old asking what she is for.
Becker's framework does not answer her question. But it clarifies the economic terrain on which the answer must be constructed. The market will pay for scarcity. Scarcity has migrated from execution to judgment. Judgment is the capacity to choose wisely among possibilities, and it is built not through any single investment but through the accumulation of general capital — the kind that is portable, integrative, and resistant to the specific-capital depreciation that AI is inflicting on every domain it enters.
The market will pay for judgment. The question is whether the institutions that produce judgment — the schools, the firms, the families, the cultures — can reorganize themselves around this new scarcity before the gap between what the market needs and what the institutions produce becomes a chasm that a generation of workers falls into.
Becker would model the chasm. He would specify its dimensions, estimate its welfare cost, derive the optimal intervention. The model would be correct. The chasm would remain, unless someone builds across it.
---
In 1981, Gary Becker published *A Treatise on the Family*, a book that scandalized sociologists and delighted economists by applying the rational-choice framework to the most intimate domain of human life. Families, Becker argued, are not merely consumption units — passive recipients of market goods who spend their income on food, shelter, and entertainment. They are production units. Small factories that combine market goods, time, and the human capital of their members to produce the commodities that people actually value: nourishment, companionship, child development, emotional security, the complex goods that no market sells directly but that every market exists, ultimately, to support.
The model was precise and deliberately unsentimental. A family dinner is not, in Becker's framework, a ritual of togetherness. It is a production process. The inputs are ingredients (a market good with a monetary price), the time of the cook (a resource with an opportunity cost equal to the cook's market wage), the time of the family members who gather to eat (each with their own opportunity cost), and the human capital that determines the quality of the cooking, the conversation, and the social interaction. The output is the dinner commodity — a composite good whose value cannot be reduced to any single input. The family is maximizing the production of this commodity, subject to the constraints of income and time.
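The structure is worth writing down, because the rest of the chapter turns on it. A sketch in the notation of Becker's household production model, assuming a linear technology for simplicity:

```latex
% Each commodity Z_i is produced from market goods x_i and member time t_i,
% given the household's human capital H:
Z_i = f_i(x_i,\, t_i;\, H)

% Shadow price of one unit of Z_i under linear technology: goods required
% per unit (b_i) at market price p_i, plus time required per unit (tau_i)
% valued at the market wage w:
\pi_i = p_i\, b_i + w\, \tau_i

% Full-income constraint: the commodities together exhaust the value of
% the household's total time T plus its non-wage income V:
\sum_i \pi_i\, Z_i = w\, T + V
```

The dinner has a shadow price even though no one writes a check for it. The same accounting underlies the 1965 time-allocation result invoked below: when the wage rises, or when a new input collapses some of the time requirements, the relative prices of the household's commodities shift, and the allocation follows.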
The framework was attacked for reducing love to a production function. Becker's response, as always, was empirical. The model predicted that as women's wages rose, household time allocation would shift: more market work, less home production, more purchased substitutes for activities previously produced at home. The data confirmed the prediction across decades and across cultures. The model predicted that higher-income families would invest more in fewer children — substituting quality for quantity, as economists say, by allocating more resources per child rather than distributing fewer resources across many. The data confirmed this too, with the consistency that was Becker's signature and his shield.
What Becker could not have predicted, because the technology did not exist during his lifetime, is what happens to the household production function when AI enters the home.
The entry is already underway. It does not arrive as a robot vacuum or a smart thermostat — those are automations of physical tasks whose household production function is straightforward. It arrives as a cognitive partner. A parent uses Claude to research a child's medical symptoms at midnight. A spouse uses it to draft a legal letter, prepare tax documents, design a home renovation, or tutor a child in algebra. A teenager uses it to write college application essays, debug a personal coding project, or explore a career question that her school counselor could not answer with sufficient specificity.
Each of these uses represents a substitution within the household production function. An activity that previously required either the purchase of a market service — a lawyer, an accountant, a tutor, a consultant — or a substantial time investment in developing the human capital to perform the task oneself, can now be produced in-house using AI as an input. The shadow price of these activities collapses. The household's production possibilities frontier expands.
Becker's framework generates a clear prediction: the household will produce more. More cognitive output. More projects attempted. More problems addressed. The expansion follows the same logic as any input price reduction in a production system. When a key input becomes cheaper, the rational producer increases output, reallocates freed resources toward the most productive remaining uses, and produces commodities that were previously too expensive to attempt.
The prediction matches the data. Households that have adopted AI tools report engaging in a wider range of cognitive activities — from personal finance management to creative projects to educational support — than they did before adoption. The range of what a household can produce has expanded in ways that Becker's framework describes with clinical accuracy: more output, produced with fewer market inputs and less specialized human capital, across a broader range of commodity types.
But the framework also generates a second prediction, and this is where the analysis becomes uncomfortable. The freed time will not remain free. It will be reallocated to the next-highest-return activity, because the rational agent — even the rational agent operating within a family, motivated by love and commitment and the non-market values that families exist to produce — still faces the constraint of finite time and the imperative to allocate that time where it generates the most value.
And in a culture that has internalized the achievement imperative — the condition Han diagnoses as the hallmark of the contemporary psyche — the next-highest-return activity is almost always more production. Not more family time. Not more relational presence. Not more of the unstructured, purposeless being-together that constitutes the irreducible core of what families are for. More work. More building. More optimization. More of whatever the market rewards, because the market's rewards are legible and immediate and the family's rewards are diffuse and deferred.
Segal describes this from the inside — the recognition, caught in a moment of self-awareness at three in the morning, that the exhilaration of building had displaced the presence that his family needed. The Berkeley researchers documented it from the outside — the seepage of AI-assisted work into the evenings, weekends, and micro-gaps of domestic life. The household has not been liberated by the expansion of its production possibilities. It has been colonized by the same intensification dynamic that operates in the workplace, because the household and the workplace are no longer separate production environments. They are the same environment, connected by the device in the pocket and the imperative in the skull.
Becker's household production model identifies the specific mechanism. When AI reduces the shadow price of productive cognitive work to near zero, the opportunity cost of any non-productive activity — rest, play, aimless conversation, the kind of undirected togetherness that family researchers identify as the medium in which secure attachment forms — rises proportionally. Every minute spent not producing is a minute in which the household could have been producing something. The rational producer, operating within Becker's framework, finds it increasingly difficult to justify activities whose return is invisible, deferred, and impossible to measure.
This is not a new problem. Becker identified it in his 1965 time allocation paper, when he observed that rising productivity in the market sector increases the opportunity cost of time spent in the household sector, pulling time out of home production and into market work. The mechanism is the same. AI has simply accelerated it to a speed that makes the consequences visible within months rather than decades.
But the household production function has a feature that distinguishes it from the firm's production function, and Becker was clear about this distinction: the commodities the household produces are not all substitutable. A family can substitute a restaurant meal for a home-cooked dinner without significant loss. It cannot substitute AI-generated bedtime stories for a parent reading to a child. The child does not want the story. The child wants the parent. The commodity being produced — secure attachment, the embodied knowledge that a specific adult is present and attentive and not going anywhere — cannot be produced without the input of that specific adult's time. No market good substitutes. No technology substitutes. The input is irreducible.
Becker's framework accommodates this irreducibility through the concept of non-substitutable inputs — inputs for which no combination of other inputs can compensate. When a commodity requires a non-substitutable input, the production function is characterized by what economists call a Leontief structure: the output is limited by the scarcest input, regardless of how abundant the other inputs are. A household can have unlimited AI assistance, unlimited market goods, unlimited everything else — and the production of secure attachment remains constrained by the hours of parental presence.
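The Leontief form states the irreducibility exactly. A sketch, with the input coefficients invented for illustration:

```latex
% Fixed-proportions (Leontief) production of secure attachment:
% a and b are input requirements per unit of output.
Z_{\text{attachment}} = \min\!\left(
  \frac{t_{\text{parental presence}}}{a},\;
  \frac{x_{\text{all other inputs}}}{b}
\right)
```

Once presence is the binding input, the marginal product of every other input is zero. Adding AI assistance, market goods, or money moves nothing; the only way to produce more of the commodity is to supply more of the non-substitutable input.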
The implication is that AI does not affect all household commodities equally. It dramatically reduces the cost of commodities that can be produced with substitutable inputs — financial management, information gathering, educational content, home improvement planning. It does not affect, and cannot affect, the cost of commodities that require the non-substitutable input of human presence. The household's production possibilities frontier expands asymmetrically: vastly outward in the cognitive-output dimension, and not at all in the relational-presence dimension.
The asymmetry creates a distortion in household time allocation that Becker's framework predicts but that Becker himself never modeled, because the technology that creates it did not exist during his working life. The rational household, responding to the new price structure, allocates more time to the activities where AI has made production cheaper and less time to the activities where AI has made no difference. The parent spends more time on AI-assisted projects — which feel productive, which generate visible output, which carry the social validation of achievement — and less time on the unstructured presence that the child needs and that the parent, measuring her time against the opportunity cost of productive activity, finds increasingly difficult to justify.
The child does not experience this as a rational reallocation. The child experiences it as absence.
And the absence compounds. Developmental psychology is unambiguous on this point: secure attachment in early childhood is produced by sustained, responsive presence — the repeated experience of a caregiver who is physically and emotionally available, who responds to distress with comfort and to curiosity with engagement. The production function for secure attachment has a time dimension that cannot be compressed. The deposits are made slowly, through thousands of small interactions, and the capital they build — the child's internal working model of whether the world is safe and whether people can be relied upon — depreciates rapidly when the deposits stop.
A parent who is physically present but cognitively absorbed in an AI-assisted project is, in production function terms, providing a degraded input. The time is there. The presence is not. The commodity being produced — the meal, the homework help, the evening routine — may be adequate. But the relational commodity — the child's experience of being seen, valued, and prioritized — is underproduced, because the parent's attention, the true non-substitutable input, has been reallocated to the activity whose return is more immediately legible.
Becker's framework does not moralize about this. It describes it. The parent is maximizing subject to constraints. The constraints have changed. The prices have changed. The allocation has changed. And the allocation, viewed from the perspective of the individual agent responding to the price signals she faces, is rational.
But the commodity that is being underproduced — the child's secure attachment — is also a form of human capital. It is the foundational capital on which every subsequent investment will be built. The child who develops secure attachment goes on to form stronger relationships, maintain better health, exercise better judgment, and accumulate more human capital across every dimension of her life. The child who does not develop secure attachment goes on to struggle in ways that compound across a lifetime.
The underproduction of secure attachment is, in Becker's terms, an externality of the household's rational time allocation — a cost borne by the child and, eventually, by the society the child inhabits, that does not appear in the parent's optimization calculus because the cost is deferred and diffuse while the benefit of the alternative activity is immediate and concentrated.
This is the household version of the coordination failure identified in the labor market. The rational agent, responding to local price signals, produces an outcome that is locally optimal and collectively harmful. The parent maximizes. The child receives less of the one input that cannot be substituted. The human capital of the next generation is formed on a thinner foundation than it could have been. And the thinning is invisible, because the visible outputs — the projects completed, the problems solved, the household running at higher cognitive capacity than ever before — mask the invisible deficit.
Becker would not prescribe a remedy from within the framework. He would describe the externality, quantify the welfare loss, and identify the intervention point. The intervention, in Becker's language, is a correction of the price distortion — a structure that makes the true cost of relational underinvestment visible in the household's optimization calculus.
What this looks like in practice is what Segal calls a dam and what the Berkeley researchers call AI Practice: protected time, institutional norms, the deliberate construction of boundaries that the market's price signals, left to themselves, will erode. Not because the market is wrong. Because the market does not price everything that matters. And the things it does not price — presence, attachment, the slow formation of the human capital that will determine whether the next generation can exercise the judgment the economy increasingly demands — are the things that need protection most.
---
Gary Becker's doctoral dissertation, completed at the University of Chicago in 1955 and published as *The Economics of Discrimination* in 1957, made an argument so counterintuitive that it took the profession two decades to absorb it. Discrimination, Becker argued, is not free. It is costly — and the cost is borne primarily by the discriminator.
The argument was built on a concept Becker called the discrimination coefficient: a measure of the premium an employer is willing to pay to indulge a preference for one type of worker over another. An employer with a discrimination coefficient of, say, twenty percent against a particular group behaves as if each worker from that group costs twenty percent more than the wage actually paid, and will hire from that group only when its members' wages, so inflated, still undercut the wages of the preferred group. The employer is, in effect, paying a tax — not to the government but to his own prejudice. The tax takes the form of higher labor costs, reduced access to the full talent pool, and a competitive disadvantage relative to employers who do not discriminate and can therefore hire the best workers at the market wage regardless of origin.
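The coefficient enters the employer's calculus as a markup on the disfavored group's wage. In the standard rendering of Becker's hiring condition:

```latex
% An employer with discrimination coefficient d acts as if a worker from
% the disfavored group B costs w_B (1 + d) rather than w_B.
% He hires from B only when the inflated wage still undercuts group A's:
w_B\, (1 + d) \;\le\; w_A
\quad \Longleftrightarrow \quad
w_B \;\le\; \frac{w_A}{1 + d}

% With d = 0.20, group B must accept w_B <= w_A / 1.2, roughly a
% seventeen percent discount, before the prejudiced employer will hire.
```

The wedge between the two wages is the tax: output the employer forgoes, worker by worker, to indulge the preference.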
The implications were radical, especially for the 1950s. Becker was arguing that discrimination is not just morally wrong — a point that required no economic analysis to make — but economically irrational. The discriminating firm operates below its production possibility frontier. It produces less than it could. It earns less than it could. The discrimination coefficient is a self-imposed tax on productivity, and in a competitive market, firms that pay this tax should, over time, be driven out by firms that do not.
The model's predictions were imperfect — discrimination persists, suggesting that markets are not as competitive as the model assumes, or that the discrimination coefficient is maintained by social structures (segregated networks, biased information, institutional inertia) that operate outside the market mechanism. Becker acknowledged these complications without abandoning the framework. The point was not that markets automatically eliminate discrimination. The point was that discrimination has a cost, and the cost is real, and the cost creates pressure — however slow, however imperfect — toward inclusion.
The AI transition is applying unprecedented pressure on the discrimination coefficient across the global knowledge economy. The mechanism is not moral. It is structural. AI tools do not know where a user went to school. They do not hear an accent. They do not see a skin color, a gender, a disability. They do not care whether the person describing the problem is in San Francisco or in Lagos, whether she has a degree from MIT or learned to code from YouTube videos in a one-room apartment in Dhaka. The tool processes the description and produces the output. The quality of the output depends on the quality of the description — on the human's judgment, clarity, and specificity — not on any characteristic that has historically functioned as a proxy for competence.
This is not a small adjustment to the existing system of credentialing and gatekeeping. It is a structural bypass. The credential — the degree, the pedigree, the institutional affiliation — has always served a dual function. Its first function is informational: it signals to the market that the holder possesses certain competencies. Its second function is discriminatory: it rations access to opportunity based on characteristics (family wealth, geographic location, social network) that are correlated with the credential but not with the competency it is supposed to certify.
When a tool allows the competency to be demonstrated directly — when the developer in Lagos can build a working product and show it to a potential client or employer without the intermediation of a credential — the discriminatory function of the credential is exposed. The credential was never just a signal of competence. It was a filter, and the filter selected for characteristics that were distributed unequally across the population. The degree from a prestigious university signaled competence, yes. It also signaled that the holder had the financial resources to attend, the social networks to gain admission, the geographic proximity to apply, and the cultural capital to navigate the application process. The competence was real. The correlation between the competence and the other characteristics was also real. And the correlation was not innocent. It reproduced, generation after generation, the distribution of access to opportunity along lines that had nothing to do with the capacity to do the work.
Becker's framework describes this as a market operating with imperfect information. The credential exists because employers cannot directly observe competence. They use proxies — degrees, references, interview performance — that are correlated with competence but also correlated with characteristics the employer may or may not consciously prefer. The discrimination coefficient is embedded not just in explicit bias but in the structure of the information system itself. The employer who hires only from prestigious universities is not necessarily prejudiced. She may be rationally using the best available proxy for competence. But the proxy carries discriminatory information along with the competence signal, and the rational use of the proxy reproduces the discrimination even in the absence of prejudice.
AI disrupts this information structure by providing a direct signal. When a hiring manager can evaluate a candidate's output — code that works, analysis that holds, design that serves its users — the proxy becomes less necessary. The direct signal is cheaper, more accurate, and less contaminated by discriminatory information than the credential. The rational employer, comparing the cost of evaluating the credential against the cost of evaluating the output, shifts toward the output. Not out of moral commitment to inclusion. Out of the same cost-minimizing logic that drives every market decision.
The forty-seven million developers worldwide, with the fastest growth in Africa, South Asia, and Latin America, represent the population for whom this structural bypass is most consequential. Before AI, a developer in Nairobi could possess extraordinary talent and ambition and still face a discrimination coefficient that priced her out of the global market. The coefficient included the direct discrimination — the bias of employers who preferred candidates from familiar institutions — and the structural discrimination — the absence of the infrastructure, mentorship, and institutional support that converts raw talent into marketable human capital.
AI reduces both components. The direct discrimination is reduced because the tool provides a means of demonstrating competence without the intermediary of a credential. Build the product. Show the product. The product speaks for itself, in a language that does not carry an accent. The structural discrimination is reduced because the tool provides a partial substitute for the infrastructure that was missing. The developer in Nairobi could not previously access the equivalent of a senior engineering mentor. Claude Code is not a mentor in the full human sense — it does not provide the relational support, the career guidance, the personal investment that a mentor provides. But it provides the technical scaffolding — the ability to ask questions, get immediate feedback, and iterate toward a solution — that a mentor's technical expertise provides. The scaffolding is available twenty-four hours a day, at a cost of one hundred dollars a month, and it does not discriminate.
The claim requires qualification. AI does not eliminate inequality. It reduces one specific component of inequality — the barrier between imagination and execution, between competence and its market recognition — while leaving other components intact. Access to the tools requires connectivity, hardware, and a level of financial stability that billions of people do not have. The tools are optimized for English-language users, reflecting the training data and the institutional priorities of the American companies that built them. The benefits of AI-augmented productivity accrue disproportionately to those who already have the general human capital — the judgment, the domain knowledge, the capacity to direct the tool effectively — that makes the tool useful. The developer in Nairobi who possesses this general capital benefits enormously. The person who does not possess it benefits less, and the gap between the two may widen even as the barrier to entry falls.
Becker's framework is precise about this distributional dynamic. When a technology reduces the cost of a complementary input — and AI is a complement to general human capital, not a substitute for it — the return on the existing capital increases. The person who already possesses judgment, taste, and the capacity to ask good questions finds that AI amplifies these capacities. The person who does not possess them finds that AI amplifies their absence. The tool is an amplifier, and an amplifier does not discriminate between signal and noise. It amplifies both.
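The amplifier claim can be stated in one line. A sketch under a deliberately crude assumption; the multiplicative form is illustrative, not something Becker specified:

```latex
% Y: output; h: general human capital; a: AI capability available to the
% agent. Under a multiplicative (complementary) technology:
Y = h \cdot a
\qquad \Longrightarrow \qquad
\frac{\partial Y}{\partial a} = h

% When a becomes cheap and rises for everyone, each agent's gain is
% proportional to the h she already holds: every level of output rises,
% and the gap between high-h and low-h agents widens at the same time.
```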
This means that the reduction of the discrimination coefficient is uneven. It benefits most the people who are discriminated against despite possessing the general capital that the market values. The talented developer in Lagos, the brilliant student in Dhaka, the experienced professional in Trivandrum whose capabilities were masked by a credential gap or a geographic penalty — these are the people for whom the discrimination coefficient's decline is transformative. For them, AI does not merely improve productivity. It removes a barrier that was never justified by competence and that existed only because the information structure of the old market could not distinguish competence from its correlated characteristics.
For those without the general capital — without the judgment, the literacy, the foundational education that makes the tool useful — the decline of the discrimination coefficient changes less. The barrier was not the only obstacle. Remove the barrier, and the absence of the underlying capital is exposed. The tool is available. The capacity to use it effectively is not.
This is Becker's uncomfortable conclusion, generalized from discrimination to the entire AI transition: the technology opens a door, but only those who have the capital to walk through it will benefit. The others will see the door. They may stand in front of it. But the capital to cross the threshold — the years of education, the foundational skills, the accumulated judgment that makes the tool an amplifier rather than a noise generator — is not provided by the technology itself.
The policy implication is the one Becker derived in his original discrimination work, extended to the present context: the most effective intervention is not to regulate the technology but to invest in the human capital that allows individuals to use it. Reduce the discrimination coefficient by making the tool available. Address the remaining inequality by building the capital that makes the tool useful. The investment in foundational education, in the general human capital that appreciates when specific capital depreciates, is not just a social good. It is the highest-return investment a society can make in an economy where the tools are democratized but the capacity to use them is not.
Becker would quantify this return. He would estimate the present value of a program that provides foundational general capital — literacy, numeracy, judgment, the capacity to articulate a problem clearly enough for an AI to address it — to the populations that currently lack it. The return would be large, because the complementarity between human capital and AI capital is strong, and the current underinvestment in the foundational layer represents a deadweight loss whose magnitude is growing with every month that the tools improve while the capital to use them does not.
The discrimination coefficient is declining. The talented developer in Lagos can now demonstrate what she can do without the intermediary of a credential that was never within her reach. This is real, and it matters, and it represents a genuine expansion of who gets to build.
But the coefficient has not reached zero. And the distance between its current value and zero is measured not in technology but in human capital — the accumulated investment that transforms a tool from a curiosity into a capability. Becker's life work was demonstrating that this investment is the most important one a society makes. The AI transition has not changed that conclusion. It has made it more urgent than Becker could have imagined.
The University of Bologna was founded in 1088. For nearly a thousand years, the institution has operated on a premise so foundational that it has become invisible: knowledge is organized into departments, and students are trained by moving deeper into one of them. The premise survived the printing press, the industrial revolution, the telegraph, the telephone, the computer, the internet, and the smartphone. Each technology disrupted what was taught. None disrupted the structure of how teaching was organized.
The structure is about to be disrupted, and Becker's framework explains exactly why.
The logic of specialization in education follows directly from the logic of human capital investment. When the market rewards deep expertise in a defined domain — when a decade of concentrated study in medicine, law, engineering, or accounting generates a wage premium that exceeds the cost of the investment — the rational educational system organizes itself to produce that expertise. Departments form around domains. Curricula deepen within domains. Tenure rewards depth. Accreditation certifies depth. The entire institutional apparatus points downward, toward the bottom of a single well.
This organizational logic was not arbitrary. It reflected a real economic fact: execution was scarce, and the execution the market valued most was the kind that required deep, domain-specific training. The physician who could diagnose a rare condition. The engineer who could design a bridge that would not collapse. The lawyer who could navigate a labyrinth of precedent to construct an argument that a generalist would miss. These capacities were produced by years of immersive, focused, friction-rich study within a single domain, and they commanded premiums precisely because the investment required to build them was large enough to deter all but the most committed.
Becker formalized this as the human capital investment calculus: the rational individual invests in training up to the point where the marginal return — the additional lifetime earnings generated by one more year of study — equals the marginal cost — the tuition plus the forgone earnings of one more year out of the labor market. The equilibrium investment is the number of years of training at which the return and the cost are balanced. In a world where the market rewards depth, the equilibrium pushes toward long, intensive, domain-specific training. Medical school lasts four years after college, followed by three to seven years of residency. The investment is enormous. The return justifies it — or did, when the execution the physician performed was scarce enough to command the premium.
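The calculus has a standard formal statement, the schooling model that Becker and Jacob Mincer developed, sketched here in its simplest form. Let $w(s)$ be the wage after $s$ years of training and $r$ the discount rate. Ignoring tuition, the rational agent extends training while the proportional wage gain exceeds the cost of waiting, and stops where

$$\frac{w'(s)}{w(s)} = r.$$

With tuition and forgone earnings included, the stopping rule becomes the marginal-return-equals-marginal-cost condition described above. Either way, anything that flattens $w(s)$, that shrinks the premium on one more year of depth, pulls the optimal $s^*$ down.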
The AI transition shifts the equilibrium, and the shift is measurable. When execution becomes abundant — when AI can produce competent code, competent legal research, competent financial analysis, competent diagnostic imaging interpretation — the wage premium on execution-level competence declines. The return on the marginal year of domain-specific training falls. And the rational individual, performing the calculation that Becker's framework describes, invests less.
This is not a prediction about what might happen. It is a description of what is happening. Computer science enrollments, after a decade of explosive growth, are beginning to plateau or decline at multiple institutions. The signal from the market is legible: the return on the specific capital of programming — syntax mastery, framework expertise, the mechanical skills that constituted the bulk of a computer science education — is falling. The students are reading the signal and adjusting their investment. They are rational. They may also be making a mistake, but the mistake, if it is one, is produced by the price system itself, not by the students' failure to interpret it.
The mistake, if it is one, lies in the assumption that the relevant comparison is between depth and no investment at all. The actual comparison is between depth in the old sense — years of domain-specific execution training — and investment in the forms of capital whose returns are rising: the general capital of judgment, integration, and the capacity to direct AI tools toward problems worth solving. The students who abandon computer science are not necessarily wrong to do so. But the students who replace computer science with nothing — who read the declining premium on execution and conclude that no investment is warranted — are making the error that Becker's framework illuminates most clearly: confusing the depreciation of one form of capital with the depreciation of all capital.
The educational system bears significant responsibility for this confusion, because the system has not updated its offerings to reflect the new return structure. The curriculum that was designed to produce deep executors — programmers, analysts, researchers capable of performing specific technical tasks — has not been replaced by a curriculum designed to produce integrators, questioners, and directors of AI-augmented work. The student who looks at the course catalog sees the old options: more depth in a domain whose execution premium is falling, or a liberal arts education whose connection to market returns has always been tenuous and is now, in the absence of explicit integration with AI tools, even more so.
What the student does not see — because no institution has yet built it at scale — is a curriculum designed around the forms of capital that the AI economy actually rewards. The curriculum would look nothing like what currently exists, and describing it requires specifying what the market is paying for with a precision that most educational institutions have not yet attempted.
The market is paying for the capacity to identify which problems are worth solving. This capacity is not the product of any single discipline. It is produced by the intersection of multiple disciplines: enough technical knowledge to understand what is feasible, enough domain knowledge to understand what is needed, enough ethical awareness to evaluate whether the feasible thing and the needed thing are also the right thing. No existing department produces this intersection. The departments produce depth. The intersection requires breadth deployed with judgment — a different cognitive architecture than any single department is designed to cultivate.
The market is paying for the capacity to evaluate AI output. When the tool produces a working prototype, someone must determine whether the prototype is good — whether it serves its intended users, whether it will scale, whether it solves the right problem or merely the specified one. This evaluative capacity is not taught in engineering programs, which focus on production, or in business programs, which focus on markets, or in design programs, which focus on aesthetics. It lives at the intersection of all three, augmented by the kind of domain knowledge that only comes from immersion in the world the product is meant to serve.
The market is paying for the capacity to ask questions that reframe problems. Segal's observation that AI has shifted the premium from answers to questions is, in Becker's framework, a statement about the changing scarcity structure: answers are abundant, questions are scarce, and the market prices scarcity. But the capacity to ask reframing questions is not produced by any course or credential. It is produced by the kind of intellectual cross-training that occurs when a student is required to engage with problems from multiple perspectives simultaneously — technical, humanistic, empirical, ethical — and to synthesize across those perspectives rather than retreating to the comfort of a single disciplinary lens.
Becker's framework identifies the barrier to institutional adaptation: the institutions themselves have accumulated specific capital that depreciates in the same way individual workers' capital depreciates. A university's specific capital includes its departmental structure, its tenured faculty whose expertise is domain-specific, its accreditation requirements that mandate domain-specific training, its physical infrastructure organized around domain-specific labs and classrooms, and its alumni networks organized around domain-specific professional identities. This capital was expensive to build. It generates returns under the old regime. And it resists adaptation to the new one, not because the people within the institution are irrational, but because the rational response to depreciating specific capital is to defend it — to insist that the old return structure still holds, to argue that the fundamentals have not changed, to resist the recognition that the investment they made and the institution they built are losing value.
This is the expertise trap that Segal identifies in the Luddite chapter, transposed from individual workers to institutional structures. The university that built its reputation on producing deep specialists is reluctant to reorganize around producing integrative generalists, for the same reason the framework knitter was reluctant to learn the power loom: the sunk cost is large, the identity is at stake, and the new regime is uncertain enough that the defense of the old one feels rational.
But Becker's framework is merciless about the trajectory. Institutions that do not adapt to changing return structures lose their competitive position to institutions that do. The university that reorganizes around integrative capital — that builds programs combining technical literacy, humanistic judgment, and explicit training in AI-augmented inquiry — will attract the students whose rational investment calculus now favors this combination. The university that clings to the old structure will find its enrollment declining as students read the price signals and invest their capital elsewhere.
The adaptation is beginning, unevenly and imperfectly. A small number of institutions have introduced programs that explicitly bridge technical and humanistic training, that teach students to use AI tools as instruments of inquiry rather than shortcuts around thinking, that evaluate students on the quality of their questions rather than the correctness of their answers. These programs are marginal within their institutions. They are often regarded with skepticism by faculty whose specific capital is invested in the old structure. But they exist, and their existence is itself a signal — a small eddy in the market that the rational agent can read.
Becker would predict that these programs will grow, not because administrators will suddenly see the light, but because the students will vote with their enrollment decisions, and the enrollment decisions will follow the return structure, and the return structure will reward the institutions that produce the capital the market demands. The mechanism is the same one that has driven every adaptation in the history of education: not vision, not leadership, but the relentless pressure of a price system that rewards alignment with reality and punishes divergence from it.
The speed of adaptation matters more in this transition than in any previous one, because the market itself is shifting faster than in any previous one. The VisiCalc transition gave the accounting profession a decade to adapt. The AI transition is giving the educational system months. The gap between the speed of capability change and the speed of institutional response is, in Becker's framework, a source of welfare loss — graduates trained for a world that no longer exists, firms unable to find workers with the capital they need, students accumulating debt in exchange for credentials whose market value is declining in real time.
The gap will close. Becker's framework guarantees it, because the incentives will force it. But the cost of the gap — borne by the students who invest in the wrong capital because the right capital is not yet available, by the firms that cannot find the integrative judgment they need, by the society that underproduces the general capital on which the entire AI economy depends — is a cost that does not have to be as large as it is. Closing the gap faster is not a market transaction. It is an institutional decision, made by people who can see the price signals and choose to act on them before the market forces their hand.
Becker would approve of the logic, if not the urgency. He was a patient man, confident that incentives would do their work in time. But time, in this transition, is the one input whose price has risen most steeply. The shadow price of delay — of every month that the educational system continues to produce the old capital while the market demands the new — is measured in human lives misallocated, in potential unrealized, in the quiet accumulation of a deficit that will take a generation to repair.
The education of capital is the education of the next generation. The curriculum it receives — whether it teaches depth or integration, execution or judgment, answers or questions — will determine whether that generation enters the AI economy equipped to direct it or equipped only to be directed by it. Becker's framework identifies the stakes. Whether the institutions rise to meet them is a question his framework poses but cannot, on its own, resolve.
---
The argument has arrived at its terminus, and the terminus is a question that Gary Becker spent his career equipping economics to answer but that economics alone cannot fully resolve: What is the return on being human?
The question sounds like philosophy. It is not. It is the most practical investment question of the twenty-first century, because the answer determines where rational agents — individuals, firms, societies — should allocate their scarce resources in an economy where machines can perform an expanding share of the cognitive work that humans have always done.
Becker's framework provides the analytical structure. The return on any investment is determined by the scarcity of the output the investment produces. When a form of human capital becomes abundant — when machines can replicate it cheaply and at scale — the return on that capital falls. When a form of human capital remains scarce — when no machine can replicate it, regardless of cost — the return on that capital rises. The rational investor, surveying the landscape of possible investments, directs her resources toward the forms of capital whose scarcity is durable.
The question, then, is: What forms of human capital are durably scarce in an economy where AI can produce competent execution across every domain that can be described in natural language?
The answer is not the answer most people expect. The durably scarce forms of human capital are not the most intellectually demanding. They are not the forms that require the highest IQ, the most years of training, or the most rarefied technical expertise. AI is excellent at intellectual difficulty. It solves complex mathematical problems, generates sophisticated legal arguments, writes code of remarkable intricacy, and produces analysis that can hold its own against domain experts. Intellectual difficulty per se is not a durable source of scarcity, because the trajectory of AI capability is consistently toward greater intellectual range and precision.
The durably scarce forms of human capital are the ones that require something machines do not possess: stakes in the world. The experience of being a creature that is born, that will die, that must choose how to spend finite time, that loves specific other creatures with a particularity no algorithm can replicate, that is capable of suffering and of inflicting suffering and of choosing not to, that cares about things — genuinely cares, not in the sense of optimizing a utility function but in the sense of being willing to sacrifice for something that cannot be reduced to a calculation.
This is not sentimentality. Becker was the least sentimental of economists. He would insist on specificity: which capacities, exactly, require stakes? Which ones can be measured? Which ones generate market returns?
The first is judgment under genuine uncertainty. Not the uncertainty that can be resolved by gathering more data — AI excels at that — but the irreducible uncertainty that characterizes every decision where the stakes are real and the information is permanently incomplete. Whether to launch a product. Whether to fire a colleague. Whether to enter a market. Whether to trust a partner. Whether to tell a child the truth about something painful. These decisions require not just intelligence but the specific courage that comes from having something to lose. The decision-maker who has no skin in the game — who does not bear the consequences of being wrong — makes systematically different decisions than the one who does. The quality of the decision depends on the quality of the decision-maker's engagement with the stakes, and engagement with stakes is something that, at present and for the foreseeable future, only creatures who live and die can possess.
The second is the capacity to create trust. Trust is not an emotion. It is an economic institution — perhaps the most important one. Every market transaction that extends beyond simultaneous exchange depends on trust: the belief that the other party will fulfill their commitment even when defection would be profitable. Trust reduces transaction costs. It enables cooperation. It is the invisible infrastructure on which every complex economy is built, and its absence is catastrophically expensive — visible in the legal costs, the enforcement mechanisms, the duplicated safeguards that societies without trust must maintain.
Trust between humans is produced by a specific process: repeated interaction over time, under conditions where both parties have the option to defect and choose not to. It is built slowly, through the accumulation of evidence that this particular person, in this particular relationship, can be relied upon. It is destroyed quickly, through a single act of betrayal that violates the accumulated evidence. It cannot be manufactured, mandated, or optimized. It can only be earned — through the specific, slow, friction-rich process of showing up, keeping commitments, and absorbing costs that a purely rational agent would avoid.
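Economists formalize this point with repeated games. This is not Becker's own apparatus, but it is the standard sketch: in a repeated prisoner's dilemma with payoffs $T > R > P$ (temptation to defect, reward for mutual cooperation, punishment) and discount factor $\delta$, cooperation under a grim-trigger strategy is sustainable only when

$$\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}, \qquad\text{equivalently}\qquad \delta \;\ge\; \frac{T-R}{T-P}.$$

The inequality has content only because defection is available and tempting. Remove the option to defect and the condition says nothing, which is the formal version of the distinction that follows.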
AI can simulate trust. It can produce language that sounds trustworthy. It can optimize interactions to maximize the user's perception of reliability. But it cannot create trust in the economic sense, because trust requires the possibility of betrayal, and betrayal requires the kind of autonomous agency that current AI systems do not possess. The machine cannot choose to defect. It can only execute. And a relationship with an entity that cannot choose to betray you is not a trust relationship. It is a reliability relationship, and the two are not the same. The premium on the capacity to create genuine human trust — in teams, in organizations, in communities, in families — is rising precisely because the simulation of trust is becoming cheap and ubiquitous.
The third is the capacity to care about the right things. Becker treated preferences as given — the economist's equivalent of taking the terrain as fixed and optimizing the path. But the AI transition reveals that preferences are not given. They are formed, by experience and by the environment in which the agent operates. An economy saturated with AI tools that optimize for measurable outputs systematically shapes the preferences of the agents within it toward measurable outcomes and away from outcomes that resist measurement. The worker whose performance is evaluated by metrics that AI can track learns to optimize those metrics. The student whose work is graded by rubrics that AI can score learns to produce work that satisfies the rubric. The parent whose success is measured by the visible achievements of her children learns to invest in the visible and neglect the invisible.
What gets neglected is precisely what matters most: the formation of character, the development of taste, the cultivation of the capacity to care about things that are genuinely worth caring about rather than merely measurable. These are not soft skills in the dismissive sense that the term usually carries. They are the hardest skills there are, because they require the kind of sustained, deliberate attention that no market prices and no metric captures and no AI can replicate.
Becker might object to this argument on methodological grounds. The Becker of *Accounting for Tastes* would argue that preferences are stable, that what changes is the constraint set, and that the apparent shift in what people care about is really a shift in what they can afford to pursue. The objection has force. But the AI transition may represent a case where the constraint set has shifted so dramatically — where the cost of pursuing measurable achievement has fallen so far relative to the cost of cultivating unmeasurable care — that the revealed preferences of an entire generation are being shaped by a price structure that systematically undervalues the capacities that matter most.
The return on being human is the return on the capacities that require stakes: judgment under genuine uncertainty, the creation of trust, and the cultivation of care. These capacities are not merely the residual — the leftover after AI has automated everything else. They are the foundation. The economy needs them more urgently than it has ever needed them, because the abundance of execution has made the scarcity of judgment the binding constraint on every organization, every institution, and every household.
Becker's former student Pablo Peña, now teaching human capital theory at the University of Chicago, puts the point in language Becker himself might have used: "In the long run, it is our tastes and preferences — not efficiency in production — that give value to economic activities." The AI economy produces with unprecedented efficiency. The question is whether the activities it produces are worth producing. And that question can only be answered by agents who care — who have preferences that are formed by engagement with the world rather than optimization against a metric.
The return is real. Becker's framework insists on this: if the capacity generates value — if judgment, trust, and care are genuinely productive inputs whose absence is genuinely costly — then the market will, eventually, price them. The firm that invests in cultivating these capacities in its workforce will outperform the firm that does not, because the firm that has judgment will make better decisions, the firm that has trust will incur lower coordination costs, and the firm that has people who care will produce things that people want rather than things that metrics approve.
But the market's pricing is slow, and the transition is fast, and the gap between the two is where the welfare loss accumulates. The market has not yet built the institutions that reward the slow cultivation of judgment. It has not yet developed the metrics that distinguish genuine care from its simulation. It has not yet created the educational programs that produce the general capital whose return is rising but whose measurement remains elusive.
This gap is the space in which the work must be done. Not by the market alone — the market is not fast enough, and its pricing of unmeasurable goods is permanently imperfect. Not by the state alone — the state's capacity for granular intervention in the formation of human capital is limited and its track record is mixed. But by the combination of market incentives, institutional innovation, and individual commitment that Becker's framework identifies as the mechanism by which every previous human capital challenge has been addressed.
The return on being human is the highest return available in the AI economy. It is also the hardest to capture, the slowest to compound, and the most easily overlooked by agents optimizing against the price signals they can see rather than the ones they cannot. The rational agent who can see beyond the current price structure — who can perceive the rising return on judgment, trust, and care before the market has fully priced it — has the opportunity to make the most important investment of this era: an investment in the capacities that make human beings worth amplifying.
Becker would build the model. He would specify the return. He would derive the optimal investment.
The investment would be in humans.
Not because the economist is sentimental.
Because the economist can count.
---
The price of a bedtime story is invisible.
No ledger records it. No market prices it. No quarterly report captures the return on twenty minutes of a parent's undivided attention, voice warm in the dark, reading a book the child has heard forty times before and wants to hear again — not for the plot, which she has memorized, but for the presence, which cannot be stored.
Gary Becker would have put a shadow price on that story. He would have noted the opportunity cost — the twenty minutes of AI-augmented building the parent forwent, the code that could have been shipping, the prototype that could have been iterating. He would have treated the decision to read instead of build as a revealed preference, evidence that the parent values the story-commodity more than the marginal output of another productive session. And the framework would be correct, as Becker's framework almost always is.
But the framework would miss something that Becker himself, I suspect, would not have missed — because Becker was a father, and fathers know things about bedtime stories that do not fit inside production functions. The twenty minutes are not consumption. They are investment. The deepest kind. The kind whose return compounds across a lifetime and whose depreciation, when the deposits stop, reshapes the architecture of a human soul.
What Becker gave me, across these ten chapters, was a calculator where I expected a sermon. He did not tell me that human capital matters because it is noble or sacred or because some philosophical tradition says so. He told me it matters because the numbers say so — because the returns are measurable, the costs are quantifiable, and the predictions match the data with the unforgiving accuracy that was his signature.
And then the calculator pointed at the one thing the calculator cannot fully price.
The return on being human — on judgment, trust, care, the capacity to sit with uncertainty and choose anyway — is the highest return available in an economy saturated with artificial intelligence. I believe this. Becker's framework supports it. The scarcity structure demands it. But the return is invisible to the instruments we have built to measure returns, because the instruments were designed for a world where execution was the bottleneck, and the things that matter most in a world where judgment is the bottleneck do not fit inside the old measurement apparatus.
I think about the engineers in Trivandrum, recalculating. I think about the twelve-year-old, asking her mother what she is for. I think about the senior architect watching his expertise depreciate and finding, beneath the depreciation, something that appreciates — the judgment that was always there, masked by the execution that consumed his days. Each of them is living inside Becker's framework whether they know it or not. Each of them is performing the investment calculus that determines what they will become.
The bedtime story is not in the calculus. It should be. It is the foundational deposit in the human capital that will determine whether the next generation can direct the most powerful tools ever built toward something worth building. The parent who reads the story is not consuming leisure. She is forming the substrate of judgment — in herself and in her child — on which everything else depends.
Becker's framework is a gift because it refuses to be impressed by sentiment. It demands that every claim earn its place through evidence, through prediction, through the discipline of following the incentives wherever they lead. The incentives lead here: invest in the capacities that require stakes. Invest in judgment, trust, care. Invest in the slow, expensive, friction-rich formation of the human qualities that no machine replicates and that every machine needs someone to provide.
The market will learn to price these qualities. It always does, eventually. But eventually is a long time when the transition is moving this fast, and the people in the gap — the students, the workers, the parents, the children — cannot wait for eventually.
Build the dams. Change the incentives. Read the bedtime story.
The economist can count. The count says: invest in humans.
-- Edo Segal
Your skills are capital. Gary Becker proved it in 1964 — that every year of training, every hour of practice, every layer of hard-won expertise is an investment with a rate of return, a depreciation schedule, and a brutal sensitivity to what the market decides is scarce. For sixty years, the market rewarded depth. Then, in the winter of 2025, machines learned to produce competent execution across every knowledge domain, and the return on depth collapsed overnight. Millions of rational agents began recalculating — some running for the woods, others doubling down on tools that amplified what remained valuable.
This book applies Becker's framework to the AI revolution with clinical precision. What depreciates? What appreciates? Where should the next generation invest when the old arithmetic no longer holds? The answers are uncomfortable, clarifying, and urgent.

A reading-companion catalog of the 22 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Gary Becker — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →