The most practical investment question of the twenty-first century is: what is the return on being human? Becker's framework provides the structure. The return on any investment is determined by the scarcity of the output the investment produces. When a form of human capital becomes abundant — when machines can replicate it cheaply and at scale — the return falls. When a form remains scarce — when no machine can replicate it, regardless of cost — the return rises. The rational investor surveys the landscape and directs resources toward capital whose scarcity is durable. The question is: what forms of human capital are durably scarce in an economy where AI can produce competent execution across every domain that can be described in natural language? The answer is not what most people expect. The durably scarce forms are not the most intellectually demanding. AI is excellent at intellectual difficulty. The durably scarce forms are the ones that require something machines do not possess: stakes in the world.
There is a parallel reading that begins not with what humans uniquely possess, but with what AI uniquely requires: vast server farms consuming electricity equivalent to small nations, rare earth minerals extracted under conditions of extreme human suffering, cooling systems depleting water tables, and maintenance performed by workers whose own judgment and care are systematically devalued. The "return on being human" may be highest precisely where it is most invisible — in the cobalt mines of the Congo, the assembly lines of Shenzhen, the data centers of Virginia where technicians work twelve-hour shifts keeping the machines that supposedly transcend human limitation operational.
This material substrate reveals the framework's blind spot. The capacities Becker identifies as durably scarce — judgment, trust, care — are indeed scarce, but not because machines cannot replicate them. They are scarce because the political economy of AI systematically extracts them from the many to concentrate their returns among the few. The warehouse worker whose bathroom breaks are monitored by algorithm exercises profound judgment navigating impossible delivery schedules; they simply capture none of the return. The content moderator who protects AI training data from toxicity exercises extraordinary care distinguishing context and harm; their trauma is an externality. The gig worker who maintains their rating through countless micro-negotiations creates trust every day; the platform captures the value. The return on being human is astronomical — it is simply not distributed to most humans being human. The question is not what capacities remain scarce but who captures the return on scarcity, and the answer is: those who own the infrastructure, not those who provide the judgment, trust, and care that keep it running.
The durably scarce capacities are those requiring the experience of being a creature that is born, that will die, that must choose how to spend finite time, that loves specific other creatures with a particularity no algorithm can replicate, that is capable of suffering and of inflicting suffering and of choosing not to, that cares about things — genuinely cares, not in the sense of optimizing a utility function but in the sense of being willing to sacrifice for something that cannot be reduced to a calculation. This is not sentimentality. Becker was the least sentimental of economists. He would insist on specificity.
The first durably scarce capacity is judgment under genuine uncertainty. Not uncertainty that can be resolved by gathering more data — AI excels at that — but the irreducible uncertainty characterizing every decision where stakes are real and information is permanently incomplete. Whether to launch a product. Whether to fire a colleague. Whether to enter a market. Whether to trust a partner. Whether to tell a child the truth about something painful. These decisions require not just intelligence but the specific courage that comes from having something to lose.
The second is the capacity to create trust. Trust is not an emotion. It is an economic institution — perhaps the most important one. Every market transaction extending beyond simultaneous exchange depends on trust: the belief that the other party will fulfill their commitment even when defection would be profitable. AI can simulate trust. It can produce language that sounds trustworthy. But it cannot create trust in the economic sense, because trust requires the possibility of betrayal, and betrayal requires autonomous agency that current AI systems do not possess. The machine cannot choose to defect. A relationship with an entity that cannot choose to betray you is not a trust relationship — it is a reliability relationship.
The third is the capacity to care about the right things. Becker treated preferences as given, but the AI transition reveals that preferences are formed by experience and environment. An economy saturated with AI tools that optimize for measurable outputs systematically shapes preferences toward measurable outcomes and away from outcomes that resist measurement. What gets neglected is precisely what matters most: the formation of character, the development of taste, the cultivation of the capacity to care about things genuinely worth caring about rather than merely measurable.
The return-on-being-human framework emerges from Becker's insistence that human capital analysis extends to all capacities that generate market returns, combined with the observation that AI's trajectory of capability is rendering many previously scarce capacities abundant while leaving others durably scarce. The framework's articulation in the 2020s draws on work by Pablo Peña and other Chicago economists applying Becker's tools to the AI transition.
Stakes are the source of durable scarcity. What machines cannot replicate are the capacities that require caring about outcomes in a way only creatures with finite time and real loves can care.
Judgment under uncertainty is the first scarce capacity. It requires the specific courage of having something to lose — which the stakeless machine structurally cannot have.
Trust is an economic institution. It requires the possibility of betrayal that only autonomous agents with their own interests can provide.
Care cannot be optimized. The capacity to care about the right things requires the cultivation of preferences under conditions that AI-saturated environments systematically erode.
The framework's critics argue it relies on claims about machine limits that current AI may eventually transcend — that trust, judgment, and care are functional capacities AI could in principle acquire. The framework's defenders respond that even if AI achieves functional equivalence, the economic logic holds: scarcity determines price, and until AI actually achieves these capacities at competitive cost, the return on human provision of them remains high. The debate is empirical, not conceptual.
The proper synthesis recognizes that both views describe different layers of the same economic transformation. At the level of aggregate market dynamics — where Becker's framework operates — the scarcity analysis is largely correct. Judgment under genuine uncertainty, trust creation, and cultivated care do command premium returns precisely because AI cannot replicate the stakes-based foundation they require. The framework accurately identifies which human capacities will appreciate rather than depreciate as AI capabilities expand.
But at the level of distribution mechanisms — where the contrarian view focuses — the capture problem dominates. The returns on scarce human capacities are real but concentrated among those with the market power to claim them. A CEO's judgment commands millions while a nurse's equally stakes-based decisions about patient care earn wages that barely cover rent. This is not because the CEO's judgment is scarcer but because institutional structures determine who can monetize scarcity. The material substrate argument also carries considerable weight: AI's dependence on human-maintained infrastructure creates vast demand for human judgment and care that goes systematically uncompensated.
The synthetic frame that holds both views is a nested returns structure. The return on being human operates simultaneously at three levels: the capacity level (what humans can do that machines cannot), the capture level (who can monetize these capacities), and the substrate level (whose humanity keeps the entire system operational). Becker's framework correctly identifies the capacities but incompletely maps the returns. The contrarian view correctly identifies the extraction but incompletely theorizes the scarcity. Together they reveal that the return on being human is both higher than ever and more unevenly distributed than ever — a paradox that defines the political economy of the AI transition.