By Edo Segal
The metric that should terrify you is not the one going up.
Twenty-fold productivity. Two-point-five billion in run-rate revenue. A hundred million users in two months. Every number the AI revolution produces points skyward, and every conference I attend celebrates the climb. I have celebrated it myself. I stood in a room in Trivandrum and watched each of my engineers become a team, and the exhilaration was real, physical, the kind that makes you want to call someone.
But there is a number nobody is tracking. A gap nobody is measuring. The distance between what the technology makes possible and what people are actually free to do with it.
That gap has a name. Amartya Sen called it the conversion problem. He spent sixty years demonstrating that the most dangerous illusion in economics is the belief that having a resource is the same as benefiting from it. Bengal had enough food in 1943. People starved anyway, because the systems that converted food into nourishment had collapsed. The resource was abundant. The freedom was absent.
I read Sen and felt the ground shift under metrics I had trusted my entire career. Output is not freedom. Access is not capability. A developer in Lagos can subscribe to Claude Code for a hundred dollars a month. That fact appears in every democratization argument I have made. Sen's framework asks the question I was not asking: Can she actually convert that access into a life she has reason to value? Does she have the electricity, the connectivity, the financial infrastructure, the educational preparation, the institutional support? Each missing conversion factor is a break in a chain that the subscription price alone cannot mend.
This is not a book about development economics. It is a book about seeing what our dashboards hide. Sen built the most rigorous instrument available for evaluating whether powerful transformations actually serve human lives — not in aggregate, not on average, but in the specific, irreducible particularity of each person's freedom. The AI revolution is the most powerful transformation most of us will live through. And we are measuring it with instruments calibrated to detect power, not freedom.
Every book in this series hands you a different lens. Csikszentmihalyi showed us flow. Han warned us about smoothness. The Luddites taught us the cost of refusal. Sen teaches something harder to hear: that the revolution can succeed by every metric we are tracking and still fail the people it claims to serve.
The question is not whether AI is powerful. The question is whether the institutions exist to convert that power into genuine human freedom. Sen's framework does not answer the question. It makes the question impossible to avoid.
That is why you need this lens. Not because it is comfortable. Because it is precise.
— Edo Segal × Opus 4.6
1933–present
Amartya Sen (1933–present) is an Indian economist and philosopher widely regarded as one of the most influential thinkers of the twentieth and twenty-first centuries. Born in Santiniketan, Bengal, he witnessed the Bengal famine of 1943 as a child — an experience that shaped his life's work on poverty, inequality, and human welfare. Educated at Presidency College in Kolkata and Trinity College, Cambridge, Sen has held professorships at Harvard, Oxford, the London School of Economics, and Cambridge, where he served as Master of Trinity College from 1998 to 2004. He was awarded the Nobel Memorial Prize in Economic Sciences in 1998 for his contributions to welfare economics and social choice theory. His major works include Collective Choice and Social Welfare (1970), Poverty and Famines (1981), Development as Freedom (1999), and The Idea of Justice (2009). Sen's most enduring contribution is the capability approach, developed in dialogue with philosopher Martha Nussbaum, which redefines human development not as economic growth but as the expansion of substantive freedoms — the real opportunities people have to live lives they have reason to value. His work directly informed the creation of the United Nations Human Development Index and continues to shape global policy debates on inequality, education, healthcare, and democratic governance.
In the winter of 2025, when artificial intelligence crossed the threshold from tool to collaborator, when a Google engineer watched Claude Code reproduce in one hour what her team had spent a year building, when millions of knowledge workers confronted the most significant restructuring of human labor since electrification — the one thinker whose intellectual framework was most precisely suited to evaluating what was happening said nothing.
Amartya Sen, ninety-two years old, Nobel laureate, architect of the capability approach that had reshaped how the world measures human welfare, the philosopher-economist who had spent six decades arguing that the standard metrics of progress — GDP, income, output — systematically fail to capture what actually matters about human lives, did not publish a paper on artificial intelligence. Did not give a lecture. Did not offer the evaluative framework that dozens of scholars were already applying on his behalf, scrambling to connect his ideas to a technological revolution he had not publicly addressed.
The silence is itself diagnostic. It reveals something about the velocity of the current transformation that no productivity metric can capture. The thinker whose life's work is most needed in this conversation has not entered it — not, one suspects, because the questions are uninteresting to him, but because the technology has moved faster than any individual intellectual response can track. By the time a careful philosopher formulates a position on what large language models mean for human capability, the models have advanced another generation. The river does not wait for the cartographer.
But the map exists. Sen drew it decades ago, for a different landscape, and its contours align with the present terrain so precisely that the absence of the mapmaker matters less than the presence of the map. The capability approach — the intellectual framework Sen developed across four decades of work, from his early studies of famine and poverty through the formal architecture of Development as Freedom and The Idea of Justice — provides the most rigorous available instrument for answering the question that the AI revolution has placed at the center of public life: Is this technology actually making people's lives better?
The question sounds simple. It is not. Its difficulty lies in what the word "better" conceals. Better by what measure? For whom? In what evaluative space? The dominant metrics of the technology industry — parameters, benchmarks, revenue, adoption rates, lines of code generated, productivity multipliers — measure the power of the technology without measuring its impact on the lives of the people who use it. ChatGPT reached a hundred million users in two months. Claude Code's run-rate revenue crossed $2.5 billion by February 2026. These are measurements of appetite. They tell us nothing about nourishment.
Sen spent his career making precisely this distinction. His most famous contribution to economics was the demonstration that conventional measures of welfare — gross domestic product per capita, average income, aggregate utility — can present a picture of national prosperity that is entirely consistent with severe deprivation for large segments of the population. A country can have rising GDP while millions of its citizens lack access to healthcare, education, political participation, or the basic conditions for a dignified life. The aggregate measure conceals the distribution. The average obscures the particular. The number hides the person.
The same failure of measurement is now occurring in the evaluation of artificial intelligence, and it is occurring at a scale and speed that makes the GDP problem look quaint by comparison. When Edo Segal describes a twenty-fold productivity multiplier achieved by his engineering team in Trivandrum — each of twenty engineers suddenly capable of producing what an entire team had produced before — the number is extraordinary. It is also, from a Senian perspective, radically incomplete. The productivity multiplier measures output. It does not measure what the engineers are now free to do with their expanded capability. It does not measure whether the expansion translates into lives they have reason to value. It does not ask whether the conversion factors — the infrastructure, the institutional support, the cultural conditions, the personal circumstances — that determine whether increased productivity becomes increased human freedom are present or absent.
An engineer who produces twenty times more code but works twenty times more hours has increased output without increasing well-being. An engineer who produces twenty times more code but has lost the developmental friction through which expertise is built has increased output while potentially decreasing capability. An engineer who produces twenty times more code but whose expanded productivity is captured entirely by the employer, with no corresponding expansion of the engineer's own freedom to choose how to work, what to work on, or what kind of life to build around the work, has increased output while the distribution of the gains remains unchanged or worsens.
These are not hypothetical concerns. The Berkeley study that The Orange Pill documents found exactly this pattern: workers who adopted AI tools worked faster, took on more tasks, expanded into areas that had previously been someone else's domain — and burned out. The boundaries between work and non-work dissolved. Task seepage colonized previously protected spaces. The productivity metrics improved. The workers' experience of their own lives did not improve correspondingly. The output expanded. The freedom did not.
Sen's framework gives this observation its proper name: a failure of conversion. The capability approach distinguishes between means and ends with a precision that the technology industry has never adopted. Income is a means. GDP is a means. Productivity is a means. Access to tools is a means. The end is what Sen calls substantive freedom — the real opportunity to live a life one has reason to value. The means are valuable only insofar as they convert into the end, and the conversion is never automatic. It depends on what Sen calls conversion factors: the personal, social, and environmental conditions that determine whether a given resource actually translates into a capability the person can exercise.
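Sen gave this chain a compact formal expression, and it is worth seeing in miniature. The sketch below follows the notation he introduced in Commodities and Capabilities (1985), as the capability literature standardly summarizes it; it is a schematic of that formalization, not a quotation.

```latex
b_i = f_i\bigl(c(x_i)\bigr), \qquad f_i \in F_i
```

Here x_i is person i's bundle of resources, c(·) maps those resources into their usable characteristics, and f_i is a utilization function drawn from F_i, the set of such functions actually available to that person. The conversion factors live in F_i: each missing condition, personal, social, or environmental, shrinks the set, so the same resource bundle x_i yields a poorer vector of achievable functionings b_i.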
A computer is a means. For the engineer in San Francisco with reliable electricity, broadband internet, institutional support, financial security, and cultural recognition of her work, the computer converts readily into expanded capability. For the developer in Lagos — invoked repeatedly in AI discourse as proof of democratization — the same computer encounters a thicket of conversion failures: unreliable power grids, limited bandwidth, economic precarity that demands fourteen-hour days of subsistence labor, absence of the financial infrastructure needed to monetize what she builds, absence of the legal infrastructure needed to protect what she creates. The tool is identical. The capability expansion is radically different. And the difference is not a footnote to the democratization story. It is the story, or at least the half of it that the technology industry consistently omits.
Sen arrived at this framework through an experience that has an uncomfortable structural parallel to the present moment. As a child in Bengal in 1943, he witnessed the famine that killed between two and three million people. The defining feature of the Bengal famine, which Sen would later demonstrate with devastating empirical rigor, was that it was not caused by a shortage of food. Bengal had enough food. The famine was caused by a failure of entitlement — a collapse in the economic and institutional structures that determined who had access to the food that existed. People starved not because food was scarce but because their entitlements to food had been destroyed by inflation, speculation, hoarding, and the failure of government to intervene.
The parallel is structural, not moral — no one is starving because of AI, and the comparison should not be pressed into false equivalence. But the underlying logic is identical: the question is not whether the resource exists but whether people can access it, and access is determined not by the resource itself but by the institutional, economic, and social structures that mediate between the resource and the person. AI capability is not scarce. What is scarce is the constellation of conditions under which AI capability translates into expanded human freedom. The technology is abundant. The conversion factors are not.
This is why the most important question about artificial intelligence is not a technical question. It is not about parameters or benchmarks or context windows. It is not about whether the model can pass a bar exam or write a symphony or generate code that compiles. These are measurements of the technology's power. The important question is about the technology's impact on the capability sets of the people whose lives it touches — and the answer to that question depends almost entirely on variables that the technology itself does not control.
Ten countries may capture seventy to seventy-five percent of AI's projected $15.7 trillion in economic value by 2030. The number comes from PwC's AI economic impact analysis and has been cited in geopolitical analyses of sovereign AI development across Asia and beyond. Whether the number proves precisely accurate is less important than what it reveals about distribution: the gains from AI, like the gains from every previous general-purpose technology, are concentrating before they distribute, and the concentration follows the contours of existing institutional infrastructure. The countries with the most AI capability are the countries that already had the most technological infrastructure, the most educational capital, the most financial resources, the most institutional capacity to channel new technology toward productive use. The floor may be rising, as the optimists argue. But the ceiling is rising faster.
Sen would insist on asking: rising for whom? The capability approach does not deny that aggregate measures can improve. It denies that aggregate improvement is sufficient evidence of human development. Development, in Sen's framework, is the expansion of real freedoms — the substantive opportunities people have to live lives they have reason to value. An expansion of aggregate AI capability that is captured primarily by a small number of companies in a small number of countries, while billions of people lack the conversion factors necessary to translate AI access into expanded freedom, is not development in any sense that Sen's framework would recognize. It is growth. Growth and development are not the same thing. Sen spent a career establishing the distinction. The AI revolution is testing whether anyone learned it.
The silence of the most relevant thinker is not an absence. It is an invitation. The framework exists. The tools of analysis are available. The questions are precisely formulated. What remains is the application — the work of bringing Sen's evaluative apparatus to bear on a technological transformation that his framework anticipated in structure if not in specifics. The work of asking not whether AI is powerful, which it is, or whether AI is productive, which it is, or whether AI is profitable, which it is, but whether AI is expanding the substantive freedoms of the people whose lives it touches, and for whom, and at what cost, and with what distribution, and whether the institutional infrastructure exists to ensure that the expansion reaches the people who need it most.
These questions cannot be answered by the technology industry alone, because the technology industry's metrics are not designed to detect the phenomena the questions describe. The metrics detect power, speed, scale, revenue. They do not detect capability in Sen's sense — the real freedom to choose a life one has reason to value. The gap between what the metrics measure and what matters is the gap that Sen's framework was built to close.
The mapmaker is silent. The map speaks.
The most consequential decision any society makes about a new technology is not whether to adopt it. That decision is usually made by the technology's momentum, by the cascading logic of competitive pressure and consumer demand that makes adoption functionally inevitable once a technology crosses a threshold of usefulness. The most consequential decision is how to evaluate the technology's impact — what counts as success, what counts as failure, what counts as progress, and what counts as cost. The choice of evaluative framework determines what a society sees and what it misses, what it optimizes for and what it neglects, what it celebrates and what it allows to erode unnoticed.
The technology industry has chosen its evaluative framework, and the framework is output. Parameters. Benchmarks. Lines of code generated. Adoption rates. Revenue. Productivity multipliers. Time saved. Cost reduced. These metrics are not meaningless. They measure real phenomena. They capture genuine changes in what machines can do and how fast they can do it. But they are, from the perspective of human welfare, catastrophically incomplete — in the same way and for the same reasons that GDP per capita is catastrophically incomplete as a measure of human development.
Sen demonstrated the inadequacy of income-based measures through what he called the evaluative space problem. The evaluative space is the dimension in which an assessment is conducted — the units, the currency, the metric by which progress or regress is measured. If the evaluative space is income, then a society in which average income rises has made progress, even if the rise is concentrated among the wealthy while the poor become poorer. If the evaluative space is utility, then a society in which aggregate happiness increases has made progress, even if the increase is achieved by conditioning the disadvantaged to accept their deprivation — what Sen called the problem of adaptive preferences, in which people who have never had access to education or healthcare or political participation may report satisfaction with their lives, not because their lives are satisfactory but because their expectations have been shaped by deprivation.
If the evaluative space is output, then a software industry in which twenty-fold productivity multipliers are documented has made progress, even if the multiplier is achieved through the erosion of the developmental processes through which capability is built, through the colonization of leisure time by AI-accelerated work, through the concentration of gains in the hands of tool-providers and platform-owners while the workers whose productivity has multiplied see their autonomy contract rather than expand.
The evaluative space determines what is visible and what is invisible. Choose the wrong space, and you will optimize for the wrong thing. You will celebrate the wrong victories. You will miss the most important costs. And because what you measure is what you manage, you will build institutions that perpetuate the distortion, that reward the metric rather than the thing the metric was supposed to approximate.
Sen proposed a different evaluative space: capabilities. Not what people produce, not what they earn, not what they consume, but what they are substantively free to do and to be. The shift in evaluative space is not merely semantic. It is structural. It changes what counts as evidence, what counts as progress, and what counts as an adequate response to technological change.
Consider two engineers, both working in the same organization, both using Claude Code, both experiencing the twenty-fold productivity multiplier that The Orange Pill documents. In the output evaluative space, they are identical. Both produce twenty times more code. Both ship features faster. Both contribute to the organization's bottom line.
In the capability evaluative space, they may be radically different. The first engineer uses the freed cognitive bandwidth to move into product strategy, to develop architectural judgment, to make decisions about what should be built rather than merely executing the building. The tool has expanded her capability set — the range of functionings she can choose to exercise. She has gained not just productivity but agency, the substantive freedom to operate at a higher cognitive level, to exercise judgment rather than mere execution.
The second engineer uses the same tool to take on more tasks, to fill every freed minute with additional work, to respond to the organizational expectation that productivity gains should be captured as additional output rather than redirected toward higher-order thinking. The tool has increased her output without increasing her capability set. She produces more but chooses less. Her freedom to determine the character of her work has not expanded. It may have contracted, because the tool's efficiency has raised the baseline expectation of what she should produce, and the gap between what she can produce and what she is expected to produce has not widened — it has simply shifted upward, with the exhaustion compounding at a higher altitude.
Same tool. Same productivity metric. Radically different human outcomes. The output evaluative space cannot distinguish between these two cases. The capability evaluative space can.
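The blindness is mechanical, and a toy model makes it visible. The sketch below is mine, not any real instrumentation, and every field name is hypothetical; its only purpose is to show that an evaluator confined to the output space literally cannot compute a difference between the two engineers, while even a crude capability-flavored evaluator can.

```python
from dataclasses import dataclass

@dataclass
class Engineer:
    output_multiplier: float   # the only dimension the output space records
    hours_multiplier: float    # invisible to the output space
    chooses_own_work: bool     # substantive freedom over what to build
    higher_order_scope: bool   # moved into judgment, strategy, architecture

def output_space(e: Engineer) -> float:
    # The industry's evaluative space: output, full stop.
    return e.output_multiplier

def capability_space(e: Engineer) -> float:
    # Output gains count only insofar as they convert into expanded
    # freedom rather than expanded hours.
    conversion = (e.chooses_own_work + e.higher_order_scope) / 2
    return e.output_multiplier * conversion / e.hours_multiplier

first = Engineer(output_multiplier=20, hours_multiplier=1.0,
                 chooses_own_work=True, higher_order_scope=True)
second = Engineer(output_multiplier=20, hours_multiplier=1.5,
                  chooses_own_work=False, higher_order_scope=False)

print(output_space(first), output_space(second))          # 20 20
print(capability_space(first), capability_space(second))  # 20.0 0.0
```

Two identical numbers in the first evaluative space; a twenty-to-zero gap in the second. The model is a cartoon, but the structural point survives any refinement of it.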
The distinction between functionings and capabilities is the analytical engine that makes this evaluation possible. A functioning, in Sen's framework, is a state of being or doing — being well-nourished, being educated, participating in community life, engaging in creative work, exercising professional judgment. A capability is the real freedom to achieve a functioning — the substantive opportunity to be well-nourished, to be educated, to participate, to create, to judge. The distinction matters because a person can have a capability without exercising it, and the freedom to choose not to exercise a capability is itself valuable. The person who could engage in creative work but chooses instead to rest has a capability set that includes creative work. The person who cannot engage in creative work because the institutional conditions do not support it has a diminished capability set, regardless of whether she would have chosen to create.
Applied to AI, this distinction produces evaluations that the output metrics systematically miss. The question is not whether a person uses AI to produce creative work — which is a question about functionings — but whether the person has the substantive freedom to engage in creative work that she has reason to value, with or without AI. The capability question is prior to the functioning question. It asks about the conditions of choice, not the content of choice.
A student who uses AI to generate an essay has achieved a functioning — the essay exists. But has the student's capability set expanded or contracted? If the AI has removed the developmental friction through which the capability of writing is built — the struggle with language, the confrontation with one's own confusion, the slow accumulation of expressive skill through practice and failure — then the functioning has been achieved at the cost of the capability. The student can produce an essay. The student cannot write. The output metric records success. The capability metric records a loss that the output metric is structurally incapable of detecting.
This is not a marginal concern. It is the central evaluative challenge of the AI era, and it is the challenge that Sen's framework was built to address — not because Sen anticipated AI, but because the framework was designed to detect precisely this kind of failure: situations in which aggregate improvements in measurable outcomes coexist with, and sometimes mask, genuine losses in human capability.
The Berkeley study published in Harvard Business Review in February 2026 — research that tracked AI adoption in a two-hundred-person technology company over eight months — documented this coexistence with empirical specificity. Productivity increased. Task scope widened. Boundaries between roles blurred as people expanded into adjacent domains. By the output metrics, the transformation was unambiguously positive. But the researchers also documented task seepage, the colonization of rest periods by AI-facilitated work. They documented the intensification of multitasking. They documented accumulating fatigue and declining satisfaction. They proposed what they called "AI Practice" — structured pauses, sequenced workflows, protected reflection time — as an organizational intervention.
In Senian terms, what the Berkeley researchers discovered was a conversion failure. The productivity gain — the additional output the AI tools made possible — was not converting into expanded capabilities for the workers. It was converting into additional obligations, additional expectations, additional work that filled every space the efficiency had created. The tool expanded what the workers could do. The institutional context ensured that the expansion was captured as additional doing rather than additional freedom. The means increased. The conversion into ends did not follow.
The concept of conversion factors is where Sen's framework achieves its greatest analytical precision, and where the AI discourse is most impoverished. Conversion factors are the conditions — personal, social, environmental — that determine whether a given resource translates into a capability. A bicycle is a resource. For a person who can ride, who has roads to ride on, whose physical condition permits riding, and whose social context does not prohibit riding, the bicycle converts into the capability of mobility. For a person who cannot ride, or who has no roads, or whose physical condition prevents riding, or whose social context prohibits it, the bicycle is a resource without a capability. The resource is identical. The conversion is different. And the difference is determined not by the resource but by the surrounding conditions.
AI tools are resources. For the engineer in San Francisco — with reliable infrastructure, institutional support, financial security, a culture that recognizes and rewards technical contribution, and an employer whose organizational structure channels productivity gains toward higher-order work — the AI tool converts readily into expanded capability. For the engineer in a different institutional context — where the organizational structure captures productivity gains as additional output expectations, where the culture rewards visible busyness rather than reflective judgment, where financial precarity makes it impossible to risk the experimentation through which new capabilities are developed — the same tool converts into intensified labor without expanded freedom.
The conversion factors that determine whether AI access translates into capability expansion include, at minimum: reliable infrastructure, meaning electricity, connectivity, and hardware sufficient to run the tools. They include educational preparation, meaning not just technical training but the broader cognitive preparation — critical thinking, judgment, the ability to formulate good questions — that determines whether a person can direct AI tools productively. They include financial security sufficient to permit experimentation and risk-taking. They include institutional structures that channel productivity gains toward human development rather than pure output extraction. They include cultural conditions that recognize and reward the higher-order capabilities — judgment, vision, ethical reasoning — that AI elevates in importance. They include political conditions that give people voice in determining how the technology is deployed in their communities and workplaces.
Each of these conversion factors is unevenly distributed. Each follows the contours of existing inequality. Each determines whether the AI revolution expands freedom or merely expands output. And each is invisible to the metrics that the technology industry uses to evaluate its own impact.
The evaluation of AI in the output space produces a story of unambiguous progress. More capability, more productivity, more access, more speed. The evaluation of AI in the capability space produces a more complex story — a story of genuine gains for some, genuine losses for others, and a vast territory of ambiguity where the outcome depends on conversion factors that the technology itself does not control and the technology industry does not measure.
Sen's framework does not tell us what the answer is. It tells us what the question is. And the question — whether AI is expanding the substantive freedoms of the people whose lives it touches — is the question that the current discourse is not asking with sufficient rigor, not because the question is obscure but because the evaluative framework needed to ask it has not been adopted by the people with the most power to act on the answer.
The metrics determine the management. The management determines the outcome. If the technology industry continues to evaluate AI in the output space, it will continue to optimize for output — producing more, faster, at greater scale — without attending to the capability question that determines whether the output translates into human freedom. If the policy community continues to evaluate AI in the access space — treating tool availability as equivalent to capability expansion — it will continue to celebrate democratization without examining whether the democratic promise is being fulfilled.
What is needed is a migration of evaluative framework — from output to capability, from means to ends, from what the technology can do to what people are genuinely free to do with it. The framework exists. The analytical tools are available. The application is the work that remains.
The Bengal famine of 1943 killed between two and three million people. Amartya Sen was nine years old, living in Dhaka, when it began. He watched laborers and rural workers appear at the doorstep of his family's home, skeletal and pleading. He watched people die in the streets. The experience marked him in the way that certain childhood encounters with unnecessary suffering mark a person permanently — not with trauma alone but with the specific, burning need to understand why.
The conventional explanation was scarcity. Bengal did not have enough food. The war had disrupted supply chains. The rice crop had failed. The mouths outnumbered the grain. The explanation was intuitive, widely accepted, and wrong.
Sen demonstrated, decades later and with meticulous empirical analysis, that Bengal's food supply in 1943 was not significantly lower than in previous, non-famine years. The food existed. What had collapsed was not supply but entitlement — the economic and institutional mechanisms by which people gained access to the food that was available. Wartime inflation had destroyed the purchasing power of rural laborers. Speculative hoarding had removed rice from the market. The colonial government had prioritized military supply chains over civilian distribution. The free press that might have forced governmental response had been suppressed by wartime censorship.
People starved surrounded by food. Not because the resource was absent, but because the structures that converted the resource into nourishment had failed.
This insight — that the critical variable is not the existence of a resource but the institutional and social machinery that translates the resource into human welfare — became the foundation of Sen's entitlement approach and, subsequently, of the broader capability framework. It is an insight of such analytical power that its application extends far beyond the economics of famine. It extends, with uncomfortable precision, to the economics of artificial intelligence.
The parallel must be drawn carefully, because the moral stakes are different in kind. No one is dying of AI deprivation. The comparison is structural, not humanitarian. But the structure is identical: a resource of extraordinary power exists, its existence is celebrated as proof of progress, and the question of whether the resource is actually reaching the people who need it most — and reaching them in a form they can use — is systematically obscured by the celebration.
AI capability in 2026 is not scarce. The frontier models are available through subscription. Claude Code costs one hundred dollars per month. The knowledge embedded in these systems — the ability to write software, to analyze data, to draft legal documents, to generate medical differential diagnoses, to produce creative work — represents a concentration of cognitive capability that would have been unimaginable a decade ago. The resource exists. It is abundant. It is, in principle, accessible to anyone with an internet connection and a credit card.
The technology industry points to this accessibility as evidence of democratization. And the pointing is not dishonest. The floor has risen. A developer in Lagos, a student in Dhaka, a small business owner in Nairobi can now access tools that were previously available only to engineers at the world's most well-resourced technology companies. This is a genuine expansion of access, and it would be dishonest to deny it.
But access is not entitlement. And entitlement is not capability. The distinction between access, entitlement, and capability is the analytical contribution that Sen's framework makes to the AI discourse, and it is the distinction that the discourse most urgently needs.
Access means the resource exists and can, in principle, be reached. The developer in Lagos has access to Claude Code. The subscription is available. The tool functions. Access is the thinnest layer of the analysis — necessary but nowhere near sufficient.
Entitlement, in Sen's framework, means the person has a legitimate claim on the resource through the existing economic and institutional structures. The developer in Lagos has access, but does she have entitlement? Can she afford the subscription? Does she have the financial infrastructure to accept payment for what she builds? Does she have legal protection for her intellectual property? Does she have the educational preparation to direct the tool productively? Each of these questions identifies an entitlement that may or may not be present, and each absent entitlement represents a break in the chain between the resource and the person.
Capability means the person is substantively free to use the resource to achieve functionings she has reason to value. This is the fullest and most demanding layer of the analysis. The developer in Lagos may have access and even entitlement — she can reach the tool, she can afford it, she has the legal standing to use it. But can she convert the tool into a sustainable livelihood? Into a product that reaches a market? Into a career that gives her the freedom to choose what she works on, how she works, and what kind of life she builds around her work? The capability question goes beyond access and entitlement to ask about the full set of conditions — personal, social, institutional, environmental — that determine whether the resource translates into a life the person has reason to value.
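Because the three layers nest, each can be written as a stricter predicate over the one before it. The sketch below is a deliberately crude illustration, with every field a hypothetical stand-in rather than a measured quantity; its one structural claim is that a single false condition anywhere in the chain fails everything above it.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    tool_reachable: bool        # access: the subscription exists and functions
    affordable: bool            # entitlement: a legitimate economic claim
    payment_rails: bool         # entitlement: infrastructure to monetize
    legal_protection: bool      # entitlement: IP and contract enforcement
    reliable_power: bool        # conversion factor
    stable_connectivity: bool   # conversion factor
    discretionary_time: bool    # conversion factor
    market_access: bool         # conversion factor

def has_access(s: Situation) -> bool:
    return s.tool_reachable

def has_entitlement(s: Situation) -> bool:
    return has_access(s) and s.affordable and s.payment_rails and s.legal_protection

def has_capability(s: Situation) -> bool:
    # The fullest and most demanding layer: every conversion factor must hold.
    return has_entitlement(s) and all([s.reliable_power, s.stable_connectivity,
                                       s.discretionary_time, s.market_access])

# Full access, partial entitlement, broken conversion chain:
developer = Situation(tool_reachable=True, affordable=True, payment_rails=False,
                      legal_protection=True, reliable_power=False,
                      stable_connectivity=True, discretionary_time=False,
                      market_access=False)

print(has_access(developer), has_entitlement(developer), has_capability(developer))
# True False False
```

Access true, entitlement false, capability false: the democratization story, told honestly, in three booleans.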
The famine was not about food. The AI gap is not about tools.
The structural parallel operates at multiple scales. At the individual scale, the conversion factors that determine whether AI access becomes capability expansion include: reliable electricity, to which roughly 770 million people worldwide still lack consistent access. Broadband internet, which roughly three billion people lack. Hardware sufficient to run the tools, whose cost relative to local wages varies by orders of magnitude across geographies. Educational preparation that includes not just technical literacy but the broader cognitive capabilities — critical thinking, problem formulation, evaluative judgment — that determine whether a person can direct AI productively rather than merely consume its output. Financial infrastructure that allows the person to participate in the digital economy as a producer rather than merely a user. Legal infrastructure that protects intellectual property, enforces contracts, and provides recourse when things go wrong. Time — the discretionary hours free from subsistence labor that allow experimentation, learning, and the development of new capabilities.
Each of these conversion factors is unevenly distributed. Each follows the contours of existing inequality. Each represents a potential point of entitlement failure — a break in the chain between the abundant resource and the person who needs it.
At the national scale, the parallel is even starker. The projection that ten countries may capture seventy to seventy-five percent of AI's economic value mirrors the distributional pattern of every previous general-purpose technology, but with a concentration mechanism that is new in degree if not in kind. The cost of training frontier AI models is enormous and growing. The compute required to run them is controlled by a small number of cloud providers. The data required to train them is disproportionately generated in and controlled by wealthy countries. The talent required to build them is concentrated in a small number of institutions, most of which are located in the United States and China.
This concentration creates what Sen would recognize as a structural entitlement failure at the civilizational level. The resource exists. The resource is abundant. But the institutional structures that determine who can access the resource, and under what conditions, and with what degree of control over the terms of access, are reproducing the patterns of inequality that Sen spent his career documenting in the context of development economics.
The colonial government of Bengal in 1943 did not intend to cause a famine. It intended to win a war. The famine was a side effect of institutional priorities that placed military logistics above civilian welfare, that allocated scarce transportation to military supply chains rather than food distribution, that suppressed the free press that would have forced accountability. The institutional structure produced the entitlement failure, and the entitlement failure produced the famine, and at no point did any individual decision-maker decide that millions of people should die.
The technology companies building frontier AI models do not intend to create a capability famine. They intend to build powerful technology. But the institutional structure of the AI economy — the concentration of compute, the concentration of talent, the concentration of data, the concentration of the financial resources required to participate at the frontier — is producing an entitlement structure that will determine, with the same implacable logic that determined who ate in Bengal in 1943, who benefits from AI and who does not.
Sen's most important finding about the Bengal famine was that the famine could have been prevented. Not by increasing the food supply, which was adequate, but by restructuring the entitlements — by intervening in the economic and institutional mechanisms that determined who had access to the food that existed. Price controls. Distribution programs. The restoration of press freedom that would have forced governmental accountability. The tools of prevention were institutional, not agricultural.
The same logic applies to the AI transition. The capability expansion that AI makes possible can be broadly distributed — but only if the institutional infrastructure is built to distribute it. The infrastructure is not the technology itself. The technology is the food. The infrastructure is the distribution system — the educational programs, the financial mechanisms, the legal frameworks, the connectivity investments, the labor protections, the democratic institutions that determine whether the technology's benefits reach the people who need them most or concentrate among the people who need them least.
Sen identified a specific institutional mechanism that was decisive in preventing famines in democratic societies: a free press and democratic accountability. No functioning democracy with a free press, he argued, has ever experienced a famine, because the press creates the information flow and the democracy creates the accountability that forces governmental response. The mechanism is not automatic. It requires specific institutional conditions. But when those conditions are present, the entitlement failures that produce famine are caught and corrected before they become catastrophic.
The analog in the AI context is transparency and democratic participation in the governance of AI systems. The question of who benefits from AI and who bears its costs is, at its core, a question about institutional design — about who has voice in determining how the technology is deployed, who has access to information about its effects, who has the power to demand accountability from the organizations that control it. These are not technical questions. They are political questions, in the deepest and most important sense of the word — questions about the distribution of power and the institutional mechanisms that ensure power is exercised in the interest of the many rather than the few.
The famine was not about food. The food was there. The distribution system was broken. The AI revolution is producing the most powerful cognitive tools in human history. The tools are there. The question — Sen's question, the question he has been asking for sixty years — is whether the distribution system will be built. Whether the entitlements will be structured. Whether the conversion factors will be supplied. Whether the capability expansion that the technology makes possible will reach the people who need it most — or whether, as in Bengal in 1943, the resource will exist in abundance while millions lack the institutional conditions necessary to access it.
The answer is not determined by the technology. It is determined by the institutions. And the institutions are built by people — by builders, by policymakers, by educators, by citizens who demand accountability. The work of building the distribution system is the work that matters most in this moment. Not because the technology is unimportant, but because the technology without the distribution system is food without entitlements — abundant, visible, and unreachable by the people who need it most.
A distinction that appears merely academic in the abstract becomes a matter of extraordinary practical consequence when applied to the AI revolution. The distinction is between formal freedom and substantive freedom — between the absence of prohibition and the presence of genuine opportunity — and it is the distinction on which Sen's entire framework turns.
Formal freedom is the freedom from external constraint. No one prevents you from doing the thing. No law prohibits it. No guard blocks the door. Formal freedom is what classical liberal theory celebrates: the removal of barriers, the opening of markets, the lifting of restrictions. It is valuable. It is also, Sen argues, radically insufficient as an account of what it means to be free.
Substantive freedom is the real opportunity to achieve something. Not merely the absence of prohibition but the presence of the conditions — material, educational, institutional, social — that make achievement genuinely possible. The person who is formally free to attend university but cannot afford tuition has formal freedom without substantive freedom. The person who is formally free to start a business but lacks access to credit, to markets, to legal protection, to the infrastructure that makes commerce possible, has formal freedom without substantive freedom. The formal freedom is real. It is also, by itself, empty — a door that is unlocked but opens onto a cliff.
The technology industry's celebration of AI democratization is, almost entirely, a celebration of formal freedom. No one prevents the developer in Lagos from using Claude Code. No one prevents the student in Dhaka from accessing ChatGPT. No one prevents the small business owner in rural India from deploying AI tools to optimize her operations. The tools are available. The subscription is open. The door is unlocked.
The substantive freedom question is different, and it is the question that the celebration consistently fails to ask. Can the developer in Lagos actually build a viable product with Claude Code? Not in principle — in practice. Does she have the electricity to keep her computer running through a full development session? The connectivity to maintain a stable connection to the AI service? The financial infrastructure to accept payment from customers in other countries? The legal infrastructure to protect what she builds? The educational preparation to formulate the questions that direct the tool toward productive output? The market access to reach customers who will pay for what she creates? The time — the hours free from subsistence labor — to experiment, to learn, to iterate, to develop the judgment that separates a prototype from a product?
Each of these questions identifies a dimension of substantive freedom that formal access does not guarantee. And each missing dimension represents a gap between the promise of democratization and its fulfillment — a gap that is invisible in the evaluative framework the technology industry has adopted but glaring in the evaluative framework that Sen's work provides.
The distinction between formal and substantive freedom produces different evaluations of the same phenomena. Consider the migration of scarcity from execution to judgment that The Orange Pill describes as the central economic consequence of AI. When execution becomes abundant — when anyone can build anything by describing it in natural language — the scarce resource migrates from the capacity to build to the judgment about what to build. The premium shifts from the implementer to the decision-maker, from the coder to the creative director, from the person who can translate intention into artifact to the person who can determine which intentions deserve translation.
In the formal freedom framework, this migration is an unambiguous expansion of opportunity. The barriers to building have been removed. Anyone can now participate. The playing field has been leveled. The door is open.
In the substantive freedom framework, the evaluation is more complex. The migration of scarcity to judgment raises a prior question: who has had the opportunity to develop judgment? Judgment is not innate. It is cultivated through experience, education, mentorship, exposure to diverse perspectives, the luxury of time for reflection, and the institutional structures that reward its exercise. The person who has spent a career in a well-resourced organization, surrounded by experienced colleagues, with access to diverse markets and challenging problems, has had rich opportunities to develop judgment. The person who has spent a career executing tasks in a resource-constrained environment, without mentorship, without exposure to strategic thinking, without the institutional support that cultivates higher-order capabilities, has had fewer opportunities.
When the economy suddenly revalues judgment over execution, the people who have had the richest opportunities to develop judgment benefit disproportionately. The migration of scarcity does not create a level playing field. It creates a new hierarchy that mirrors, in different dimensions, the old one. The old hierarchy rewarded technical execution. The new hierarchy rewards strategic judgment. The people at the top of both hierarchies are disproportionately the people who had the most institutional support, the most educational preparation, the most cultural capital — in short, the most conversion factors — from the beginning.
This is not an argument against the migration. The migration is real, it is proceeding, and it cannot be reversed by argument or by policy. It is an argument for recognizing that the migration, by itself, does not constitute democratization. Democratization requires not just the opening of doors but the construction of the pathways — educational, institutional, financial, cultural — that enable people to walk through them. Formal freedom opens the door. Substantive freedom builds the path.
Sen's framework identifies five categories of instrumental freedom — freedoms that serve as means to the expansion of overall human capability. They are: political freedoms, economic facilities, social opportunities, transparency guarantees, and protective security. Each category identifies a dimension of institutional infrastructure that is necessary for formal freedoms to convert into substantive ones. And each category illuminates a specific dimension of the AI transition that the formal-freedom framework misses.
Political freedoms include the ability to participate in decisions about how AI is deployed in one's community, workplace, and society. The question of whether AI systems are used to surveil, to sort, to evaluate, to recommend — and on what terms, and with what accountability, and with what recourse for those affected — is a political question. It requires political freedom to address: the freedom to know what systems are being deployed, to challenge their effects, to demand accountability from the people who control them. Without political freedom, AI deployment is imposed rather than negotiated, and the people most affected by the deployment have no voice in shaping it.
Economic facilities include not just income but the institutional conditions that enable economic participation: access to credit, to markets, to financial infrastructure, to the legal mechanisms that enforce contracts and protect property. The developer who builds a product with AI tools but cannot access the financial infrastructure to monetize it has formal freedom to build without substantive freedom to benefit from building. The economic facilities that convert AI access into economic capability are unevenly distributed across and within countries, and the unevenness determines, more than the technology itself, who benefits from the AI revolution.
Social opportunities include education and healthcare — the capabilities that determine whether a person can participate productively in economic and social life. Education, in the context of AI, means not just technical training but the cultivation of the cognitive capabilities that AI elevates in importance: the ability to ask good questions, to exercise judgment, to evaluate competing claims, to think across disciplinary boundaries. The educational systems of most countries are not designed to cultivate these capabilities. They are designed to cultivate the capabilities that the previous economic regime rewarded: technical execution, domain specialization, the accumulation of factual knowledge. The migration of scarcity from execution to judgment demands a corresponding migration in educational priorities, and the migration is not occurring at anything like the speed required.
Transparency guarantees include the ability to know what AI systems are doing, on what basis, with what data, and with what effects. The opacity of large language models — the difficulty of explaining why a particular output was produced, what training data influenced it, what biases it embodies — is a transparency problem that has direct implications for the capability of the people affected by these systems. A person who is evaluated by an AI system she cannot understand, whose functioning she cannot challenge, whose biases she cannot detect, is a person whose substantive freedom has been diminished by the technology's opacity.
Protective security includes the social safety nets — unemployment insurance, retraining programs, healthcare guarantees, minimum income provisions — that protect people during periods of economic disruption. The AI transition is producing economic disruption at a speed that existing safety nets were not designed to handle. The software engineer whose skills are commoditized in months rather than decades, the knowledge worker whose entire professional domain is restructured in a single product cycle, the middle of the skill distribution that is neither senior enough to provide irreplaceable judgment nor junior enough to be retrained cheaply — these people need protective security that the current institutional infrastructure does not provide.
Each of Sen's five instrumental freedoms identifies a dimension of institutional infrastructure that is necessary for the AI revolution to expand substantive freedom rather than merely formal access. Each is unevenly distributed. Each follows the contours of existing inequality. And each represents a domain of institutional construction that is at least as important as the technological construction that the AI industry is performing with such speed and confidence.
The argument is not that formal freedom is valueless. It is that formal freedom is the beginning, not the end, of the evaluation. The door is open. The question is whether the path exists — whether the educational, economic, social, political, and protective infrastructure is in place to enable people to walk through the open door and arrive somewhere that constitutes a genuine expansion of their freedom to live a life they have reason to value.
Sen has argued, across multiple works, that the selection of relevant capabilities should be left to democratic deliberation rather than expert determination. This is a feature of the framework, not a bug. It means that the question of which capabilities matter most in the age of AI — which freedoms should be prioritized, which conversion factors should be built first, which institutional investments should take precedence — is not a question for technologists or economists or philosophers to answer in isolation. It is a question for democratic societies to answer through the messy, imperfect, indispensable process of public deliberation.
This deliberative commitment is itself under threat from the AI transition. The speed of the technology's advance, the opacity of its functioning, the concentration of its control in a small number of organizations, the erosion of the information environment through algorithmic curation and AI-generated content — each of these features of the current moment makes democratic deliberation more difficult precisely when it is most needed. The technology that should be subject to democratic governance is, by its nature and its pace, outrunning the democratic institutions that should govern it.
This is not an argument for stopping the technology. It is an argument for building the democratic infrastructure as fast as the technological infrastructure — for investing in the institutional conditions of deliberation with the same urgency and the same resources that are invested in the technological conditions of capability. The technology without the institutions is formal freedom without substantive freedom — a door opened onto a cliff, an abundance without entitlements, a resource that exists but cannot be converted into the thing that matters: the real freedom of real people to live lives they have reason to value.
The capability set is the most precisely useful concept in Sen's analytical architecture, and the concept most conspicuously absent from the discourse surrounding artificial intelligence. The capability set is not a list of things a person does. It is the full range of things a person could do — the set of achievable functionings from which a person is free to choose. The value of the capability set lies not in its exercise but in its existence. A person who could engage in creative work but chooses rest has a larger capability set than a person who rests because creative work is unavailable to her. The freedom inheres in the choice, not in the outcome.
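In the notation of Sen's own formalization (again a sketch following Commodities and Capabilities, not a quotation), the capability set is precisely this range of choosable lives:

```latex
Q_i = \bigl\{\, b_i \;\bigm|\; b_i = f_i\bigl(c(x_i)\bigr)
      \ \text{for some}\ f_i \in F_i,\ x_i \in X_i \,\bigr\}
```

where x_i is a resource bundle, X_i the set of bundles within the person's reach, c(·) the mapping from resources to usable characteristics, and F_i the set of utilization functions available to her. The evaluation ranges over the whole set Q_i, everything the person could have chosen, not over the single vector b_i she happens to achieve. Two people who achieve the same b_i stand on opposite sides of freedom if one chooses it from a rich Q_i and the other holds a Q_i that is nearly a singleton.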
This distinction — between what a person does and what a person could do — is the distinction that the AI discourse has failed to make, and the failure distorts every major claim about the technology's impact on human lives. The claim that AI expands human capability is, by the metrics currently in use, well supported. The claim that AI expands the human capability set — the range of genuinely available functionings from which a person is free to choose — is a different and far more demanding claim, and the evidence for it is radically more ambiguous.
Consider the most celebrated case: the non-programmer who builds software through conversation with an AI system. The Orange Pill documents this as a paradigm case of capability expansion — an engineer who had spent years exclusively on backend systems building a complete user-facing feature in two days, not because she had learned frontend development but because the AI handled the translation into code she had never written. The boundary between what she could imagine and what she could build had moved so dramatically that her job description changed in a week.
In the functioning space, this is unambiguous expansion. She did something she could not do before. A new functioning — building user interfaces — has been achieved. But the capability question asks something more exacting: has the functioning been added to her capability set, or has it been added to the capability set of the human-AI system while potentially narrowing her individual capability set?
The distinction is not pedantic. It identifies the central evaluative challenge of the AI era. If the engineer can build user interfaces only with the AI tool — if the capability is located in the system rather than in the person — then the capability is contingent on continued access to the tool, on the tool's continued functioning, on the pricing decisions of the company that provides the tool, on the infrastructure that supports the tool's operation. The person has not acquired a capability. She has acquired access to a capability that belongs to the system. Her capability set has expanded in a sense, but the expansion is conditional, revocable, and dependent on factors outside her control.
This conditional expansion is qualitatively different from the kind of capability expansion that Sen's framework treats as constitutive of development. When a person learns to read, the capability is located in the person. It cannot be revoked by a pricing decision. It does not depend on continued access to a service. It is, in the fullest sense, the person's own. When a person gains access to an AI tool that can read for her, a functioning has been achieved — information has been accessed — but the capability of reading has not been developed. The distinction between the functioning and the capability is the distinction between prosthesis and development, between external augmentation and internal growth.
Sen's framework does not automatically favor internal capabilities over external augmentations. Sen is not a Luddite; he does not argue that people should refuse tools that expand their functioning. But the framework does insist on clarity about what, precisely, is being expanded. An expansion of functionings through tool access is valuable. An expansion of the capability set through the development of internal capacities is more valuable, because it is more robust, more transferable, and more fully the person's own. The AI discourse has conflated the two, treating every expansion of functioning as an expansion of capability, and the conflation obscures the most important evaluative question: is the technology developing human capacities or substituting for them?
There are cases where AI unambiguously expands the capability set. The person who uses AI to learn a new domain — who engages with the tool not as a substitute for understanding but as a scaffold for developing it — acquires capabilities that are genuinely her own, capabilities that persist even if the tool is removed. The engineer who uses Claude Code not merely to generate frontend code but to understand frontend architecture, who reads the generated code, who studies the patterns, who uses the AI's output as a teaching tool rather than a production tool, is developing internal capabilities through interaction with an external system. The tool is a teacher, not a prosthesis. The capability expansion is genuine.
There are also cases where AI unambiguously contracts the capability set. The student who uses AI to generate essays without engaging in the cognitive struggle of writing — the confrontation with one's own confusion, the slow development of the capacity to articulate thought through sustained effort — has achieved a functioning (the essay exists) while losing the developmental process through which the capability of writing is built. Over time, the student's capability set narrows: she can produce text but cannot write, in the same way that a person who uses a calculator for all arithmetic can produce answers but cannot calculate. The functioning is preserved. The capability is lost.
Most cases fall between these poles, in a territory of ambiguity that the binary framework of "AI helps" or "AI hurts" cannot navigate. This is where the capability set concept achieves its greatest analytical value. The question is not whether AI is good or bad for human capability. The question is: for this person, in this institutional context, with these conversion factors present or absent, is the interaction with this AI tool expanding or contracting the capability set — the full range of genuinely achievable functionings from which the person is free to choose?
The answer depends on how the tool is used, which depends on the institutional context in which the tool is embedded, which depends on the educational preparation of the user, which depends on the organizational norms that govern the tool's deployment, which depends on the cultural values that determine what counts as valuable work and valuable learning. The answer is, in other words, determined by conversion factors — the same conversion factors that determine whether food availability translates into nourishment, whether income translates into well-being, whether formal freedom translates into substantive freedom.
The concept of adaptive preferences — one of Sen's most incisive analytical tools — adds a further layer of complexity. Adaptive preferences are preferences that have been shaped by deprivation: the person who has never had access to education may not value education, not because education lacks value but because the preference has adapted to the constraint. The person who has never experienced creative autonomy may not value creative autonomy. The preference adapts to the available capability set, and the adapted preference then appears to justify the limited capability set that produced it.
In the AI context, adaptive preferences operate in a specific and concerning way. The person who has always used AI to generate text may not value the capability of writing, because the capability has never been developed and its absence has never been felt. The developer who has always used AI to generate code may not value the deep architectural understanding that comes from years of manual coding, because the understanding has never been developed and its absence — masked by the tool's competence — has never been felt. The preference for AI-assisted production adapts to the capability set that AI-assisted production provides, and the adapted preference obscures the capabilities that have been lost or never developed.
Sen warned against using preference satisfaction as the measure of well-being precisely because of this adaptive mechanism. A society in which people report satisfaction with AI-augmented work is not necessarily a society in which people's capability sets have expanded. It may be a society in which preferences have adapted to a contracted capability set — in which people have lost capabilities they never developed and therefore do not miss, and in which the loss is invisible because the preference has adjusted to accommodate it.
The evaluative challenge is acute because it operates on a timescale that the technology industry's quarterly metrics cannot detect. The contraction of a capability set through disuse is gradual. The developer who stops debugging manually does not lose architectural intuition overnight. The loss accumulates, layer by layer, as the experiential deposits that built the intuition are no longer being laid down. The loss is invisible in the short term and potentially devastating in the long term — not because the individual cannot function (the tool compensates) but because the individual's capability set has narrowed, and the narrowing has implications for robustness, adaptability, and the capacity to respond to circumstances in which the tool is unavailable or insufficient.
The question that Sen's framework poses to the AI revolution is not whether the technology works. It works. It works spectacularly well. The question is whether the technology is expanding or contracting the range of genuinely achievable functionings from which people are free to choose — and whether the institutional context in which the technology is deployed supports the expansion of the capability set or merely the expansion of output.
The framework produces specific, actionable evaluative criteria. An AI deployment that develops users' internal capabilities — that teaches, that scaffolds, that builds understanding — is expanding the capability set. An AI deployment that substitutes for users' capabilities — that produces output without developing the capacity to produce — is contracting the capability set while expanding the functioning set. An AI deployment that does both, which most deployments do, requires careful evaluation of the net effect, an evaluation that depends on conversion factors the technology industry is not currently measuring.
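To make the criterion concrete, the sketch below renders it as a toy classification in Python. It is illustrative only: the field names (develops_capacity, substitutes_for_capacity, conversion_factors_present) and the three-way verdict are assumptions introduced here, not a validated instrument.

```python
from dataclasses import dataclass


@dataclass
class DeploymentObservation:
    """One person's interaction with one AI deployment, described in
    capability space rather than output space."""
    functioning_gained: str           # what the user can now do with the tool
    develops_capacity: bool           # does use build skill that persists without the tool?
    substitutes_for_capacity: bool    # does use replace the practice that builds skill?
    conversion_factors_present: bool  # training, norms, protected learning time, etc.


def net_capability_effect(obs: DeploymentObservation) -> str:
    """Classify the net effect of a deployment for this person."""
    if obs.develops_capacity and not obs.substitutes_for_capacity:
        return "capability set expanding"
    if obs.substitutes_for_capacity and not obs.develops_capacity:
        return "functioning set expanding, capability set contracting"
    # The mixed case, which the text argues is the common case: the net
    # effect hinges on conversion factors that output metrics never record.
    return ("ambiguous; conditions favor expansion"
            if obs.conversion_factors_present
            else "ambiguous; conditions favor contraction")


# Example: tool use that substitutes for practice, with no institutional
# support in place.
print(net_capability_effect(DeploymentObservation(
    "building user interfaces", False, True, False)))
# -> functioning set expanding, capability set contracting
```

The shape of the function is the point. The output-space answer is a single number going up; the capability-space answer is conditional on facts about the person and the institution that no dashboard currently collects.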
What is needed is not a verdict on AI but an evaluative practice — an ongoing, context-sensitive assessment of what the technology is doing to the capability sets of the people who use it. The assessment must be conducted in the right evaluative space: not output, not revenue, not adoption, but the substantive freedom of real people to achieve the functionings they have reason to value. The tools for the assessment exist. They were built by a thinker who never addressed AI directly but whose framework anticipated, with remarkable precision, the evaluative challenge that AI presents.
The capability set is the unit of analysis. The conversion factors are the mechanism. The question — is this technology expanding or contracting the range of lives people are genuinely free to live? — is the question that the framework asks. The AI discourse has not been asking it. The cost of the omission grows with every deployment that expands output while the capability question goes unexamined.
In one of his most cited thought experiments, Amartya Sen described a person who has been so thoroughly shaped by deprivation that she no longer desires what she lacks. The feudal laborer who does not aspire to education because education has never been available. The woman in a patriarchal society who does not desire economic independence because independence has never been conceivable. The enslaved person who reports contentment because the capacity to imagine an alternative has been extinguished by the conditions of bondage.
Sen called this the problem of adaptive preferences, and he deployed it as a devastating critique of utilitarian welfare economics — the tradition that measures human well-being by the satisfaction of preferences, by happiness, by the self-reported contentment of the people whose welfare is being assessed. The problem is not that the person is lying about her satisfaction. She is not. The satisfaction is genuine. The problem is that the satisfaction has been produced by the deprivation itself, shaped by the narrowing of the capability set until the preference conforms to the constraint. The person is satisfied not because her life is good but because her expectations have been reduced until they fit the life she has.
If preference satisfaction is the measure of well-being, the happy slave is well-off. If capability is the measure, the happy slave is profoundly deprived — deprived not only of freedom but of the capacity to recognize the deprivation. The choice of evaluative framework determines whether the deprivation is visible or invisible. Utilitarianism renders it invisible. The capability approach reveals it.
The application to artificial intelligence is less dramatic in moral terms but structurally identical in analytical terms. The satisfied user of AI tools — the knowledge worker who reports that AI has made her work faster, easier, more productive, more enjoyable — may be genuinely satisfied. The satisfaction may be authentic. But if the satisfaction has been shaped by a narrowing of the capability set — if the worker no longer values capabilities she has lost because the loss has never been felt, if her preferences have adapted to the contracted set of functionings that AI-augmented work provides — then the satisfaction is an unreliable indicator of well-being.
The pattern is observable in the data, even though the data were not collected with this framework in mind. The Berkeley study documented workers who reported that AI made their work more productive and more engaging. The same workers exhibited task seepage, boundary erosion, intensified multitasking, and accumulating fatigue. The satisfaction and the degradation coexisted. The workers were satisfied with their expanded output. They were also, by measures they were not reporting, experiencing a contraction of capabilities they were not monitoring: the capability for sustained attention, for genuine rest, for the kind of reflective thinking that only occurs in the absence of stimulation.
The adapted preference mechanism explains why these two observations are not contradictory. The workers valued what the tool provided — speed, output, breadth of task engagement — and had ceased to value what the tool eroded — depth, rest, the unstructured cognitive time in which integrative thinking occurs. The preference adapted to the new capability set. The new capability set became the standard against which satisfaction was measured. And the measurement, conducted in the space of preference satisfaction, registered success.
Sen would recognize this pattern instantly. It is the same pattern he documented in the context of gender inequality, where women in deeply patriarchal societies reported satisfaction with arrangements that severely constrained their freedom. It is the same pattern he documented in the context of caste, where members of subordinate castes internalized the norms that limited their opportunities and reported contentment with their constrained lives. The mechanism is identical: the capability set contracts, the preference adapts, the adaptation renders the contraction invisible to any evaluative framework that takes satisfaction as its measure.
The implications for AI evaluation are profound and uncomfortable. They mean that user satisfaction metrics — the Net Promoter Scores, the engagement statistics, the self-reported assessments of AI's helpfulness — cannot be taken at face value as measures of well-being. A person can be genuinely satisfied with a technology that is narrowing her capability set, because the narrowing has produced an adapted preference that no longer values what has been lost. The satisfaction is real. The well-being may not be.
This does not mean that satisfaction is meaningless or that all user reports should be dismissed. It means that satisfaction must be supplemented by an independent assessment of the capability set — an assessment that asks not whether the person is satisfied but whether the person has the substantive freedom to achieve the functionings she would have reason to value if she were in a position to make an informed, unconstrained choice. The assessment must be independent because the person whose preferences have adapted cannot reliably conduct it herself. The laborer who does not value education cannot tell you that education is valuable. The user who does not value deep attention cannot tell you that attention is being eroded. The preference has been shaped by the deprivation, and the shaped preference cannot serve as a measure of the deprivation that shaped it.
The parallel to the technology industry's reliance on engagement metrics is precise and damning. Engagement metrics measure preference satisfaction in real time. The user clicks, scrolls, prompts, returns. Each action is recorded as a vote of confidence. The aggregate of these votes constitutes the industry's primary measure of success. But engagement, like preference satisfaction, is susceptible to adaptation. A person whose attention has been shaped by algorithmic feeds — whose tolerance for boredom has been eroded, whose capacity for sustained focus has been diminished, whose preference for stimulation has been cultivated by the technology itself — will engage with the technology that produced the adaptation. The engagement is genuine. The engagement metric records satisfaction. The satisfaction has been produced by the very narrowing that should concern us.
This is not a conspiracy. It is not the result of malicious intent. It is the structural consequence of evaluating technology in the space of preference satisfaction rather than in the space of capability. The technology industry optimizes for engagement because engagement is measurable, immediate, and monetizable. Capability expansion is none of these things. It is slow, it is difficult to measure, and its relationship to revenue is indirect and uncertain. The optimization target determines the product design, and the product design shapes the user's capability set, and the shaped capability set produces the adapted preferences that the engagement metrics register as success.
Sen's solution to the adaptive preference problem is not to ignore preferences but to supplement them with an independent assessment of what he calls objective conditions — the material, educational, and institutional circumstances that determine whether a person's preference reflects genuine choice or constrained adaptation. In the context of AI, this means supplementing user satisfaction metrics with independent assessments of what the technology is doing to the capability sets of the people who use it.
Such assessments would examine whether users are developing new capabilities through their interaction with AI or merely executing new functionings through the tool's capabilities. They would examine whether the time freed by AI is being invested in higher-order cognitive activities or consumed by additional tasks. They would examine whether the erosion of specific capabilities — deep attention, manual problem-solving, the embodied knowledge that comes from struggle — is being offset by the development of new capabilities or is simply being masked by the tool's compensation. They would ask, in other words, the questions that engagement metrics structurally cannot ask.
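A minimal sketch of that supplementation, assuming one self-reported satisfaction series and one independently measured indicator of unaided performance; the names, the window, and the comparison rule are illustrative choices, not an established protocol.

```python
def flag_adaptive_divergence(satisfaction: list[float],
                             unaided_performance: list[float],
                             window: int = 4) -> bool:
    """True when reported satisfaction holds or rises over the window
    while the independent capability indicator falls. Satisfaction alone
    would report success; the paired measure reports erosion."""
    if min(len(satisfaction), len(unaided_performance)) < window:
        return False  # not enough history to compare trends
    sat_trend = satisfaction[-1] - satisfaction[-window]
    cap_trend = unaided_performance[-1] - unaided_performance[-window]
    return sat_trend >= 0 and cap_trend < 0
```

The pairing is the entire method: neither series is informative about adaptation on its own, because the adapted preference is precisely the condition under which the first series stops tracking the second.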
The satisfied user is not necessarily a flourishing user. The flourishing user is one whose capability set is expanding — whose range of genuinely achievable functionings is growing, whose freedom to choose how to live and work is increasing, whose development as a human being is being supported rather than substituted by the technology she uses. Flourishing and satisfaction may correlate, but they are not identical, and in conditions where technology shapes the very preferences by which satisfaction is measured, the correlation may be illusory.
The evaluative challenge is political as well as analytical. The companies that build AI tools have an economic interest in user satisfaction, because satisfaction drives engagement, which drives revenue, which drives valuation. An independent assessment of capability — one that might reveal that a satisfying tool is contracting the capability set — threatens the economic logic on which the industry is built. The incentive to measure satisfaction rather than capability is not merely a methodological preference. It is a structural feature of the economy in which AI tools are produced and consumed.
Sen's framework makes this structural feature visible. It names the mechanism by which satisfaction can coexist with deprivation. It identifies the conditions under which preference adaptation renders deprivation invisible. And it insists, with the quiet precision that characterizes his work, that the evaluation of human welfare cannot be outsourced to the preferences of the people whose welfare is at stake — not because their preferences are unimportant, but because their preferences have been shaped by the very conditions whose adequacy is in question.
The happy slave does not know she is enslaved. The satisfied user does not know her capability set is contracting. The evaluative framework that cannot detect the contraction is not a neutral instrument. It is a participant in the contraction, rendering invisible the very phenomenon it should be designed to reveal.
The title of Amartya Sen's most widely read book is itself an argument. Development as Freedom does not say that development produces freedom, or that development requires freedom, or that development and freedom are correlated. It says that development is freedom — that the expansion of substantive human freedoms is not a consequence of development but its constitutive definition. A society that increases its GDP while contracting the freedoms of its citizens has not developed. A society that expands the real freedoms of its citizens, even without GDP growth, has.
This definitional claim is the most radical element of Sen's framework, and it is the element with the most far-reaching implications for the evaluation of artificial intelligence. If development is freedom — if the standard by which any transformation should be judged is whether it expands the substantive freedoms people have to live lives they have reason to value — then the AI revolution must be evaluated not by what it produces but by what it frees.
The question is not whether AI increases output, which it does. Not whether it reduces costs, which it does. Not whether it expands access, which it does. The question is whether AI expands the substantive freedom of the people whose lives it touches — the genuine opportunity to choose how to live, what to work on, what to value, what kind of person to become. And the answer, as the preceding chapters have argued, depends almost entirely on variables that the technology itself does not control: the conversion factors, the institutional infrastructure, the educational preparation, the political conditions, the cultural norms that determine whether technological power translates into human freedom.
Sen identified five instrumental freedoms that serve as both means and constituents of development: political freedoms, economic facilities, social opportunities, transparency guarantees, and protective security. Each of these instrumental freedoms has a specific and testable application to the AI transition, and the application reveals, in each case, a gap between what the technology makes possible and what the institutional infrastructure delivers.
The application to AI produces a specific and disturbing diagnostic. AI is expanding certain freedoms — particularly the economic facilities and social opportunities of people in well-resourced institutional contexts — while leaving others stagnant or contracting. The political freedoms that should govern AI deployment are underdeveloped. The transparency guarantees that should make AI systems accountable are inadequate. The protective security that should cushion the transition is absent for most of the people who need it. The expansion is real but partial, and the partiality follows the contours of existing inequality with a precision that should alarm anyone who takes the development-as-freedom framework seriously.
A capability that is available to some but not to others, because the conversion factors are unevenly distributed, is not a democratized capability. It is a stratified capability — a capability whose distribution reproduces the stratification it was supposed to dissolve. The developer in San Francisco whose AI-augmented productivity translates into career advancement, creative autonomy, and expanded life choices is experiencing development in Sen's sense. The developer in Lagos whose AI-augmented productivity translates into additional gig-economy labor without expanded autonomy, without institutional support, without the conversion factors that would enable the productivity to become freedom, is experiencing output expansion without development.
The distinction is crucial because it determines what counts as an adequate response to the AI revolution. If the standard is output — more production, more access, more speed — then the current trajectory is an unambiguous success. If the standard is freedom — the expansion of people's real opportunities to live lives they have reason to value — then the current trajectory is a mixed result that requires institutional intervention to fulfill its promise.
Sen's work on the relationship between markets and freedom is directly relevant here. Sen was never an opponent of markets. He argued, throughout his career, that markets are instrumental freedoms — mechanisms that expand choice and enable coordination. But he also argued, with equal insistence, that markets alone are insufficient to produce development, because markets do not automatically generate the conversion factors that translate market access into capability expansion. Markets produce growth. Development requires institutions.
The AI market is producing growth at an extraordinary rate. The growth is measurable in revenue, in adoption, in productivity gains, in the number of people who have access to tools that were previously restricted to a small technical elite. The growth is real, and it is substantial, and it would be dishonest to deny it. But the growth is not, by itself, development. Development requires that the growth translate into expanded freedoms, and the translation requires institutional infrastructure that the market alone does not provide.
The institutional infrastructure required for AI development — development in Sen's sense, meaning the expansion of substantive freedoms — includes educational systems that cultivate the capabilities AI elevates in importance. The current educational infrastructure is designed for the previous economic regime, in which technical execution was the primary skill and domain specialization was the primary career strategy. The AI regime demands different capabilities: the ability to ask good questions, to exercise cross-domain judgment, to evaluate competing options, to understand what is worth building and for whom. These capabilities are cultivated through educational experiences that most educational systems do not provide — experiences that emphasize inquiry over instruction, judgment over knowledge accumulation, integration over specialization.
The infrastructure includes labor protections that cushion the transition for workers whose skills are being restructured. The speed of AI-driven skill restructuring exceeds the speed of any previous technological transition by an order of magnitude. The software engineer whose skills are commoditized in months, the knowledge worker whose professional domain is reorganized in a single product cycle — these people need protective security that the current institutional infrastructure does not provide. Retraining programs that take years to design and implement are inadequate for a transition that operates on a timescale of months.
The infrastructure includes political institutions that give citizens voice in determining how AI is deployed in their communities and workplaces. The opacity of AI systems, the concentration of their control in a small number of organizations, the speed of their deployment — each of these features makes democratic governance more difficult precisely when it is most needed. The institutions of democratic deliberation were designed for a pace of change that no longer obtains. The regulatory frameworks that exist — the EU AI Act, the American executive orders, the emerging frameworks in other jurisdictions — address the supply side: what AI companies may and may not build. The demand side — what citizens, workers, students, and parents need to navigate the transition — remains almost entirely unaddressed.
The infrastructure includes transparency mechanisms that make AI systems legible to the people they affect. The person who is evaluated by an AI system, whose job application is screened by an algorithm, whose credit is scored by a model, whose medical diagnosis is influenced by an AI recommendation, needs to know how these systems work, on what basis they make decisions, and what recourse is available when the decisions are wrong. Without transparency, the person affected by the system has no basis for challenging its decisions, no mechanism for holding its operators accountable, and no meaningful freedom with respect to the system's effects on her life.
The development-as-freedom framework produces a specific evaluation of the AI revolution that differs substantially from the evaluations produced by output-based, income-based, or even access-based frameworks. The evaluation acknowledges genuine gains: the expansion of formal access to cognitive tools, the reduction of certain barriers to productive participation, the potential for cost reduction and capability expansion across multiple domains. But the evaluation insists that these gains are instrumental — means, not ends — and that their value depends on whether they translate into the expansion of substantive human freedoms.
The translation is not automatic. It requires institutional construction on a scale that matches the technological construction. It requires educational reform that cultivates the capabilities the new regime demands. It requires labor protections that cushion the transition. It requires political institutions that give citizens voice. It requires transparency mechanisms that make AI systems accountable. It requires, in short, the full apparatus of what Sen calls development — not the accumulation of wealth or the expansion of output but the creation of the conditions in which people are genuinely free to live lives they have reason to value.
The AI revolution is the most powerful expansion of cognitive capability in human history. Whether it constitutes development — whether it expands the substantive freedoms of the people whose lives it touches — depends on whether the institutional infrastructure is built to channel its power toward freedom rather than mere output. The technology provides the means. The institutions determine whether the means become ends. And the gap between the technology's power and the institutions' readiness is, at this moment, the most consequential gap in the global political economy.
Development is freedom. AI is power. Power without the institutional conditions for freedom is not development. It is growth that may or may not serve the people it claims to benefit, depending on choices that have not yet been made and institutions that have not yet been built.
No substantial famine has ever occurred in a functioning democracy with a free press. Amartya Sen made this claim in Development as Freedom, and the claim has withstood three decades of scrutiny. The mechanism is specific: a free press creates an information flow that makes the early signs of famine visible to the public, and democratic accountability creates the political pressure that forces governmental response before the crisis becomes catastrophic. The press detects. The democracy compels. The combination prevents the entitlement failures that produce famine from reaching the scale of catastrophe.
The claim is not that democracies are morally superior. It is that democracies have a specific institutional mechanism — the combination of information transparency and political accountability — that prevents certain categories of catastrophic failure. The mechanism operates through feedback: information about suffering reaches the public, the public demands response, the government responds or risks replacement. The feedback loop is imperfect, slow, subject to manipulation and distortion. But it works well enough, consistently enough, across enough cases, to prevent the worst outcomes.
The question that the AI revolution poses to this framework is whether the mechanism can function at the speed the moment requires. The feedback loop that prevents famine operates on a timescale of months to years. The information about crop failure or price inflation reaches journalists. The journalists investigate. The stories are published. The public responds. The government acts. The cycle takes time — time that is available because famines develop over weeks and months, because the physical processes of agricultural failure and food distribution operate at human speed.
AI operates at machine speed. The transformation of labor markets, the restructuring of skill hierarchies, the commoditization of professional capabilities, the reshaping of the information environment — each of these processes is occurring faster than democratic institutions were designed to process. The feedback loop that works for famine prevention does not work for AI governance, not because the mechanism is wrong but because the mechanism is too slow. By the time the information reaches the public, the technology has advanced another generation. By the time the public demands response, the workforce has been restructured. By the time the government acts, the regulatory framework addresses yesterday's technology.
This temporal mismatch between the speed of technological change and the speed of democratic deliberation is not new — it has characterized every major technological transition — but the magnitude of the mismatch is unprecedented. Previous technological transitions operated on timescales of decades. The shift from agricultural to industrial economies took a century. The shift from industrial to service economies took half a century. The shift from analog to digital took decades. Each transition outpaced the democratic institutions that governed it, but the pace was slow enough that institutional adaptation, however imperfect and however belated, eventually caught up.
The AI transition is operating on a timescale of months. The capabilities that existed in December 2025 were qualitatively different from the capabilities that existed in June 2025. The organizational restructuring that The Orange Pill documents — the twenty-fold productivity multiplier, the dissolution of specialist silos, the migration of scarcity from execution to judgment — occurred in weeks, not years. The SaaS Death Cross, the trillion-dollar repricing of the software industry, occurred in the first two months of 2026. The speed is not merely fast. It is faster than any institutional mechanism currently in existence can process.
Sen's framework identifies the specific institutional conditions required for democratic governance of powerful social forces: transparency, so that the public can see what is happening; political freedom, so that the public can demand response; and accountability, so that the institutions responsible for governing the force can be held answerable for their decisions. Each of these conditions is under strain in the AI context, and the strain is produced not by the technology's malice but by its velocity.
Transparency is compromised by the opacity of AI systems themselves. Large language models are not interpretable in the way that previous technologies were interpretable. The decision-making process of a traditional algorithm — a credit-scoring model, a spam filter — could, in principle, be examined and understood. The decision-making process of a large language model cannot, because the model's behavior emerges from the interaction of billions of parameters in ways that resist human comprehension. The opacity is not a design choice. It is a structural feature of the technology. And the structural opacity means that the transparency condition — the condition that allows the public to see what is happening — is harder to fulfill for AI than for any previous technology.
The opacity extends beyond the technical to the institutional. The training data that shapes AI models is, in most cases, not publicly disclosed. The decisions about what data to include, what data to exclude, what values to embed, what behaviors to reinforce — these decisions are made by a small number of organizations, in processes that are not subject to public scrutiny. The models that are reshaping the cognitive environment of billions of people are products of decisions that those billions of people have no visibility into and no voice in.
Political freedom is compromised by the concentration of AI capability in a small number of organizations. The companies that control frontier AI models exercise a form of power that is, in practical terms, ungovernable by existing political institutions. They operate across jurisdictions, making national regulation difficult to enforce. They possess technical expertise that regulators lack, creating an information asymmetry that undermines regulatory effectiveness. They move at a speed that regulatory processes cannot match, rendering regulation perpetually retrospective. And they control a technology that is, increasingly, the infrastructure of economic and social life — the substrate on which communication, commerce, education, healthcare, and governance itself depend.
The concentration of AI capability creates what political theorists call a structural power asymmetry: a situation in which one party's control of essential resources gives it effective power over others, regardless of formal political arrangements. The company that controls the AI model that a government uses for policy analysis has structural power over that government, whether or not the power is exercised deliberately. The company that controls the AI tool on which millions of workers depend for their livelihoods has structural power over those workers. The structural power is not necessarily malicious. It is not necessarily even intentional. But it is real, and it produces a political condition in which the democratic accountability that Sen identifies as essential to preventing catastrophic failure is difficult to achieve.
Accountability requires mechanisms for holding decision-makers answerable for the consequences of their decisions. In the AI context, accountability faces a specific challenge that Sen's famine analysis did not contemplate: the diffusion of responsibility across human and machine actors. When an AI system produces a harmful outcome — a discriminatory hiring decision, a flawed medical diagnosis, an inaccurate legal assessment — the question of who is accountable is genuinely difficult. The developer who built the system? The company that deployed it? The user who relied on it? The training data that shaped it? The accountability question is not merely legal. It is structural: the architecture of AI systems distributes agency across multiple actors in ways that make traditional accountability mechanisms — designed for situations in which a single decision-maker makes a single decision with identifiable consequences — inadequate.
Sen's framework suggests that the response to the speed problem is not to slow the technology — a prescription that is both practically impossible and morally questionable, given the benefits that the technology provides — but to accelerate the institutional infrastructure. The democratic institutions that govern AI deployment must operate faster, with greater technical sophistication, and with more robust mechanisms for transparency and accountability than the democratic institutions that govern any other domain of public life.
This prescription is demanding. It requires regulatory agencies with the technical capacity to understand what they are regulating — a capacity that most regulatory agencies currently lack. It requires transparency mechanisms that make AI systems legible to the public without compromising the legitimate intellectual property of the companies that build them. It requires accountability frameworks that can assign responsibility in situations where agency is distributed across human and machine actors. It requires educational infrastructure that prepares citizens to participate in democratic deliberation about AI — deliberation that requires a level of technical literacy that most citizens do not possess.
The prescription also requires what Sen would call a commitment to public reasoning — the process by which a society arrives at collective judgments about how to govern powerful social forces through open, inclusive, informed deliberation. Public reasoning is the mechanism by which democratic societies have historically managed technological transitions: the debates over factory regulation, over environmental protection, over nuclear power, over genetic engineering. In each case, the deliberation was imperfect, the outcomes were contested, the process was slow. But the deliberation produced institutional structures that channeled the technology's power toward broadly beneficial outcomes — outcomes that would not have emerged from the market alone.
The AI transition requires public reasoning at a scale and speed that no previous technological transition has demanded. The reasoning must be inclusive, involving not just technologists and policymakers but the workers, students, parents, and communities whose lives are being reshaped by the technology. It must be informed, which requires transparency about how AI systems work and what effects they produce. It must be ongoing, because the technology is changing faster than any single deliberative cycle can process. And it must be consequential, connected to institutional mechanisms that translate deliberative conclusions into binding governance.
The alternative — governance by technologists, by markets, by the preferences of the organizations that control the technology — is not governance at all. It is the abdication of governance to structural power, and the abdication produces outcomes that Sen's entire career has been dedicated to preventing: outcomes in which the benefits of powerful social forces are captured by the powerful while the costs are borne by the powerless, in which formal freedom coexists with substantive unfreedom, in which the appearance of progress masks the reality of concentrated gain and distributed loss.
The democratic mechanism that prevents famine can be adapted to the AI context. But the adaptation requires investment — in regulatory capacity, in transparency infrastructure, in public education, in deliberative institutions — that matches the investment being made in the technology itself. The technology is receiving trillions of dollars of investment. The democratic infrastructure that should govern it is receiving a fraction of that amount. The asymmetry is the most dangerous feature of the current moment, not because the technology is dangerous but because ungoverned power is always dangerous, and the speed of the technology is outpacing the only mechanism — democratic deliberation, with its imperfect but indispensable combination of transparency, political freedom, and accountability — that has proven capable of preventing powerful social forces from producing catastrophic distributional outcomes.
The press detects. The democracy compels. The mechanism works, but only if the institutions that compose it are built, maintained, and adapted to the speed and complexity of the forces they must govern. The AI revolution demands institutional construction at a pace and scale that no previous technological transition has required. The construction has barely begun.
Every great transformation in human affairs has eventually produced a corresponding transformation in how human affairs are measured. The industrial revolution produced national income accounting. The post-war development era produced the Human Development Index — a measure that Sen himself helped design, precisely because GDP alone could not capture what mattered about the lives the development project was supposed to improve. The digital revolution produced engagement metrics, click-through rates, monthly active users — measurements exquisitely calibrated to the phenomena that the digital economy valued and catastrophically blind to the phenomena it destroyed.
The AI revolution has not yet produced its evaluative framework. It is being measured with the instruments of the previous era — output metrics, productivity multipliers, adoption curves, revenue growth — and the instruments are recording a story of unambiguous progress that is, at best, incomplete and, at worst, systematically misleading. The instruments detect power. They do not detect freedom. They measure what the technology can do. They do not measure what people are substantively free to do with it.
The evaluative revolution that AI requires is not a refinement of existing metrics. It is a migration to a different evaluative space — from output to capability, from means to ends, from what is produced to what is freed. The migration is conceptually available. Sen built the framework decades ago. The application to AI has been developed by scholars working at the intersection of capability theory and technology ethics — the Capability-Sensitive Framework that specifies capability floors and life-plan alignment scores, the Lightweight Ethical Auditing Tool that integrates capability assessment into AI development processes, the philosophical formalizations that translate Sen and Nussbaum's framework into conditions for morally permissible AI deployment. The intellectual infrastructure exists. What does not exist is the institutional will to adopt it.
The absence of institutional will is not mysterious. It follows from the incentive structures that govern the organizations responsible for AI deployment. The technology companies that build AI tools are evaluated by financial markets that measure revenue, growth, and market share. The metrics that financial markets use produce the incentives that shape corporate behavior. A company that optimizes for capability expansion — that designs its tools to develop users' internal capacities rather than merely to produce output through the tool's capabilities — may produce a better product by Senian standards but a less engaging product by market standards. The tool that develops the user's judgment requires the user to struggle, to fail, to engage in the effortful cognitive work through which capability is built. The tool that substitutes for the user's judgment produces output faster, more fluently, with less friction — and generates the engagement metrics that drive the company's valuation.
The incentive structure is not a conspiracy. It is a market failure — a situation in which the market's optimization target diverges from the social optimum, and the divergence produces outcomes that no individual actor intends but that the system reliably generates. The market optimizes for engagement. The social optimum requires capability expansion. The two are not identical, and in many cases they are opposed.
Sen recognized market failures of this kind throughout his work on development. Markets are powerful mechanisms for coordination and value creation, but they systematically underinvest in capabilities whose returns are diffuse, long-term, and difficult to capture as private profit. Education, public health, environmental protection, democratic institutions — each of these is a capability-expanding investment that markets underprovide because the benefits are broadly distributed and the costs are privately borne. The same logic applies to the capability-expanding design of AI tools: the benefits of designing tools that develop users' capacities accrue to the users and to society, while the costs are borne by the company that forgoes the engagement metrics that a more substitutive design would generate.
The evaluative revolution requires intervention at multiple levels. At the organizational level, it requires the adoption of capability metrics alongside output metrics — assessments of whether the organization's AI deployment is developing employees' judgment, expanding their range of achievable functionings, increasing their autonomy, or merely increasing their output. The Berkeley researchers' proposal of "AI Practice" — structured pauses, sequenced workflows, protected reflection time — is a capability-preserving intervention that the evaluative revolution would elevate from a recommendation to a standard. Organizations that measure only output will optimize only for output. Organizations that measure capability alongside output will discover that the two sometimes align and sometimes conflict, and the discovery will produce the institutional innovation that the moment demands.
At the educational level, the evaluative revolution requires a fundamental reconception of what educational success means. The current educational system measures knowledge acquisition and skill demonstration — functionings, in Sen's terminology. The AI era demands an educational system that measures capability development — the cultivation of the cognitive capacities that AI elevates in importance: inquiry, judgment, cross-domain integration, ethical reasoning, the ability to formulate questions that no machine can originate. An educational system that measures capability rather than functioning would assess students not by the quality of their outputs — their essays, their exams, their projects — but by the quality of their questions, the sophistication of their judgment, the breadth of their integrative thinking.
The teacher who stopped grading essays and started grading questions — described in The Orange Pill as an example of ascending friction in educational practice — is conducting the evaluative revolution at the classroom level. The assignment is not to produce an essay but to produce the questions one would need to ask before writing an essay worth reading. The students who produce the best questions demonstrate the deepest engagement with the material, because a good question requires understanding what one does not understand — a harder cognitive operation than demonstrating what one does understand, and the operation that no machine can perform on one's behalf.
At the policy level, the evaluative revolution requires the development of capability-sensitive indicators for AI governance. The EU AI Act, the most comprehensive regulatory framework currently in existence, evaluates AI systems by risk category — unacceptable risk, high risk, limited risk, minimal risk — but does not evaluate them by their impact on the capability sets of the people they affect. A capability-sensitive regulatory framework would ask, for any AI deployment: does this system expand or contract the capability sets of its users? Does it develop internal capacities or substitute for them? Does it increase or decrease the range of genuinely achievable functionings from which users are free to choose? Does it enhance or diminish the conversion factors — educational, economic, social, political — that determine whether AI access translates into substantive freedom?
These questions are not currently asked by any major regulatory framework. They should be. The capability approach provides the analytical tools to ask them, the conceptual vocabulary to formulate them, and the evaluative criteria to answer them. What is required is the political will to adopt a framework that measures what matters rather than what is easy to measure.
The most technically sophisticated proposal for operationalizing Sen's framework in AI governance is the Capability-Sensitive Framework developed by Saptasomabuddha and colleagues, published in AI and Ethics in 2025. The framework specifies two normative guardrails: a capability floor, which ensures that no individual is pushed below thresholds for essential freedoms by an AI deployment, and a life-plan ceiling, which guarantees that people retain viable paths toward their meaningful goals. These guardrails are operationalized through quantitative metrics — the Capability-Coverage Ratio and the Life-Plan Alignment Score — that can be computed for specific AI systems in specific deployment contexts.
The framework is model-agnostic, meaning it can be applied to any AI system regardless of its technical architecture. It is context-sensitive, meaning it evaluates the same system differently depending on the institutional and social conditions in which it is deployed. And it is human-centric in Sen's specific sense: it evaluates the system not by its technical performance but by its impact on the substantive freedoms of the people it affects.
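To give the two guardrails a concrete shape, here is a deliberately simplified sketch. The metric names are taken from the framework described above; the formulas, field names, and thresholds are stand-ins of my own, not the published definitions.

```python
from typing import Mapping


def capability_coverage_ratio(post_deployment: Mapping[str, float],
                              capability_floor: Mapping[str, float]) -> float:
    """Fraction of essential capabilities still at or above their floor
    after deployment; 1.0 means no one is pushed below the floor."""
    if not capability_floor:
        return 1.0
    met = sum(post_deployment.get(cap, 0.0) >= threshold
              for cap, threshold in capability_floor.items())
    return met / len(capability_floor)


def life_plan_alignment_score(paths_still_viable: int,
                              paths_declared: int) -> float:
    """Share of a person's declared life-plan paths that remain viable
    after deployment; a crude proxy for the life-plan guardrail."""
    return paths_still_viable / paths_declared if paths_declared else 0.0


def deployment_permissible(ccr: float, lpas: float,
                           ccr_min: float = 1.0,
                           lpas_min: float = 0.8) -> bool:
    # The thresholds here are placeholders. On Sen's own terms their
    # selection belongs to democratic deliberation, not to the evaluator.
    return ccr >= ccr_min and lpas >= lpas_min
```

Even this toy version makes one property of the framework visible: the same system scores differently in different deployment contexts, because the post-deployment capability profile, not the model, is the object of evaluation.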
The existence of such frameworks demonstrates that the evaluative revolution is technically feasible. The question is whether it is institutionally achievable — whether the organizations that build AI, the governments that regulate it, the educational systems that prepare people to use it, and the citizens whose lives it reshapes can migrate from the evaluative frameworks of the previous era to the evaluative frameworks that the current era demands.
Sen's work suggests that the migration is possible but not inevitable. It requires democratic deliberation — the messy, imperfect, indispensable process by which societies arrive at collective judgments about what to measure and what to value. The deliberation must be informed by the technical realities of AI, by the empirical evidence about its effects, by the philosophical framework that identifies what matters, and by the voices of the people whose lives are at stake. The deliberation will be contested, because the question of what to measure is always a question about what to value, and questions about values are always contested.
But the contest is the point. The capability approach does not prescribe a fixed list of capabilities to be measured. It insists that the selection of relevant capabilities is a matter for democratic deliberation rather than expert determination. This openness is a feature, not a limitation. It means that the evaluative revolution is not a technocratic project — an imposition of a new measurement regime by experts who know better — but a democratic project, a collective decision about what a society values and what it will hold its institutions accountable for.
The revolution will not be easy. It will require confronting the incentive structures that make output metrics more attractive than capability metrics. It will require building the institutional capacity to measure what matters rather than what is convenient. It will require the specific courage of adopting evaluative frameworks that may reveal uncomfortable truths about technologies that the market has already celebrated as transformative.
The alternative is to continue measuring AI's impact with instruments calibrated to the wrong phenomena — instruments that will record progress while freedom stagnates, that will celebrate output while capability contracts, that will present a picture of transformation that is, in Sen's precise and devastating sense, systematically misleading about the lives of the people the transformation is supposed to serve.
The instruments determine what is visible. What is visible determines what is managed. What is managed determines what the future looks like. The evaluative revolution is not an academic exercise. It is the mechanism by which the AI revolution either fulfills its promise of expanded human freedom or defaults to the historical pattern of concentrated gain and distributed loss.
The framework exists. The tools exist. The question is whether the will exists to use them.
The argument that has been built across nine chapters arrives at a proposition that is, in its essentials, simple. The complexity lies not in the proposition but in the evidence required to establish it and the institutional infrastructure required to act on it.
The proposition: artificial intelligence is the most powerful expansion of human productive capability in history, and the question of whether this expansion constitutes human development — whether it expands the substantive freedoms people have to live lives they have reason to value — depends entirely on institutional choices that the technology itself does not make and the technology industry does not currently prioritize.
Sen's framework illuminates this proposition from an angle that no other analytical apparatus can provide. The output-based frameworks that dominate the technology industry detect the expansion of capability — more production, more access, more speed — without detecting the conditions under which the expansion becomes freedom. The access-based frameworks that dominate the policy discourse detect the availability of tools without detecting the conversion factors that determine whether availability becomes capability. The preference-based frameworks that dominate user research detect satisfaction without detecting the adaptive mechanisms by which satisfaction can coexist with capability contraction. Only the capability framework detects all three failures simultaneously, because only the capability framework evaluates technology in the space that matters: the substantive freedom of real people to achieve the functionings they have reason to value.
The evaluation produces a diagnosis that is neither the optimist's triumphalism nor the critic's despair but something more specific and more useful: a map of the conditions under which the AI revolution will expand human freedom and the conditions under which it will not. The map identifies conversion factors — infrastructure, education, financial access, legal protection, political voice, transparency, protective security — as the decisive variables. It identifies adaptive preferences as the mechanism by which capability contraction can be rendered invisible to evaluative frameworks that take satisfaction as their measure. It identifies the concentration of AI capability in a small number of organizations and countries as a structural feature that reproduces existing inequality unless deliberately countered by institutional design. And it identifies democratic deliberation — the imperfect but indispensable mechanism of collective judgment about what to value and how to govern — as the process through which the institutional infrastructure must be built.
The diagnosis is demanding. It asks the technology industry to measure what matters rather than what is easy to measure. It asks policymakers to build demand-side infrastructure — educational, economic, social — rather than merely supply-side regulation. It asks educational institutions to cultivate judgment rather than knowledge accumulation. It asks citizens to participate in governance processes that are technically complex and politically contentious. It asks, in short, for the same kind of institutional construction that every previous technological revolution has eventually required, but at a speed and scale that no previous revolution has demanded.
The demand is not unreasonable. It is proportional to the power of the technology being governed. A technology that can restructure labor markets in months, that can reshape the cognitive environment of billions of people, that can concentrate productive capability in ways that make previous concentrations look modest — such a technology demands institutional governance that is proportional to its power. Anything less is not caution or restraint or responsible governance. It is negligence — the negligence of a society that builds the most powerful tools in human history without building the institutions that ensure the tools serve human purposes.
Sen's career-long commitment to public reasoning — to the process by which societies arrive at collective judgments through open, inclusive, informed deliberation — provides the mechanism through which the institutional construction can occur. The mechanism is not new. It is the mechanism by which democratic societies have governed every previous powerful social force: through debate, through the confrontation of competing interests and values, through the slow and frustrating construction of compromises that serve the common good imperfectly but genuinely.
The mechanism is under strain. The speed of the technology is outpacing the speed of deliberation. The opacity of AI systems is undermining the information conditions that deliberation requires. The concentration of AI capability is creating power asymmetries that distort the deliberative process. Each of these strains is real, and none of them is sufficient reason to abandon the mechanism. The alternative to imperfect democratic governance is not better governance. It is governance by structural power — governance by the organizations that control the technology, in their interest, according to their values, without accountability to the people whose lives the technology reshapes.
The capability approach does not provide answers. It provides a framework for asking the right questions — questions that the current discourse is not asking with sufficient rigor, questions that the technology industry's metrics are not designed to detect, questions that the policy community's regulatory frameworks are not structured to address. The questions are specific, testable, and consequential.
Is this deployment expanding or contracting the capability sets of the people it affects? Is the expansion conditional on continued access to a service controlled by a private organization, or is it developing internal capacities that belong to the person? Are the conversion factors — infrastructure, education, financial access, legal protection, political voice — present for the people who need the expansion most, or are they present only for the people who need it least? Are preferences adapting to a contracted capability set in ways that render the contraction invisible to satisfaction-based metrics? Is the concentration of AI capability reproducing existing inequality or dissolving it?
Each question identifies a dimension of the AI revolution that current evaluative frameworks do not measure. Each unmeasured dimension represents a risk — not that the technology will fail, because it will not, but that its success will be distributed in the pattern of every previous technological revolution: concentrated gain for the powerful, distributed cost for the powerless, and a long, painful, politically contested process of institutional construction that eventually channels the technology's benefits broadly — but only after a generation has borne the cost of the transition without the protection that earlier construction could have provided.
The orange pill moment, as The Orange Pill describes it, is the recognition that something genuinely new has arrived and that one cannot unsee it. The Senian orange pill is more specific: it is the recognition that the genuinely new thing is not the technology but the evaluative challenge the technology presents — the challenge of measuring what matters in a world where what is measured determines what is managed, and what is managed determines whether the most powerful expansion of human capability in history becomes the most powerful expansion of human freedom, or merely the most powerful expansion of human output.
The distinction between output and freedom is Sen's life work. It is the distinction between a society that is rich and a society that is developed. Between a person who produces and a person who is free. Between a technology that generates and a technology that liberates. The distinction has never been more consequential than it is now, and it has never been less visible in the discourse that will determine the outcome.
Amartya Sen is ninety-two years old. The framework he built across six decades of meticulous intellectual work anticipated, in its structure if not its specifics, the evaluative challenge that the most powerful technology in human history now presents. The framework tells us what to measure. It tells us what to look for. It tells us which questions to ask. It does not tell us what the answers will be, because the answers depend on choices that have not yet been made — choices about what to value, what to build, what to protect, and what to demand from the institutions that govern the most powerful tools our species has ever created.
The choices are ours. The framework is Sen's. The question that machines cannot originate — what kind of life is worth living, and are we building the conditions that make it possible? — is the question that only beings with stakes in the world can ask.
It remains, as it has always been, a human question. And the quality of the answer we give will be the measure, in the only evaluative space that ultimately matters, of whether we were worthy of the tools we built.
The word that changed how I read every metric on my dashboard is conversion.
Not conversion rates — the kind that marketing teams optimize. Conversion in Amartya Sen's sense: the gap between having something and being able to do something meaningful with it. The gap between access and capability. Between the tool and the freedom the tool was supposed to provide.
I have spent the months since writing The Orange Pill watching this gap widen in real time. In Trivandrum, I celebrated a twenty-fold productivity multiplier — each engineer suddenly capable of work that would have required an entire team. The number was real. The exhilaration was real. What I had not yet learned to see was the question that hides inside numbers like that: multiplied for whom?
Sen never wrote about AI. He built something more useful than a prediction — he built a diagnostic instrument precise enough to reveal what our metrics conceal. We measure output. He measures freedom. We count what the technology produces. He asks what people are substantively able to do with their lives. The difference between these two measurements is the difference between a society that is growing and a society that is developing, and Sen spent sixty years demonstrating that the two are not the same thing.
The argument that struck hardest was the one about the Bengal famine. Not because the moral comparison applies — nobody is starving from AI deprivation — but because the structure is identical. Bengal had enough food. People starved because the distribution system was broken. We have enough AI capability. The question is whether the institutional systems that convert capability into human freedom are being built — and the honest answer, as of this writing, is that they are not being built fast enough, and the people who need them most are the people with the least voice in demanding them.
I keep returning to the concept of adaptive preferences — the mechanism by which people stop valuing what they have lost, because the loss reshapes what they desire. I recognize it now in conversations I have every week with engineers who no longer miss the deep struggle of debugging. They are genuinely satisfied with AI-augmented work. The satisfaction is real. But Sen's framework asks the uncomfortable next question: has the satisfaction been produced by the narrowing of what they expect from their own minds? Are we watching preferences adapt to a contracted capability set and calling the adaptation progress?
I do not have a clean answer. What I have is the question, and the question has changed what I pay attention to. When I review my team's work now, I notice something I used to miss: whether the AI interaction developed a person's judgment or merely substituted for it. Whether the freed time went to harder thinking or just more tasks. Whether the expansion of what someone produced corresponded to an expansion of what someone could become.
These are the questions that Sen's framework teaches you to ask. They are harder than the questions I was asking before. They do not resolve into dashboards. They require the slow, uncomfortable work of looking at a person rather than a metric and asking whether her life — not her output — is genuinely freer than it was.
Thirteen-point-eight billion years of cosmic history, and consciousness appears for a fraction of a fraction of one percent of it. A candle in the darkness. The candle is not defined by what it produces. It is defined by the quality of the light — by whether the light illuminates something worth seeing, something that makes the brief flicker of awareness worthwhile.
Sen would put it differently. He would say: the candle is defined by its capability set — by the range of lives it is genuinely free to live. And whether AI expands that range or narrows it depends on choices that no machine will make for us.
The tools are extraordinary. The question is whether we are building the world in which the tools serve freedom — real freedom, for real people, not just the ones who look like me, not just the ones who live where I live, not just the ones who already had the conversion factors in place before the revolution began.
That question is now the one I carry. It does not fit on a dashboard. It fits in a life, imperfectly, uncomfortably, and I suspect permanently.
— Edo Segal
The AI revolution measures itself in output — productivity multipliers, adoption curves, lines of code generated. Amartya Sen spent sixty years proving that output is the wrong measure. His capability approach asks the question the technology industry has not learned to ask: Is this expansion of power actually expanding the freedom of the people it touches? This book applies Sen's framework — forged in his study of famines where people starved surrounded by food — to a technological revolution where capability is abundant but the conditions to convert it into genuine human freedom are not. It examines conversion factors, adaptive preferences, the gap between formal access and substantive opportunity, and why the most celebrated metrics of the AI era may be systematically concealing the things that matter most. The resource is not the problem. The distribution system is. Sen's map was drawn for a different landscape, but its contours fit the present terrain with unsettling precision.

A reading-companion catalog of the 14 Orange Pill Wiki entries linked from this book: the people, ideas, works, and events that Amartya Sen — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →