By Edo Segal
The question that stopped me cold was not about technology. It was about a curtain.
Rawls asks you to imagine designing the rules of a society — who gets what, who bears what cost, who decides — from behind a veil that hides one piece of information: which person in that society you will be. You do not know if you are the builder or the displaced. The CEO or the data labeler. The engineer in Trivandrum whose capability just expanded twenty-fold, or the engineer sitting next to her whose specialized expertise just became a commodity available for a hundred dollars a month.
You do not know. And from behind that not-knowing, you must choose.
I have spent this entire journey thinking about amplification. About the river of intelligence and how to build in it. About the beaver and the dam. About the question "Are you worth amplifying?" Rawls forced me to confront a question I had been skating past: Who decides the rules under which amplification happens? And would those rules survive a test that no one in the technology industry has bothered to apply?
The difference principle — Rawls's most demanding idea — states that inequalities are justified only if they work to the benefit of the least advantaged. Not the average person. Not the aggregate. The person at the bottom. The administrative assistant whose role just vanished. The student in a public school that hasn't updated its curriculum since before the threshold was crossed. The community whose economic foundation was restructured without anyone asking permission.
I kept my team. I chose growth over cuts. I describe that choice in this book as the Beaver's ethic. Rawls made me see what that choice actually was: a personal decision, made from a known position, dependent on my particular values. The difference principle does not depend on anyone's goodness. It demands institutions — structures that produce fair outcomes regardless of whether the person running the company happens to be generous this quarter.
That is a harder standard than anything I have held myself to. It is also the right one.
We are in Stage Four of the pattern — adaptation. The institutions governing this transition are being designed right now, in boardrooms and legislatures and classrooms. Rawls insists that the design meet a standard most of us have not even considered: Would the people who bear the greatest costs choose these arrangements, if they had the power to choose?
This book is that standard, applied with rigor, to the moment we are living through. It will not make you comfortable. It will make you more honest about what building responsibly actually requires.
-- Edo Segal × Opus 4.6
John Rawls (1921–2002) was an American political philosopher widely regarded as the most important liberal thinker of the twentieth century. Born in Baltimore, Maryland, he served in the Pacific during World War II before studying at Princeton and later joining the faculty at Harvard University, where he taught for more than thirty years. His magnum opus, *A Theory of Justice* (1971), revived the social contract tradition and introduced the concepts of the "original position" and the "veil of ignorance" — a thought experiment in which rational agents design the principles of a just society without knowing what position they will occupy within it. From this framework he derived the "difference principle," which holds that social and economic inequalities are permissible only when they benefit the least advantaged members of society. His subsequent works, including *Political Liberalism* (1993) and *Justice as Fairness: A Restatement* (2001), refined and extended these ideas to address the challenge of pluralism in democratic societies. Rawls's framework has shaped decades of debate across philosophy, law, economics, and public policy, and remains the dominant point of reference for discussions of distributive justice worldwide.
In 1971, a quiet professor at Harvard published a book that would become the most influential work of political philosophy in the twentieth century. John Rawls was not a public intellectual as the term is usually understood. He did not appear on television. He did not write polemics. He wrote with the patience of a person building a cathedral — one carefully placed stone at a time, each load-bearing, each tested against the weight of what would rest upon it. A Theory of Justice runs to nearly six hundred pages of dense, methodical argument, and its central contribution is a thought experiment so simple that a child could grasp it and so powerful that five decades of philosophy have not exhausted its implications.
The thought experiment is this. Imagine you must design the rules that will govern a society — the institutions that determine who gets what, who does what, who decides what. The tax code. The educational system. The property laws. The labor regulations. The rules governing how economic gains are distributed and how economic losses are absorbed. You must choose these rules. But you must choose them from behind what Rawls called "the veil of ignorance" — a condition in which you do not know what position you will occupy once the rules take effect.
You do not know whether you will be wealthy or poor. You do not know whether you will be talented or ordinary. You do not know your race, your gender, your nationality, your religion, your conception of what makes a life worth living. You do not know whether you will be healthy or sick, young or old, educated or not. You know only that you will be someone, somewhere, subject to whatever rules you choose. The veil strips away every piece of information that would allow you to rig the system in your own favor. What remains is the bare rationality of a person who must choose wisely under conditions of radical uncertainty about their own fate.
This is not a parlor game. Rawls called it "the original position," and he intended it as the most rigorous method available for identifying principles of justice that any rational person could accept. The reasoning proceeds as follows. Behind the veil, you cannot design institutions that favor the rich, because you might be poor. You cannot design institutions that favor the talented, because you might be ordinary. You cannot design institutions that favor one race, one gender, one nationality, because any of these might be yours. The only rational strategy, Rawls argued, is to design institutions that make the worst possible position tolerable — because you might occupy the worst possible position, and you cannot afford to gamble on the assumption that you will not.
The philosophical term for this strategy is "maximin" — maximize the minimum. Choose the arrangement under which the worst-off person is as well-off as possible. Not because you are altruistic. Not because you care about the poor as a matter of sentiment. But because you are rational, and rationality under conditions of radical uncertainty about your own position demands that you protect against the worst outcome.
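For readers who think in code, the contrast between maximin and the utilitarian rule can be made concrete. The following sketch is purely illustrative: the arrangement names and welfare numbers are invented, and no real society reduces to three numbers. But the divergence between the two decision rules is exactly the one Rawls identified.

```python
# Illustrative only: each "arrangement" assigns welfare levels to positions
# in society. Behind the veil, you do not know which position will be yours.
arrangements = {
    "laissez_faire": [100, 40, 5],   # enormous top, crushed bottom
    "moderate":      [70, 50, 20],
    "egalitarian":   [40, 38, 35],
}

def utilitarian_choice(options):
    """Pick the arrangement with the greatest total welfare."""
    return max(options, key=lambda name: sum(options[name]))

def maximin_choice(options):
    """Pick the arrangement whose worst-off position is best off."""
    return max(options, key=lambda name: min(options[name]))

print(utilitarian_choice(arrangements))  # laissez_faire (total 145)
print(maximin_choice(arrangements))      # egalitarian (worst-off gets 35)
```

The utilitarian rule happily accepts the arrangement that crushes the bottom position; maximin refuses it, for precisely the reason Rawls gives: behind the veil, that position might be yours.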
Rawls never wrote about artificial intelligence. A Theory of Justice mentions technology on precisely three occasions, each of them in passing. When Rawls constructed the original position, he explicitly excluded knowledge about a society's level of technological development from the information available behind the veil. The parties in the original position do not know whether they are designing institutions for an agrarian society or an industrial one, a pre-digital economy or a post-digital one. This exclusion was deliberate. Rawls believed that principles of justice should hold regardless of technological circumstance — that what fairness requires does not change when the tools change, even when the tools change everything else.
This belief is both the strength and the limitation of Rawlsian theory as applied to the present moment. The strength is that the framework does not require updating to accommodate AI. The veil of ignorance works whether the society in question uses hand looms or large language models. The difference principle — which the next chapter will examine in detail — applies with equal force to the distribution of gains from steam engines and from neural networks. The limitation is that Rawls's framework, precisely because it operates at the level of general principle, does not specify the particular institutions that justice requires in a particular technological context. It tells you what the institutions must achieve. It does not tell you what they must look like.
The AI transition demands both. It demands the principled clarity that Rawls provides — the insistence that the distribution of gains and losses is a matter of justice, not merely of efficiency or market dynamics. And it demands the institutional specificity that Rawls deliberately left to what he called the "legislative stage" — the stage at which general principles are translated into particular laws and policies in light of specific social and economic conditions.
Consider what it means to stand behind the veil of ignorance in the winter of 2025, the winter Segal describes in The Orange Pill. Behind the veil, you do not know whether you are the Google principal engineer who sat down with Claude Code and watched a year of her team's work reproduced in an hour. You do not know whether you are the engineer's teammate whose expertise was rendered demonstrably redundant in that same hour. You do not know whether you are the developer in Lagos for whom AI tools represent the first genuine opportunity to build at scale, or the senior architect in San Francisco for whom those same tools represent the dissolution of a career built through decades of patient, friction-rich learning. You do not know whether you are the parent whose child asked at dinner whether homework still matters, or the child who asked the question, or the teacher who must answer it tomorrow morning without knowing whether the answer is true.
Behind the veil, you do not know whether you possess the cognitive flexibility to thrive in the new landscape — what Segal calls the capacity to "fight" rather than "flee" — or whether your particular configuration of talents and training has positioned you precisely in the path of the disruption. You do not know whether the "twenty-fold productivity multiplier" that Segal documents will multiply your output or eliminate your role. You do not know whether you are the founder who can now prototype a product over a weekend or the technical co-founder whose decade of training has been commoditized by a hundred-dollar subscription.
This uncertainty is not hypothetical. It is the lived condition of millions of people in the present moment. What Rawls's framework adds to this lived condition is the insistence that the uncertainty be taken seriously as a basis for institutional design — that the rules governing the AI transition be designed as though any participant might occupy any position, and that no arrangement is just if it fails to protect the person who draws the worst hand.
The objection that arises immediately, and that must be addressed with the seriousness it deserves, is that this framework is too conservative. The maximin strategy, critics argue, sacrifices too much potential gain for the majority in order to protect the worst-off. Why should society forgo enormous aggregate benefits simply because some people will be displaced? Is it not better, in utilitarian terms, to maximize total welfare even if some individuals bear disproportionate costs?
Rawls's response to this objection is the foundation of his entire theory, and it is worth stating precisely because it cuts against the dominant moral logic of the technology industry. The utilitarian calculation — maximize total welfare — treats individuals as vessels for the production of aggregate good. It permits, in principle, the sacrifice of some for the benefit of many. It permits an arrangement in which ninety-five percent of the population gains enormously from AI while five percent is crushed, provided the total gains exceed the total losses. Rawls rejected this calculation as a matter of principle, not merely of sentiment. The separateness of persons — the fact that each person lives one life, bears one set of costs, experiences one trajectory of flourishing or suffering — cannot be dissolved into an aggregate. The five percent who are crushed are not compensated by the ninety-five percent who flourish. They are simply crushed.
Iason Gabriel, a researcher at Google DeepMind, made this point with precision in his 2022 paper "Toward a Theory of Justice for Artificial Intelligence," published in MIT Press's Daedalus. Gabriel argued that the basic structure of society should be understood as a composite of sociotechnical systems, and that the operation of these systems is increasingly shaped and influenced by AI. Consequently, egalitarian norms of justice apply to the technology when it is deployed within these systems. Gabriel's key insight — one that reframes much of the conversation around "AI ethics" — is that the moral properties of algorithms are not internal to the models themselves but rather a product of the social systems within which they are deployed. A language model is not just or unjust. The institutional arrangement within which the model operates — who benefits, who bears costs, who decides, who is excluded — is just or unjust.
This reframing is essential. The technology industry's dominant moral framework is individualistic: individual founders making individual choices, individual users exercising individual agency, individual workers adapting to individual circumstances. Rawls's framework insists that this individualism is philosophically insufficient. Individual choices occur within institutional structures, and those structures determine the range of choices available. A displaced worker's "choice" to retrain is not a free choice if the retraining infrastructure does not exist, if the worker cannot afford to stop earning income during the retraining period, if the retraining programs available are designed for the convenience of the institution rather than the needs of the worker. A developer's "choice" to build responsibly is not meaningful if the market structure rewards speed over care and punishes the company that pauses to consider downstream effects.
The moral weight falls not on individuals but on institutions — and institutions are designed, maintained, and reformed by collective action, not individual virtue.
This is why the original position matters for the AI transition. Not as a metaphor. Not as a thought experiment to be admired and set aside. As a method — the most rigorous method available — for designing institutions that no rational person could reject. Behind the veil, where any participant might be the displaced worker or the empowered builder, the rational choice is to design institutions that protect the worst-off position. Not because the worst-off position is the most common. Not because the worst-off deserve special sympathy. But because any of us might be there, and an arrangement that fails to protect the worst-off is an arrangement that asks some people to bear the costs of others' gains without their consent.
The empirical evidence supports this reasoning with striking force. In 2023, a team of researchers at Google DeepMind, led by Laura Weidinger, published a study in the Proceedings of the National Academy of Sciences that operationalized the veil of ignorance as an experimental protocol. Across five incentive-compatible studies with over 2,500 participants, they asked people to choose principles to govern an AI assistant. Some participants chose from behind the veil — without knowledge of their relative position in the group. Others chose with full knowledge of their position. The result was consistent and robust: participants behind the veil showed a clear preference for principles that instructed the AI to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explained these choices. They appeared to be driven by elevated concerns about fairness — precisely as Rawls predicted.
The study demonstrated something remarkable. When people are placed in a position of genuine impartiality — when they cannot rig the system in their own favor — they choose justice over advantage. The veil of ignorance is not merely a philosopher's device. It is a description of the moral reasoning that human beings actually engage in when the conditions for impartial judgment are met.
The conditions are not currently met. Every participant in the debate about AI's future — the triumphalists, the elegists, the builders, the displaced, the investors, the regulators — argues from a known position. The technology CEO argues for arrangements that favor technology CEOs. The displaced professional argues for arrangements that protect displaced professionals. The investor argues for arrangements that maximize returns. Each position is understandable. None is impartial.
Rawls's contribution is the insistence that impartiality is not optional. It is a requirement of justice. And the method for achieving it — the veil of ignorance, the original position, the discipline of choosing as though you might be anyone — is not a luxury that can be deferred until the transition is complete. It is the condition under which just institutions are designed. It must operate now, during the transition, when the stakes are highest and the temptation to argue from known positions is strongest.
The question, then, is not whether AI creates value. It manifestly does. The question is whether the institutions governing the AI transition are ones that rational people would choose if they did not know what position they would occupy. By this standard — the only standard that Rawls's framework recognizes as legitimate — the current arrangements fail. The next chapter examines why.
The difference principle is the most controversial and the most powerful element of Rawls's theory of justice. It states, with a precision that admits no evasion, that social and economic inequalities are permissible only if they are arranged to the greatest benefit of the least advantaged members of society.
This formulation requires careful unpacking, because each word carries philosophical weight that casual usage tends to erode. "Social and economic inequalities" refers not to individual transactions but to the structural patterns of advantage and disadvantage produced by the basic institutions of society — the tax code, the property regime, the labor market, the educational system, the corporate governance framework. "Permissible" means that inequality is not inherently unjust; what is unjust is inequality that fails to benefit those at the bottom. "Greatest benefit" means that among all possible institutional arrangements, the just arrangement is the one that makes the least advantaged as well-off as possible. And "least advantaged" refers to the group of people who occupy the worst position in the distribution of what Rawls called "primary goods" — the things that every rational person wants regardless of their particular conception of the good life: income, wealth, opportunities, the social bases of self-respect.
The difference principle does not require equality. This is the point that both its critics and its most enthusiastic interpreters frequently miss. Rawls did not argue that a just society must eliminate all differences in wealth, income, or status. A society in which some people earn more than others, in which some positions carry greater authority and compensation, can satisfy the difference principle — provided that the inequalities serve to improve the condition of those at the bottom. The talented surgeon who earns more than the hospital janitor does not violate the difference principle if the institutional arrangements that produce this inequality — the medical training system, the compensation structure, the labor market — also produce better healthcare for the least advantaged than any alternative arrangement would.
The key word is "alternative." The difference principle does not ask whether the least advantaged are better off than they would be with no institutions at all. It asks whether they are better off than they would be under any alternative institutional arrangement. If a different tax code, a different educational structure, a different set of labor regulations would make the least advantaged better off without reducing the total gains, then the current arrangement is unjust — regardless of how much total wealth it produces, regardless of how many people in the middle benefit, regardless of how the arrangement compares to some hypothetical state of nature.
Applied to the AI transition, the difference principle asks a question that the technology industry has been remarkably successful at avoiding: Does the current distribution of AI's gains benefit the least advantaged?
The gains themselves are not in dispute. Segal documents them with the honesty of a builder who has lived inside the transformation. The twenty-fold productivity multiplier in Trivandrum. The engineer who built a complete user-facing feature in two days without prior frontend experience. The solo founder who shipped a revenue-generating product without writing a line of code by hand. The collapse of the imagination-to-artifact ratio to the width of a conversation. These are real expansions of human capability, and no honest assessment can deny them.
The distribution of those gains is where the difference principle finds its purchase. In the first eight weeks of 2026, as Segal records, a trillion dollars of market value shifted in the software industry. Workday fell thirty-five percent. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. The value did not evaporate. It moved — from established software companies to the AI platforms and their investors, from the incumbents to the disruptors, from the many to the few. The SaaS Apocalypse, as the market called it, was not a destruction of value. It was a transfer of value. And the transfer flowed upward — toward the companies that controlled the AI platforms, toward the shareholders of those companies, toward the early adopters with the resources and skills to exploit the new tools most effectively.
Meanwhile, the costs flowed in the opposite direction. The senior professionals whose expertise was commoditized in months. The communities whose economic foundations — built around industries that AI was restructuring — contracted without replacement. The workers documented in the Berkeley study by Ye and Ranganathan, who found that AI did not reduce work but intensified it, producing task seepage into previously protected time, fractured attention, and burnout symptoms that accumulated with the regularity of compound interest. The students whose educational institutions had not adapted to the new landscape. The parents lying awake at two in the morning wondering whether the skills they spent decades acquiring would still be worth anything by the time their children graduated.
The difference principle evaluates this distribution with austere clarity. The concentration of gains in technology companies and their shareholders is justified only if the displaced workers, the commoditized professionals, the restructured communities, and the burned-out knowledge workers are better off under this arrangement than they would be under any alternative. If an alternative arrangement could distribute the gains more broadly — through progressive taxation of AI-generated profits, through robust retraining infrastructure, through income support during the transition period, through investment in the communities most affected — without reducing the total gains, then the current arrangement is unjust by the standard the difference principle sets.
There is a sophisticated objection to this analysis that deserves direct engagement. The objection comes from what might be called the dynamic efficiency argument, and it runs as follows: Any attempt to redistribute the gains of AI will reduce the incentive to produce those gains. Taxation reduces investment. Regulation slows innovation. The total pie shrinks when you try to divide it more fairly, and the least advantaged end up with a larger share of a smaller whole — which may be less, in absolute terms, than their smaller share of the larger whole. Therefore, the arrangement that maximizes total gains and lets distribution follow market dynamics is, paradoxically, the arrangement that best serves the least advantaged.
Rawls was aware of this objection and addressed it directly. The difference principle permits inequalities that serve as incentives for productive activity, provided those inequalities genuinely benefit the least advantaged. If a particular pattern of AI investment requires the expectation of outsized returns to motivate the investment, and if that investment produces gains that ultimately improve the condition of the worst-off, then the outsized returns are permitted. The difference principle does not demand equality of outcome. It demands that inequality serve a purpose — and that the purpose be the benefit of those at the bottom, not merely the enrichment of those at the top.
But the dynamic efficiency argument proves less than its proponents claim. It assumes that the current level of inequality is the minimum necessary to incentivize productive activity. It assumes that technology companies and their investors would cease to innovate if their gains were taxed progressively. It assumes that the market, left to its own dynamics, distributes gains in a way that no deliberate institutional design could improve upon. Each of these assumptions is empirically questionable. The history of technological transitions, as documented by economists Daron Acemoglu and Simon Johnson in Power and Progress, demonstrates that the gains of technology are not automatically shared. They require institutional intervention — labor movements, legislation, regulatory frameworks — to translate into broadly distributed improvements in living standards. The translation is not automatic. It is political.
G.A. Cohen, one of Rawls's most penetrating critics, pressed this point further in Rescuing Justice and Equality. Cohen argued that if the talented could produce the same output without inequality-generating incentives — if the surgeon would perform surgery at a lower wage, if the technology executive would innovate at a lower return — then the inequalities are not justified by the difference principle. They are justified only by the fact that the talented hold their labor hostage to extract rents that the basic structure permits. Cohen's critique does not invalidate the difference principle. It sharpens it. It insists that the principle be applied rigorously, not as a rubber stamp for whatever inequalities the market happens to produce, but as a genuine test: Is this inequality the minimum necessary to secure the gains that benefit the least advantaged? Or is it rent extraction disguised as incentive?
The AI transition makes this question urgent rather than academic. The concentration of gains in a small number of companies is not obviously the minimum necessary to incentivize AI development. The question of whether alternative institutional arrangements — different intellectual property regimes, different data governance frameworks, different tax structures, different labor protections — could distribute the gains more broadly without reducing the total is an empirical question, not a philosophical one. But the difference principle establishes the standard by which the answer must be evaluated: the arrangement is just only if the least advantaged benefit as much as they possibly can. Not as much as is convenient. Not as much as is politically feasible. As much as is possible.
A further complication demands attention. Scholars studying the application of Rawlsian fairness to algorithmic systems have identified what the Swedish researcher Olle Häggström and colleagues call a "missing aggregation property" of the difference principle. The problem is this: achieving Rawlsian fairness at the level of individual algorithmic decisions does not guarantee Rawlsian fairness at the aggregate level. A hiring algorithm that satisfies the difference principle in each individual decision — favoring the least advantaged candidate — may produce aggregate hiring patterns that violate the principle at the societal level, because the effects of individual decisions compound in unpredictable ways.
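The problem can be seen in a toy simulation. Everything in the sketch below is invented (the advantage scores, the pool size, the bias in who applies), but it isolates the mechanism: a hiring rule that is perfectly Rawlsian within each applicant pool, always selecting the least advantaged applicant, can still deliver almost nothing to society's least advantaged, because they rarely enter a pool in the first place.

```python
import random

random.seed(0)

# A small society of advantage scores; 0 is the least advantaged.
society = [random.random() for _ in range(10_000)]

def draw_pool(people, size=5):
    # Pool entry is biased toward the advantaged (credentials, networks):
    # applicants are sampled with probability weighted by advantage squared.
    return random.choices(people, weights=[p * p for p in people], k=size)

# Locally Rawlsian rule: in every pool, hire the worst-off applicant.
hires = [min(draw_pool(society)) for _ in range(1_000)]

bottom_decile_cutoff = sorted(society)[len(society) // 10]
share = sum(h <= bottom_decile_cutoff for h in hires) / len(hires)
print(f"hires drawn from society's bottom decile: {share:.1%}")
# Roughly half a percent, far below the ten percent that decile represents:
# every decision favored its pool's worst-off, yet the aggregate pattern
# still bypasses the societally least advantaged.
```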
This aggregation problem is not a refutation of the difference principle. It is a reminder that the principle applies to the basic structure — the institutional framework within which individual decisions occur — rather than to individual decisions themselves. The just arrangement is not one in which each algorithmic decision satisfies the difference principle. It is one in which the basic structure — the legal framework, the regulatory regime, the tax code, the educational system, the labor protections — produces a distribution that satisfies the principle at the societal level.
The distinction matters because the technology industry has a powerful tendency to individualize moral responsibility. When the distribution of AI's gains is challenged, the response is typically that individuals should adapt — retrain, reskill, pivot, build new capabilities. This response places the burden on the displaced individual rather than on the institutional structure that produced the displacement. Rawls's framework categorically rejects this individualization. The basic structure is the primary subject of justice. Individual adaptation is necessary, but it is not sufficient, and it is unjust to demand individual adaptation while leaving the basic structure unreformed.
The difference principle does not answer every question about the AI transition. It does not specify the optimal tax rate, the ideal retraining program, the correct labor regulation. What it provides is something more fundamental: a standard against which every proposed arrangement can be measured. Does this arrangement benefit the least advantaged as much as any alternative would? If not, it is unjust. The simplicity of the standard is its power. The difficulty of meeting it is the moral challenge of the present moment.
The institutions that govern the AI transition are being designed right now — in boardrooms, in legislatures, in the quiet decisions of companies about whether to retain their teams or reduce headcount, in the choices of educational institutions about how to prepare students for a transformed landscape. Each of these decisions shapes the basic structure. Each is subject to the difference principle. And each, evaluated honestly, falls short.
The question is not whether the gap will close. The question is whether it will close through deliberate institutional design — through the construction of arrangements that rational people would choose behind the veil — or through the slow, painful, costly process of after-the-fact correction that has characterized every previous technological transition. The Luddites paid the price of an absence of just institutions. Their grandchildren benefited from the institutions that were eventually built. The question for the present moment is whether this generation will bear the cost that institutional failure imposes, or whether the institutions can be built in time.
Rawls's answer is unambiguous. The institutions must be built. The difference principle requires it. Justice demands it. The alternative — allowing the market to distribute the gains and losses of AI according to its own dynamics, without institutional intervention designed to benefit the least advantaged — is not a neutral choice. It is a choice that fails the standard of justice. And the failure is not mitigated by the magnitude of the total gains, because the total gains, however large, do not compensate the specific people who bear the specific costs.
Rawls called his theory "justice as fairness" — a phrase that has been repeated so often it has lost much of its original force. The phrase does not mean that justice and fairness are synonyms. It means something more precise and more demanding: that the concept of justice is properly understood as the outcome of a fair procedure. If the procedure for choosing principles is fair — if the parties choosing are genuinely impartial, genuinely ignorant of their own position, genuinely rational — then whatever principles emerge from that procedure are just, by definition. Justice is not discovered. It is constructed, through a process that meets the conditions of fairness.
This procedural conception of justice has a critical implication for the AI transition. It means that the question "Is the AI transition just?" cannot be answered by looking at outcomes alone. A transition that produces enormous aggregate wealth is not just simply because the total is large. A transition that displaces millions of workers is not unjust simply because the displacement is painful. Justice is determined by the procedure: Were the institutions governing the transition designed under conditions of fairness? Were the interests of all affected parties — including the least advantaged — given due weight? Were the arrangements chosen ones that no rational person, ignorant of their own position, would have reason to reject?
The answer, examined with minimal honesty, is no. The institutions governing the AI transition were not designed behind any kind of veil. They were designed by the people who stood to benefit most from the transition — the technology companies, their investors, and the early adopters whose existing advantages positioned them to capture the largest share of the gains. This is not conspiracy. It is the ordinary operation of institutional design in the absence of a fairness constraint. Those with power design institutions that reflect their interests. Those without power bear the consequences.
Rawls did not regard this as inevitable. He regarded it as a failure — a failure of the basic structure, which is the primary subject of justice. The basic structure of society, in Rawls's terminology, consists of the fundamental institutions that distribute the advantages and disadvantages of social cooperation: the constitution, the legal system, the property regime, the tax code, the educational system, the labor market, the corporate governance framework. These institutions are not natural features of the landscape. They are designed, maintained, and reformed by human beings. And because they determine the life prospects of every person subject to them, they must satisfy the requirements of justice.
The AI transition has created a new layer of the basic structure — one that Rawls could not have anticipated but that his framework is designed to evaluate. The platforms that mediate access to AI capabilities. The data practices that determine whose information trains the models. The intellectual property regimes that determine who owns the outputs. The labor arrangements that determine how the productivity gains are distributed between capital and labor. The educational institutions that determine who is prepared for the new landscape and who is left behind. These are not secondary features of the economy. They are the basic structure of the AI age, and they are subject to the same requirements of justice as any other element of the basic structure.
Iason Gabriel made this argument with particular clarity in his 2022 paper. The basic structure of society, Gabriel argued, should be understood as a composite of sociotechnical systems — systems in which human institutions and technological capabilities are intertwined to the point of inseparability. The tax code operates through software. The labor market is mediated by algorithms. Educational institutions deliver instruction through digital platforms. When AI reshapes these systems, it reshapes the basic structure. And when the basic structure is reshaped, the requirements of justice apply — not as an afterthought, not as a corporate social responsibility initiative, not as a regulatory add-on, but as a primary consideration in the design of the new arrangements.
This reframing transforms the conversation about AI ethics. The technology industry's dominant ethical framework focuses on what might be called internal properties of AI systems: Are the algorithms biased? Are the outputs accurate? Is the training data representative? Are the systems transparent? These are important questions, but they are, from a Rawlsian perspective, radically insufficient. The moral properties of an AI system are not internal to the model. They are a product of the social system within which the model operates. A language model that is perfectly unbiased in its outputs can still be deployed within an institutional framework that concentrates gains at the top and distributes costs to the bottom. The model is fair. The arrangement is unjust.
Consider the twenty-fold productivity multiplier that Segal documents in Trivandrum. From a purely technical perspective, this is an unambiguous good. Twenty engineers, each operating with the leverage of a full team, producing more, reaching further, attempting things they could not have attempted before. The imagination-to-artifact ratio compressed to a conversation. The engineer who had never written frontend code building a complete user-facing feature in two days. These are genuine expansions of human capability, and they deserve the celebration they receive.
But from the perspective of justice as fairness, the relevant question is not whether the multiplier is real — it manifestly is — but what basic structure governs how the gains from that multiplier are distributed. If the gains flow primarily to the company's shareholders while the engineers' employment remains contingent, if the productivity increase allows the company to reduce headcount rather than expand capability, if the twenty engineers' enhanced output is captured as margin rather than shared as compensation, then the arrangement may be efficient without being just.
Segal addresses this tension with characteristic honesty when he describes the boardroom conversation about headcount. The arithmetic was clear: if five people can do the work of a hundred, why not have five? The market rewards efficiency. Investors understand headcount reduction. Segal chose to keep and grow his team. But he acknowledges that the choice was his — a personal decision, not a structural guarantee. Another CEO, facing the same arithmetic, might choose differently. And in the absence of institutional constraints that channel the productivity gains toward the workers who produce them, the market's gravity pulls toward the cheaper option.
This is precisely what Rawls meant by the primacy of the basic structure. Individual virtue — the CEO who chooses to keep the team — is admirable but insufficient. Justice requires institutions that produce just outcomes regardless of the virtue of the individuals operating within them. A just basic structure for the AI transition would not depend on the goodwill of individual employers. It would create incentives, regulations, and norms that channel productivity gains toward the least advantaged as a structural feature of the system, not as an act of individual generosity.
The philosopher Salla Westerstrand, in her 2024 paper in Science and Engineering Ethics, argued that Rawlsian principles offer a specific advantage over typical AI ethics frameworks: they work hierarchically, making it easier to identify which principles have priority in each context. Existing AI ethics guidelines, Westerstrand observed, tend to present a menu of values — fairness, transparency, accountability, privacy, beneficence — without specifying how to adjudicate conflicts between them. When transparency conflicts with privacy, when efficiency conflicts with fairness, the guidelines offer no resolution. Rawls's framework does. The first principle — equal basic liberties — takes absolute priority over the second. Within the second principle, fair equality of opportunity takes priority over the difference principle. The hierarchy is strict, lexical, and non-negotiable. Liberty first, then fair opportunity, then distribution. This ordering does not eliminate hard cases, but it provides a structured method for addressing them — something that the amorphous "AI ethics" industry conspicuously lacks.
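Westerstrand's point about hierarchy has a natural computational analogue. In the sketch below, the scores and option names are invented; the point is only that the lexical ordering can be expressed as tuple comparison, under which no gain in a later value can compensate for a loss in an earlier one. That is what distinguishes a strict priority from a weighted trade-off.

```python
from typing import NamedTuple

class Arrangement(NamedTuple):
    name: str
    basic_liberties: float    # first principle: absolute priority
    fair_opportunity: float   # second principle, first part
    worst_off_welfare: float  # second principle: the difference principle

def rank_key(a: Arrangement):
    # Python compares tuples lexicographically, mirroring Rawls's strict
    # ordering: liberty first, then fair opportunity, then distribution.
    return (a.basic_liberties, a.fair_opportunity, a.worst_off_welfare)

options = [
    Arrangement("surveillance_boom", 0.6, 0.9, 0.9),
    Arrangement("open_but_unequal",  1.0, 0.7, 0.4),
    Arrangement("open_and_shared",   1.0, 0.7, 0.6),
]

print(max(options, key=rank_key).name)  # open_and_shared
```

A weighted-sum scorer would let the surveillance option buy back its liberty deficit with welfare gains; the lexical key cannot be bought off, which is the structural feature Westerstrand identifies as missing from menu-style ethics guidelines.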
A critical feature of justice as fairness that bears directly on the AI transition is what Rawls called the "publicity condition." A just society, Rawls argued, is one in which the principles of justice are publicly known and publicly endorsed. Citizens understand the principles governing their institutions, accept those principles as fair, and can see that the institutions actually operate according to them. This condition is not merely aspirational. It is constitutive. An arrangement that is just in substance but opaque in operation fails the publicity condition and is therefore, in Rawls's framework, not fully just.
The AI industry's relationship with publicity is, to put it mildly, strained. The algorithms that increasingly shape the basic structure — hiring algorithms, credit-scoring algorithms, content recommendation algorithms, the models that determine which workers are retained and which are displaced — operate with a degree of opacity that would have troubled Rawls profoundly. Grace and Bamford, in their 2020 analysis of UK government uses of AI through a Rawlsian lens, quoted Rawls directly on this point: "In a well-ordered society, one effectively regulated by a shared conception of justice, there is also a public understanding as to what is just and unjust." The requirement is not merely that the principles be correct but that they be known — that citizens can see the principles operating, can evaluate whether they are being followed, can hold institutions accountable when they are not.
The opacity of algorithmic decision-making violates this condition. When a hiring algorithm rejects a candidate, the candidate typically cannot see the principles governing the rejection. When a content algorithm shapes what information a citizen encounters, the citizen typically cannot see the principles governing the selection. When a productivity tool restructures the labor market, the workers affected typically cannot see the principles governing the restructuring. The opacity is not incidental. It is structural — built into the business models of the companies that deploy the systems, protected by intellectual property law, defended as competitive advantage.
From the perspective of justice as fairness, this opacity is not merely inconvenient. It is unjust. Not because the algorithms are necessarily wrong in their outputs, but because the institutional framework within which they operate fails the publicity condition. Citizens cannot evaluate what they cannot see. They cannot endorse principles they do not know. They cannot hold institutions accountable for standards they have never been told about. The opacity is a structural feature of the basic structure, and it undermines the conditions under which justice is possible.
The AI transition, then, is not merely an economic event or a technological event. It is a restructuring of the basic structure of society — the institutions that determine who gets what, who does what, who decides what, and who knows what. These restructurings are matters of justice. They demand institutions designed under conditions of fairness, governed by principles that benefit the least advantaged, and operated with sufficient transparency that citizens can evaluate and endorse them. By each of these standards, the current arrangements fall short. The question is what arrangements would satisfy them — and that question requires the disciplined application of the veil of ignorance to the specific circumstances of the present moment.
The power of the veil of ignorance is not its abstraction. It is its specificity. The thought experiment works not because it asks a general question about fairness but because it forces a particular exercise of imagination: the genuine, uncomfortable confrontation with the possibility that you might occupy the worst position in the arrangement you are designing.
This confrontation is what most discussions of the AI transition systematically avoid.
The triumphalist literature on AI — the celebration of productivity multipliers, the documentation of solo founders shipping products in weekends, the evangelism of the twenty-fold gain — assumes a position. It assumes the reader is the person who will wield the tool effectively, who will ride the wave rather than be crushed by it, who possesses the cognitive flexibility and the institutional support to thrive in the new landscape. This assumption is not stated. It does not need to be. It is the water in the fishbowl: invisible to the fish, obvious to anyone outside it.
The elegist literature — the mourning of lost depth, the anxiety about commoditized expertise, the fear for children growing up in a world of instant answers — also assumes a position. It assumes the reader is someone with something to lose: a career built through decades of patient mastery, a professional identity rooted in skills that the market once rewarded handsomely, a place in the hierarchy of expertise that the hierarchy itself has begun to dissolve. The elegist's position is more sympathetic than the triumphalist's, perhaps, but it is equally partial. It cannot see the developer in Lagos for whom the same tools represent liberation rather than loss.
Rawls's veil strips both positions away. Behind the veil, you do not know whether you are the triumphalist or the elegist. You do not know whether AI represents your liberation or your displacement. You do not know whether you possess the adaptability that the new landscape rewards or the specific expertise that the new landscape renders obsolete. And this radical uncertainty — this genuine not-knowing — is what transforms the question from "What is best for people like me?" to "What is just for everyone?"
The exercise requires concreteness. Abstract uncertainty produces abstract principles. Specific uncertainty — the vivid confrontation with specific possible positions — produces principles that account for real human circumstances. So consider, with the specificity that justice demands, the positions you might occupy behind the veil.
You might be the senior software architect whom Segal describes at a San Francisco conference — the one who spent twenty-five years building systems and could feel a codebase the way a doctor feels a pulse. Behind the veil, you do not know this. You do not know that you have spent twenty-five years depositing thin layers of understanding through friction, through the specific resistance of systems that did not do what you expected. You do not know that your embodied knowledge — the capacity to sense that something is wrong before you can articulate what — was built through thousands of hours of patient, difficult, often frustrating work. All you know is that you might be this person. And if you are, the AI transition has not merely changed your job. It has eroded the foundation of your professional identity. The market still needs you, perhaps — but differently, and less, and for how long?
You might be the non-technical founder whom Segal celebrates — the person with ideas and ambition but without the institutional infrastructure to realize them. Behind the veil, you do not know whether your ideas are good or whether your ambition is matched by judgment. You know only that the barrier between your imagination and its realization has collapsed to a conversation. If you are this person, the AI transition is liberation. The tools have granted you access to capabilities that would have required a team, a runway, and years of training just twenty-four months ago. The imagination-to-artifact ratio has dropped from infinity to an evening.
You might be the engineer in Trivandrum who had never written frontend code and built a complete user-facing feature in two days. If you are this person, the transition is exhilarating. New capabilities. New reach. The boundaries of your role have dissolved, and the dissolution feels like freedom. But you might also be the colleague sitting next to that engineer, the one whose specialized frontend expertise — built over years, rewarded by the market, central to professional identity — is now accessible to anyone with a subscription and the ability to describe what an interface should feel like. If you are this person, the dissolution of boundaries feels less like freedom and more like erasure.
You might be the twelve-year-old who asked her mother, "What am I for?" — the child who has watched machines do her homework, compose songs, write stories, and is now lying in bed wondering what remains for a human being to contribute. If you are this child, the existential question is not philosophical. It is immediate, practical, and urgent. The adults around you cannot answer it clearly, because they are asking the same question themselves. The educational institutions that are supposed to prepare you for the world have not adapted to the world that actually exists. The skills your parents spent decades acquiring may or may not be relevant by the time you enter the workforce. Nobody knows. The uncertainty is not abstract. It is the texture of your daily life.
You might be the spouse whose partner cannot stop building — the one who wrote "Help! My Husband is Addicted to Claude Code" and watched the post go viral because it named a condition that thousands of families recognized. If you are this person, the AI transition is not happening to you directly. It is happening to the person you live with, and through them, to the texture of your shared life. The productive addiction — the compulsion to build that looks like flow and feels like flow but cannot be turned off and erodes the spaces that sustain a relationship — is not your choice. It is something that has been done to the ecology of your home by a tool that was designed to be maximally engaging.
You might be the worker documented in the Berkeley study — the one whose AI-assisted workday expanded to fill every available minute, whose lunch breaks became prompting sessions, whose cognitive rest periods were colonized by the possibility of one more task, whose burnout accumulated invisibly until it manifested as diminished empathy, deteriorating relationships, and the flat affect of a nervous system that had been running too hot for too long. If you are this person, the productivity gains are real. The cost is also real. And the cost is borne entirely by you, while the gains are captured largely by the institution that employs you.
Behind the veil, any of these positions might be yours. The rational response, Rawls argued, is to choose institutions that make the worst position as tolerable as possible. Not because the worst position is the most likely — in aggregate, the AI transition may benefit far more people than it harms — but because you cannot afford to gamble. The stakes are too high. The worst position is not merely inconvenient. It is the dissolution of professional identity, the erosion of family life, the existential uncertainty of a child who cannot see her place in the world, the burnout that compounds until it breaks something that cannot easily be repaired.
This reasoning illuminates a feature of the AI transition that purely economic analyses tend to obscure. The costs of the transition are not only economic. They are identity costs — the loss of the professional self that was built through decades of patient work. They are relational costs — the erosion of family time, the colonization of shared spaces by productive compulsion. They are existential costs — the child's question, "What am I for?", which is not a question about employment but about meaning. And they are cognitive costs — the atrophy of capacities that friction once built and that frictionlessness quietly dissolves.
Economic analyses capture the first of these costs and miss the other three. The difference principle, properly understood, captures all four, because what Rawls called "primary goods" — the things that every rational person wants regardless of their conception of the good life — include not only income and wealth but also opportunities, powers, and the social bases of self-respect. A transition that provides economic gains while eroding the social bases of self-respect — the sense that one's skills matter, that one's expertise is valued, that one's contribution is meaningful — is not a transition that benefits the least advantaged in the full Rawlsian sense. The economic gain may be real. The loss of self-respect is also real. And the two do not cancel out, because primary goods are not fungible. You cannot compensate a person for the loss of their professional identity by giving them money, any more than you can compensate a person for the loss of their liberty by giving them comfort.
This is the point at which Rawls's framework diverges most sharply from the utilitarian reasoning that dominates the technology industry. Utilitarian analysis aggregates. It adds up benefits and costs across the entire population, nets them out, and pronounces the result good or bad. If the total benefits exceed the total costs, the transition is justified. The fact that some specific individuals bear enormous costs while other specific individuals capture enormous gains is, in the utilitarian calculus, regrettable but acceptable — the price of progress.
Rawls refused this calculus. The separateness of persons means that gains to one person do not compensate losses to another. The engineer whose professional identity has been dissolved is not compensated by the fact that a thousand other engineers have been empowered. The child who lies awake wondering what she is for is not compensated by the fact that her generation will have access to tools of unprecedented power. The spouse whose partner cannot stop building is not compensated by the fact that the building produces something valuable. Each person bears their own costs. Each person lives their own life. And justice requires that the institutions governing the transition be designed with each person's potential costs in mind — not aggregated away into a net calculation that erases the individual behind the average.
Martha Nussbaum, extending Rawls's framework through the capabilities approach, sharpened this point further. What matters for justice is not merely what people have but what they are able to do and to be — their capabilities. A just society ensures that every person possesses the capabilities necessary for a life of dignity: the capability to think, to imagine, to reason, to form relationships, to participate in decisions that affect their lives, to experience emotions, to play, to have control over their material and political environment. The AI transition, evaluated through this lens, presents a mixed picture. Some capabilities are expanded enormously — the capability to create, to build, to access information and tools that were previously gated by institutional barriers. Other capabilities are threatened — the capability to experience productive struggle, to develop embodied expertise through friction, to maintain relationships against the encroachment of productive compulsion, to exercise genuine autonomy rather than the simulacrum of autonomy that consists of choosing among options pre-selected by an algorithm.
Behind the veil, you do not know which capabilities will be expanded and which will be threatened in your particular case. You know only that both possibilities are real. And the rational response — the response that no rational person behind the veil would have reason to reject — is to choose institutions that protect the threatened capabilities while enabling the expanded ones. Not institutions that maximize the expansion regardless of the threat. Not institutions that protect the threatened capabilities at the cost of suppressing the expansion. Institutions that do both, simultaneously, through the careful, ongoing, never-completed work of institutional design that justice demands.
The veil of ignorance is not a counsel of paralysis. It does not say: stop building. It does not say: slow the transition. It says: build, but build justly. Expand capability, but protect the people who bear the costs. Design institutions that no rational person — ignorant of their own position, uncertain of their own fate, confronting the genuine possibility that they might be the one who is crushed rather than the one who soars — would have reason to reject.
The institutions that currently govern the AI transition do not meet this standard. They were not designed behind any veil. They were designed by interested parties — parties who knew their position, who designed for their own advantage, who externalized costs to others without those others' consent. Rawlsian justice demands that this design be revisited — not after the transition is complete, when the costs have already been borne and the institutions have already calcified, but now, during the transition, when the basic structure is still fluid enough to be shaped by deliberate choice rather than inertia.
The next chapters examine what that redesign would require: which liberties must be protected, what fair equality of opportunity demands in the AI context, and what institutions would satisfy the difference principle in a world where the gains are enormous, the costs are real, and the distribution of both is a matter of justice rather than market dynamics.
The argument to this point has been procedural — establishing the framework within which the justice of the AI transition can be evaluated. The veil of ignorance provides the method. The difference principle provides the standard. Justice as fairness provides the overarching conception. The specificity of the positions one might occupy behind the veil provides the moral urgency.
Now the framework must be applied. And the application yields a conclusion that is uncomfortable but, given the evidence, unavoidable: the current distribution of AI's gains fails the difference principle. The arrangement governing the AI transition is unjust.
This conclusion requires careful demonstration, because the word "unjust" carries weight that casual usage tends to erode. To say that an arrangement is unjust, in the Rawlsian sense, is not to say that it is regrettable, or suboptimal, or in need of minor adjustment. It is to say that the arrangement violates the fundamental requirements of fairness — that rational people behind the veil of ignorance would reject it, because it fails to make the least advantaged as well-off as possible. The claim is strong. The evidence supporting it is stronger.
Begin with the geography of the gains. The AI transition has produced extraordinary concentrations of value. In the first quarter of 2026, the combined market capitalization of the five largest AI companies exceeded ten trillion dollars. The revenues of the leading AI platforms grew at rates that made the previous decade's technology giants look modest by comparison. Anthropic's Claude Code run-rate revenue crossed $2.5 billion within months of the capability threshold that Segal describes. The venture capital flowing into AI startups set records that surpassed even the peaks of previous technology cycles. These gains are real, and they represent genuine expansions of economic value — not merely transfers from one sector to another but the creation of new capabilities, new products, new forms of productivity that did not previously exist.
The gains are concentrated, however, in a remarkably narrow segment of the population. The direct beneficiaries are the shareholders and employees of AI companies, the venture capital firms that funded them, and the early adopters — predominantly in wealthy nations, predominantly in technology hubs, predominantly possessing the educational credentials and institutional connections that position them to exploit new tools before the broader population has access. The gains flow through a funnel whose narrow end points toward the already-advantaged.
This concentration is not, by itself, unjust. The difference principle permits inequality — even substantial inequality — provided the inequality benefits the least advantaged. The question is not whether the gains are concentrated but whether the concentration produces benefits for those at the bottom that no alternative arrangement could match.
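Stated schematically (a minimal formalization of the standard maximin reading, not notation Rawls himself used): let $A$ be the set of feasible institutional arrangements and $u_i(a)$ an index of the primary goods held by person $i$ under arrangement $a$. Among the arrangements $A'$ that already satisfy the prior principles of equal liberty and fair opportunity, the difference principle selects the one that maximizes the position of the worst-off:

$$a^{*} \;=\; \arg\max_{a \,\in\, A'} \; \min_{i} \; u_i(a), \qquad A' = \{\, a \in A : a \text{ satisfies the prior principles} \,\}$$

Concentration at the top is permissible only when the arrangement that produces it is the maximizer; if any feasible alternative raises the minimum, the test fails.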
Consider the mechanisms through which the gains might flow downward. The most commonly cited is the democratization of capability — the argument, which Segal advances with genuine conviction, that AI tools lower the floor of who gets to build. The developer in Lagos, the student in Dhaka, the non-technical founder anywhere in the world can now access productive capabilities that previously required institutional infrastructure, capital, and years of specialized training. This democratization is real. It represents a genuine expansion of opportunity that no honest assessment can dismiss.
But democratization of access to tools is not the same as democratization of outcomes. The developer in Lagos may access the same model as the developer in San Francisco. She does not access the same market connections, the same institutional support, the same educational infrastructure, the same legal protections, the same financial safety net, the same cultural capital that translates capability into career. Formal equality of access — same tool, same subscription price — coexists with profound substantive inequality in the conditions under which the tool is used. The floor has risen. The ceiling has risen faster. And the distance between them, which is what the difference principle ultimately measures, has not narrowed in the way that the democratization narrative implies.
Amartya Sen, in *The Idea of Justice*, pressed this distinction with characteristic precision. Sen argued that Rawls's focus on the design of ideal institutions was insufficient — that justice requires attention not merely to the institutional framework but to the actual capabilities that people possess and the actual lives they are able to lead. A person who formally has access to a tool but lacks the infrastructure, the training, the stability, and the social support to use it effectively does not possess the capability that the access is supposed to provide. The capability is nominal, not real. And nominal capability, evaluated by the standard of justice, is not enough.
Now consider the geography of the costs. The costs of the AI transition are distributed with a precision that almost exactly inverts the distribution of the gains. The people who bear the greatest costs are those who possess the least capacity to absorb them.
The displaced knowledge workers — the professionals whose expertise has been commoditized by tools that can perform competently in their domains within minutes — bear the most visible costs. But visibility should not be confused with severity. The displaced software architect in San Francisco, painful as the displacement may be, possesses savings, social networks, educational credentials, and a labor market that, however disrupted, still values the judgment and integrative thinking that years of experience produce. The displacement is real. The capacity to recover is substantial.
The workers who bear the severest costs are those whose displacement receives the least attention: the administrative assistants, the junior analysts, the entry-level knowledge workers whose roles served as the first rungs of a ladder that no longer exists. These workers did not possess deep expertise. They possessed the willingness to do the work that constituted the bottom of the professional hierarchy — the work that AI tools can now perform with remarkable competence. Their displacement is not mourned by the elegists, who tend to focus on the loss of deep craftsmanship. It is not celebrated by the triumphalists, who tend to focus on the empowerment of the already-capable. It is simply absorbed, quietly, by the people who can least afford it.
The communities that depended on these workers — the local economies organized around office parks and professional service firms and the restaurants and shops and schools that served them — bear costs that are even less visible and even harder to recover from. Economic restructuring at the community level is not a matter of individual retraining. It is a matter of institutional collapse and institutional reconstruction, a process that takes decades and that, historically, succeeds only when deliberate policy intervention directs investment toward the affected communities. Absent such intervention, the communities do not recover. They decline, and the decline compounds, and the people who remain in them bear costs that no individual adaptation can address.
The students whose educational institutions have not adapted bear costs that are not yet fully visible but that will compound over time. An educational system that prepares students for a world that no longer exists is not merely inefficient. It is a mechanism for producing disadvantage — for channeling young people into trajectories that the labor market will not reward, burdening them with debt for credentials that the market has devalued, and failing to equip them with the judgment, the integrative thinking, and the questioning capacity that the new landscape demands. The cost is borne not by the institutions, which continue to collect tuition and award degrees, but by the students, who discover the mismatch only after the investment has been made.
The difference principle evaluates this distribution with the clarity that the framework provides. The question is not whether the gains are large — they are enormous. The question is whether the arrangement that produces these gains makes the least advantaged as well-off as possible. And the answer, examined against the available alternatives, is no.
Alternative arrangements exist. Progressive taxation of AI-generated profits, directed toward retraining infrastructure, income support during the transition period, and investment in affected communities, could distribute the gains more broadly without eliminating the incentives that drive AI development. The difference principle does not require that the taxation eliminate inequality. It requires that the inequality, after taxation and redistribution, benefit the least advantaged more than any alternative arrangement would. The current arrangement — in which the gains are captured almost entirely by shareholders and early adopters while the costs are externalized to displaced workers, affected communities, and unprepared students — is not the arrangement that maximizes benefit to the least advantaged. It is the arrangement that maximizes benefit to the already-advantaged. And maximizing benefit to the already-advantaged, when an alternative arrangement could better serve the least advantaged, is the definition of injustice under the difference principle.
Different labor protections — portable benefits, retraining guarantees, transition income, requirements that companies investing in AI contribute to the communities affected by the displacement their investments produce — could address the costs borne by displaced workers without suppressing the innovation that produces the gains. The difference principle does not require the suppression of innovation. It requires that innovation's fruits be shared in a way that benefits those who benefit least.
Different educational structures — curricula that teach judgment rather than execution, questioning rather than answering, integration rather than specialization — could prepare students for the landscape that actually exists rather than the one that existed when the curricula were designed. The difference principle requires that educational institutions serve the interests of the least advantaged students, not merely the interests of the institutions themselves.
Different data governance frameworks — giving individuals meaningful control over the data that trains AI models, compensating the communities whose cultural production constitutes the training data, ensuring that the value extracted from collective human knowledge is returned, in some measure, to the collective — could address the extraction that currently flows in a single direction. The training data for large language models includes the creative work, the professional knowledge, the cultural output of billions of human beings. The value of that data, once processed through models that generate enormous revenue, flows almost entirely to the companies that process it. The creators, the workers, the communities whose knowledge constitutes the raw material receive nothing — not because nothing is owed, but because no institutional mechanism exists to ensure that what is owed is paid.
Each of these alternatives is feasible. None requires the suppression of AI development. None demands the elimination of inequality. Each simply redirects a portion of the gains toward the people who bear the costs — exactly as the difference principle requires, exactly as rational people behind the veil of ignorance would demand.
The dynamic efficiency objection returns here, more insistent than before. Any redistribution, the objection holds, will reduce the incentive to innovate. Taxation will drive investment elsewhere. Regulation will slow development. Labor protections will increase the cost of deployment. The total gains will shrink, and the least advantaged will end up with a larger share of a smaller whole.
The objection must be taken seriously, because it contains a genuine insight: incentive structures matter, and institutional design must account for them. But the objection proves less than it claims. The historical record of technological transitions, documented with rigor by Acemoglu and Johnson in *Power and Progress*, demonstrates that institutional intervention to distribute the gains of technology does not suppress innovation. It redirects it. The labor protections of the early twentieth century did not end industrialization. They ended the specific form of industrialization that depended on the exploitation of workers. The environmental regulations of the late twentieth century did not end chemical manufacturing. They ended the specific form of manufacturing that externalized environmental costs. In each case, the intervention produced a period of adjustment — during which the affected industries protested, predicted catastrophe, and eventually adapted — followed by a period of innovation within the new constraints that proved as productive as the innovation that preceded it.
The claim that redistribution will suppress AI innovation is an empirical prediction, not a logical necessity. And the empirical evidence from previous transitions suggests that the prediction is wrong — that innovation adapts to institutional constraints, that the total gains are more resilient than the objection assumes, and that the least advantaged benefit more from a redistributed share of a slightly smaller whole than from a negligible share of a larger one.
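The structure of that claim can be made concrete with a toy comparison. In the sketch below, every number and every label is invented for illustration (no empirical estimate is intended); the point is only that the utilitarian test and the maximin test can rank the same two arrangements in opposite orders:

```python
# Illustrative only: hypothetical payoffs, not empirical estimates.
# Two stylized arrangements, each described by the annual gains
# (in arbitrary units) accruing to three positions in society.

arrangements = {
    # Status quo: larger total, negligible share at the bottom.
    "laissez_faire": {"shareholders": 900, "median_workers": 95, "displaced": 5},
    # Redistribution: slightly smaller total, much higher floor.
    "redistributed": {"shareholders": 700, "median_workers": 130, "displaced": 120},
}

def total(arrangement):
    """Aggregate gains: the quantity the utilitarian test maximizes."""
    return sum(arrangement.values())

def floor(arrangement):
    """Worst-off position: the quantity the difference principle maximizes."""
    return min(arrangement.values())

utilitarian_choice = max(arrangements, key=lambda k: total(arrangements[k]))
maximin_choice = max(arrangements, key=lambda k: floor(arrangements[k]))

print(utilitarian_choice)  # laissez_faire (total 1000 vs 950)
print(maximin_choice)      # redistributed (floor 120 vs 5)
```

The utilitarian criterion selects the larger whole; the difference principle selects the higher floor. The argument of this chapter turns entirely on which of the two tests the institutions are answerable to.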
The conclusion is not that the AI transition is harmful. The conclusion is that the AI transition, as currently governed, is unjust — that the institutional arrangements fail the difference principle, that alternative arrangements could better serve the least advantaged without suppressing the gains, and that the failure to implement these alternatives is a moral failure, not merely a policy gap.
This failure is not inevitable. It is a choice — made by omission rather than commission, perhaps, but a choice nonetheless. The basic structure of the AI transition is being designed in real time, and the design currently reflects the interests of the parties with the most power to shape it. The difference principle demands that the design reflect instead the interests of the parties with the least. The distance between what justice requires and what currently exists is the measure of the moral work that remains to be done.
The preceding chapters have established the framework and the verdict. The framework is justice as fairness — principles chosen behind a veil of ignorance, applied to the basic structure of society, evaluated by the difference principle. The verdict is that the current distribution of AI's gains fails this standard. The concentration of benefits at the top, the externalization of costs to the bottom, the absence of institutional mechanisms designed to redirect gains toward the least advantaged — these features of the current arrangement would be rejected by rational people who did not know which position they would occupy.
The question that follows is constructive rather than critical: What institutions would rational people behind the veil actually choose?
This question must be approached with the methodological care that Rawls brought to it. Rawls distinguished between the choice of principles — which occurs in the original position, behind the veil — and the design of institutions — which occurs at what he called the "legislative stage," where general principles are translated into specific laws and policies in light of the actual social and economic conditions of a particular society. The principles are chosen under conditions of strict impartiality. The institutions are designed with full knowledge of the society's circumstances — its level of economic development, its technological capabilities, its cultural traditions, its particular configuration of advantages and disadvantages.
The principles that rational people would choose behind the veil have been established. Equal basic liberties, in strict priority. Fair equality of opportunity. Distribution of social and economic advantages to the greatest benefit of the least advantaged. These principles are not specific to the AI transition. They are general requirements of justice that apply to any society, at any level of technological development, under any economic conditions.
The institutional question is specific. Given these principles, what institutions does justice require in a society undergoing the AI transition? The answer must account for the particular features of this transition — its speed, its scope, the nature of its gains, the distribution of its costs — while remaining faithful to the principles that govern any just arrangement.
The first institutional requirement is a mechanism for redirecting gains toward the least advantaged. The difference principle is not satisfied by the hope that gains will trickle down. It requires institutional structures that channel gains deliberately — that take a portion of the enormous value created by AI and direct it toward the people who bear the transition's costs.
The most straightforward mechanism is progressive taxation of AI-generated profits, calibrated to the magnitude of the gains and directed toward specific purposes: retraining programs for displaced workers, income support during the transition period, investment in communities whose economic foundations are being restructured. This is not a novel institutional form. It is the same mechanism that societies have used in every previous technological transition to distribute gains that the market would otherwise concentrate. The novelty lies not in the mechanism but in the scale of the gains to be distributed and the speed at which the distribution must occur.
The speed matters because the AI transition moves faster than any previous technological transition. The telephone took roughly seventy-five years to reach fifty million users. ChatGPT reached a hundred million in two months. The institutional mechanisms that distributed the gains of electrification, of the automobile, of the personal computer were built over decades — often after the costs had already been borne by a generation that received no compensation. The AI transition does not afford this timeline. The costs are materializing now. The institutions must be built now. The difference principle does not permit a generation to bear costs that institutional design could prevent, on the promise that future generations will benefit from the institutions that are eventually constructed.
The practical design of such taxation raises questions that political philosophy alone cannot answer — questions about rates, about incidence, about the administrative mechanisms that translate tax revenue into effective programs. These are questions for economists, policymakers, and the democratic process. What the Rawlsian framework provides is the principle that must govern the answers: the taxation must be sufficient to make the least advantaged as well-off as possible, consistent with maintaining the incentives that produce the gains in the first place. Below that threshold, the arrangement is unjust. Above it, the arrangement is permissible.
The second institutional requirement is retraining infrastructure that meets the actual needs of displaced workers, not the convenience of the institutions that provide it. Current retraining programs, where they exist, suffer from a systematic failure that Rawlsian analysis illuminates: they are designed for the median worker rather than for the least advantaged. The programs assume a baseline of educational attainment, digital literacy, financial stability, and time availability that the least advantaged displaced workers frequently do not possess. A retraining program that requires a worker to forgo income for six months while completing an online course is not accessible to a worker who lives paycheck to paycheck and cannot afford a single week without earnings.
Fair equality of opportunity — the second component of Rawls's second principle — requires that retraining infrastructure be designed from the perspective of the least advantaged, not from the perspective of the institution. This means retraining that includes income support for the duration of the program. It means retraining that accounts for the actual educational starting points of the workers it serves, rather than assuming a uniform baseline. It means retraining that is located where the displaced workers are, not where the training institutions find it convenient to operate. And it means retraining that prepares workers for the capabilities the new landscape actually rewards — judgment, integrative thinking, the capacity to ask good questions, the ability to direct AI tools wisely — rather than for the technical skills that AI is in the process of commoditizing.
The third institutional requirement is investment in the communities most affected by the transition. Economic restructuring at the community level cannot be addressed by individual retraining alone. When the industries that supported a community's economy are displaced, the entire ecosystem — the schools, the healthcare facilities, the local businesses, the social institutions — is affected. Recovery requires deliberate investment directed toward building new economic foundations, not merely retraining individual workers for jobs that may exist elsewhere. The difference principle requires that this investment be funded by the gains of the transition — that the communities bearing the costs receive a portion of the value that the transition creates.
Carl Benedikt Frey, in *The Technology Trap*, documented with meticulous historical detail what happens when this investment does not occur. The communities displaced by the Industrial Revolution did not recover automatically. Many did not recover at all. The workers who bore the costs of that transition passed the costs to their children, who passed them to their children, in a compounding cycle of disadvantage that persisted for generations. The communities that eventually recovered did so because of deliberate institutional intervention — public investment in infrastructure, education, and economic development that redirected the gains of industrialization toward the places that had borne its costs. The intervention was late. It was insufficient. It came after enormous human suffering that earlier intervention could have prevented. The AI transition need not repeat this pattern. The difference principle demands that it not.
The fourth institutional requirement is reform of the educational system — not as a matter of pedagogical preference but as a requirement of justice. The current educational system, in most nations, is designed for a world that no longer exists. It teaches execution rather than judgment. It rewards the production of correct answers rather than the formulation of good questions. It trains students for the specific technical skills that AI is commoditizing rather than for the integrative capabilities that the new landscape rewards. This mismatch is not merely inefficient. It is unjust, because it channels young people — particularly the least advantaged young people, who depend most heavily on public education and have the fewest alternative pathways — into trajectories that the labor market will not reward.
Fair equality of opportunity requires that the educational system prepare all students — not merely the most advantaged — for the world that actually exists. This means curricula that emphasize judgment, questioning, and the capacity to direct AI tools wisely. It means pedagogical methods that develop the capacity for sustained attention, for sitting with uncertainty, for the productive struggle that builds genuine understanding — the capacities that Segal, following the philosopher Byung-Chul Han, identifies as threatened by the aesthetic of smoothness. It means assessment methods that evaluate not what students can produce with AI assistance but what questions they can formulate, what judgments they can exercise, what connections they can see between domains that appear, on the surface, to be unrelated.
The fifth institutional requirement is data governance that gives meaningful control to the individuals and communities whose data trains AI models. As argued earlier, the training data for large language models is the creative output, the professional knowledge, the cultural production of billions of human beings. This data was not donated. It was extracted — scraped from public websites, harvested from user interactions, accumulated through terms of service that few users read and fewer understood. The value, once processed through models that generate enormous revenue, flows almost entirely to the companies that control the models. The creators, the workers, the communities whose collective knowledge constitutes the raw material receive no share of the value their contributions produced.
The difference principle does not require that this arrangement be reversed entirely — that every individual whose data contributed to a model's training be compensated in proportion to their contribution. The administrative complexity of such a scheme would be prohibitive. What the principle does require is that the institutional framework governing data extraction produce outcomes that benefit the least advantaged. This might take the form of a data dividend — a mechanism through which a portion of the revenue generated by AI models is returned to the public through direct payments, public investment, or funding for the institutions that serve the least advantaged. It might take the form of data governance frameworks that give communities meaningful voice in how their collective knowledge is used. The specific mechanism is a matter for the legislative stage. The principle governing the choice is the difference principle: the arrangement must benefit the least advantaged as much as possible.
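The arithmetic of such a dividend is simple enough to sketch. Every figure below is a placeholder chosen for round numbers (the revenue, the rate, and the eligible population are assumptions, not estimates of any actual company or proposal); the point is that the mechanism is an ordinary transfer, not an administrative impossibility:

```python
# Hypothetical data-dividend arithmetic; every figure is invented
# for illustration, not drawn from any actual proposal or company.

annual_model_revenue = 50_000_000_000   # $50B in model revenue, assumed
dividend_rate        = 0.05             # 5% returned to the public, assumed
eligible_population  = 250_000_000      # eligible recipients, assumed

fund = annual_model_revenue * dividend_rate
per_person = fund / eligible_population

print(f"fund: ${fund:,.0f}")             # fund: $2,500,000,000
print(f"per person: ${per_person:.2f}")  # per person: $10.00
```

Direct payment is only one channel; the same fund could finance public investment or the institutions that serve the least advantaged, as the paragraph above notes.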
The sixth institutional requirement is mechanisms for ongoing adjustment. Rawls recognized that just institutions are not designed once and maintained forever. They require continuous evaluation and reform as circumstances change. The AI transition is moving faster than any previous technological transition, and the institutions governing it must be designed to adapt at a pace that the traditional legislative process cannot match. This means regulatory frameworks that include built-in review mechanisms — automatic triggers for re-evaluation when specified conditions are met, such as a certain level of workforce displacement or a certain concentration of market value. It means advisory bodies with the independence and the expertise to evaluate the effects of AI on the basic structure in real time and to recommend adjustments before the costs compound.
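What a built-in trigger might look like, reduced to its logical skeleton, can be sketched in a few lines. The metric names and threshold values below are hypothetical placeholders (choosing real ones is a legislative-stage question, as the framework itself insists):

```python
# Illustrative sketch of a built-in review trigger; the indicators
# and thresholds are hypothetical placeholders, not proposed values.

from dataclasses import dataclass

@dataclass
class TransitionIndicators:
    displacement_rate: float      # share of workforce displaced, past 12 months
    market_concentration: float   # e.g., revenue share of the top five AI firms

# Assumed trigger thresholds; setting them is a legislative-stage question.
DISPLACEMENT_TRIGGER = 0.03
CONCENTRATION_TRIGGER = 0.60

def review_required(ind: TransitionIndicators) -> bool:
    """Return True if any indicator crosses its trigger, forcing re-evaluation."""
    return (ind.displacement_rate >= DISPLACEMENT_TRIGGER
            or ind.market_concentration >= CONCENTRATION_TRIGGER)

print(review_required(TransitionIndicators(0.04, 0.45)))  # True: displacement trips
```

The substance lies not in the code but in the commitment it encodes: re-evaluation that does not wait on political attention.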
The institutional requirements are demanding. They are also feasible. Every requirement described here has precedent in the institutional responses to previous technological transitions — labor protections, progressive taxation, retraining programs, community investment, educational reform, governance frameworks. What is novel is not the type of institution but the speed at which it must be built and the scale at which it must operate. The AI transition does not afford the luxury of decades-long institutional development. The costs are materializing now. The institutions must be built now.
Behind the veil of ignorance, rational people would choose these institutions — not because they are generous, but because they are prudent. Any rational person, not knowing whether they will be the CEO who captures the gains or the worker who bears the costs, would insist on institutional protections that make the worst position tolerable. The institutions described here do not eliminate inequality. They do not suppress innovation. They redirect a portion of the gains toward the people who bear the costs, through mechanisms that are proven, feasible, and required by the most rigorous standard of justice available.
The failure to build them is not a failure of imagination. It is a failure of political will — a failure that the difference principle identifies as a failure of justice.
Rawls's two principles of justice are ordered lexically — a technical term that carries enormous practical weight. Lexical ordering means that the first principle takes absolute priority over the second. No amount of economic gain, no improvement in the condition of the least advantaged, no satisfaction of the difference principle can justify a violation of equal basic liberties. Liberty comes first. Distribution comes second. The ordering is strict, and it is non-negotiable.
This ordering has implications for the AI transition that cut in directions that neither the triumphalists nor the critics typically acknowledge.
The basic liberties that Rawls identified as inviolable include freedom of thought, freedom of expression, freedom of conscience, freedom of association, the right to hold personal property, freedom from arbitrary arrest and seizure, and the political liberties associated with democratic self-governance. These liberties are not instrumental goods — valued because they tend to produce good outcomes — but constitutive goods: elements of the status of free and equal citizenship that cannot be traded away, even voluntarily, even in exchange for substantial economic benefits.
The first implication of the priority of liberty is that the freedom to develop, deploy, and use AI tools is itself a liberty that must be protected. This may seem obvious, but it is not universally acknowledged. Proposals to ban or severely restrict AI development — motivated by concerns about displacement, about safety, about the concentration of power — have been advanced by serious people with serious arguments. From a Rawlsian perspective, these proposals face a significant burden. They restrict freedom of thought and expression — the liberty to explore, to create, to build, to pursue intellectual inquiry through whatever tools are available. The burden is not insuperable; liberties can be restricted to the extent necessary to maintain a system of equal liberties for all. But the restriction must be justified by the protection of other liberties, not merely by economic or distributional concerns.
The freedom to build with AI is not the freedom of the powerful to exploit the vulnerable. It is also, and perhaps primarily, the freedom of the previously excluded to participate. The developer in Lagos whose access to AI tools represents the first genuine opportunity to build at scale is exercising a basic liberty. The student who uses AI to explore domains previously gated by institutional barriers is exercising a basic liberty. The non-technical creator who can now realize ideas that would have died in the gap between imagination and implementation is exercising a basic liberty. To restrict AI in the name of protecting the displaced is to weigh one set of liberties against another — and Rawls's framework requires that such weighing be done with extreme care, because the restriction of liberty is the gravest institutional act a just society can undertake.
The second implication is more troubling, and it is the one that the technology industry is least prepared to hear. The priority of liberty also protects against the specific threats that AI poses to basic liberties — threats that are not hypothetical but actual, observable, and accelerating.
Consider freedom of thought. Freedom of thought requires not merely the absence of censorship but the presence of conditions under which genuine thinking is possible. A mind saturated by algorithmic content recommendation, trained by design to engage with whatever the platform serves, habituated to the dopamine rhythm of notifications and feeds, is not a mind exercising freedom of thought. It is a mind whose attentional environment has been shaped by systems designed to maximize engagement rather than to support the conditions for independent reflection. Segal, following the philosopher Byung-Chul Han, describes this as the colonization of cognitive space — the erosion of the capacity for genuine thought by tools designed to be more interesting than anything the mind might produce on its own.
The priority of liberty means that this colonization is not merely regrettable. It is a violation of the first principle. If the basic liberties include freedom of thought, and if freedom of thought requires the cognitive conditions under which thought is possible, then institutional arrangements that systematically erode those conditions — algorithmic feeds designed to capture attention, notification systems designed to interrupt reflection, platforms designed to make disengagement feel like deprivation — are arrangements that violate the first principle and must be reformed, regardless of the economic gains they produce.
This is not an argument against AI. It is an argument about the institutional framework within which AI operates. A large language model used as a collaborative tool — the kind of partnership that Segal describes in his work with Claude — does not violate freedom of thought. It enhances it, by providing a conversational partner that holds ideas, finds connections, and returns the user's intention clarified. The violation occurs not in the tool but in the attention-capture mechanisms that surround it — the engagement optimization, the notification architecture, the design patterns that convert voluntary use into compulsive use. The priority of liberty requires institutional intervention at this specific point: not to restrict the tool but to restrict the mechanisms that undermine the cognitive conditions for its free use.
Consider freedom of association. The AI transition is reshaping the organizations within which people associate — the workplaces, the professional communities, the collaborative networks that constitute a significant portion of most people's associational life. When AI tools enable a single person to do the work of a team, the team dissolves. When productivity multipliers allow companies to reduce headcount, the workplace community contracts. When the boundaries between professional roles blur, the professional identities around which communities were organized become less legible. These changes are not inherently unjust. But they affect the conditions under which freedom of association is exercised, and the priority of liberty requires that the effects be taken seriously.
The most significant liberty threat posed by the AI transition may be the one that is hardest to see from inside it: the erosion of what Rawls called the "social bases of self-respect." Self-respect, in Rawls's framework, is the most important primary good — the good without which all other goods lose their value. The social bases of self-respect are the institutional conditions that support a person's sense that their life plan is worth pursuing and that they possess the capabilities to pursue it. These conditions include meaningful work, social recognition, the sense that one's skills and contributions are valued by others.
The AI transition threatens the social bases of self-respect for a significant portion of the population — not by making them unemployed, necessarily, but by making their contributions feel less significant. The senior engineer whose embodied expertise is matched by a junior developer with a subscription. The writer whose craft is approximated by a model trained on centuries of literature. The teacher whose knowledge is accessible to any student with a prompt. These people may retain employment. They may even retain income. But the social recognition of their expertise — the sense that their years of patient mastery are valued by others, that their specific capabilities are needed, that their contribution cannot be easily replaced — is eroded. And the erosion of social recognition is an erosion of the social bases of self-respect.
The priority of liberty requires institutional protection of these bases — not because the displaced professionals deserve special treatment, but because self-respect is a condition of free and equal citizenship, and institutional arrangements that systematically undermine it violate the first principle. What such protection would look like is a matter for institutional design: perhaps expanded professional development pathways that help experienced workers redefine their contribution at higher cognitive levels; perhaps certification and recognition systems that valorize the judgment and integrative capability that experience produces; perhaps workplace structures that create visible roles for the mentorship, the quality assessment, and the architectural vision that seasoned professionals provide. The specific mechanisms are for the legislative stage. The principle is clear: the social bases of self-respect must be protected, because without them, the liberty that the first principle guarantees is hollow.
There is a tension here that must be acknowledged rather than resolved, because premature resolution would be dishonest. The freedom to build with AI and the freedom of thought that AI's attention-capture mechanisms threaten are both basic liberties. The expansion of creative liberty that AI enables and the erosion of self-respect that AI produces are both real effects on the conditions of free and equal citizenship. Rawls's framework does not eliminate these tensions. It provides a structure within which they can be adjudicated — the structure of equal basic liberties, adjusted to the maximum extent compatible with a similar system for all.
The adjustment is the work. It is the ongoing, never-completed institutional labor of ensuring that the expansion of some liberties does not come at the cost of others — that the freedom to build does not require the sacrifice of the freedom to think, that the democratization of capability does not produce the erosion of self-respect, that the gains of the transition are channeled through institutions that protect the conditions of free and equal citizenship for everyone, including those who bear the transition's greatest costs.
Westerstrand's observation that Rawlsian principles work hierarchically — making it easier to identify which principles have priority in each context — is most valuable here. When freedom of thought conflicts with freedom of enterprise, the priority of liberty requires that both be protected to the maximum extent compatible with a similar system for all, with particular attention to the liberty whose erosion would be most damaging to the status of free and equal citizenship. When the expansion of creative capability threatens the social bases of self-respect, the hierarchy requires that self-respect be protected first, because without it, the creative capability has no person to inhabit.
The priority of liberty does not resolve the AI transition's tensions. It orders them. And the ordering — liberty first, then opportunity, then distribution — provides the framework within which just institutions can be designed, evaluated, and reformed. The framework is demanding. It requires that every institutional choice be tested against the conditions of free and equal citizenship. It requires that economic gains, however large, never justify the erosion of basic liberties. And it requires that the liberties protected include not only the obvious ones — freedom of expression, freedom of thought — but the less visible ones whose erosion is harder to see and harder to measure: the cognitive conditions for genuine thinking, the associational fabric of professional life, the social bases of self-respect.
These are the liberties at stake. The next chapter turns to the second component of Rawls's second principle — fair equality of opportunity — and examines what it demands in a world where access to tools is formally equal and substantively anything but.
The difference principle addresses the distribution of outcomes. Fair equality of opportunity addresses the distribution of starting positions. The two are related but distinct, and Rawls gave fair equality of opportunity strict priority over the difference principle — meaning that no arrangement can satisfy the difference principle if it violates fair equality of opportunity, regardless of how much it benefits the least advantaged.
Fair equality of opportunity, in Rawls's formulation, requires that positions in society be open to all who possess the relevant talents and are willing to make the relevant effort — and, critically, that individuals with similar talents and similar willingness should have similar life prospects regardless of their social starting position. The child born into wealth and the child born into poverty, if they possess similar capabilities and similar motivation, should face similar prospects. This does not mean identical outcomes. It means that the institutional framework — the educational system, the labor market, the legal protections, the social infrastructure — must not allow the accident of birth to determine the range of accessible futures.
The principle is demanding, and no existing society fully satisfies it. But the distance between the principle and the reality — the gap between what fair equality of opportunity requires and what the current institutional framework provides — is the measure of the injustice that must be addressed. The AI transition has simultaneously narrowed this gap in certain dimensions and widened it in others, and the net effect depends entirely on the institutions that govern the transition.
The narrowing is real and should be acknowledged with the seriousness it deserves. AI tools have lowered the floor of who gets to build. The barriers to productive participation that previously required years of specialized training, institutional connections, and capital have been substantially reduced for a significant class of work. A person with an idea, a subscription, and the ability to describe what they want in natural language can now produce working software, functional designs, competent analyses. The distance from imagination to artifact has collapsed. This collapse represents a genuine expansion of opportunity — not a theoretical expansion but a practical one, measurable in the number of people who are building things that they could not have built twenty-four months ago.
But the narrowing occurs along one dimension — access to productive tools — while leaving other dimensions of inequality untouched or exacerbated. Fair equality of opportunity is multi-dimensional. It requires not only access to tools but access to the entire ecosystem of conditions that allows tools to be used effectively: education, infrastructure, institutional support, market connections, financial stability, social capital, and the cognitive conditions for sustained, productive engagement.
Consider education, the dimension where the gap between what fair equality of opportunity requires and what the current institutional framework provides is most acute. The educational systems of most nations were designed for a world in which the primary value of education was the transmission of knowledge and the development of technical skills. The AI transition has devalued both of these functions with extraordinary speed. Knowledge is now accessible to anyone with a prompt. Technical skills are being commoditized by tools that can perform competently in domains that previously required years of specialized training. What the new landscape rewards — judgment, integrative thinking, the capacity to formulate good questions, the ability to direct AI tools wisely, the taste that distinguishes what deserves to exist from what merely can exist — is precisely what most educational systems are least equipped to develop.
This mismatch affects all students, but it affects the least advantaged students disproportionately. Students from wealthy families have access to alternative educational pathways — tutoring, enrichment programs, schools with the resources and the flexibility to adapt their curricula to the new landscape. Students from disadvantaged backgrounds depend on public education, which is, in most nations, the slowest institutional sector to adapt. The gap between what these students need and what their schools provide is not a gap that AI tools can close, because the tools themselves require the judgment and the questioning capacity that the schools have failed to develop.
The result is a paradox that fair equality of opportunity requires us to confront directly. AI tools are formally available to all. The capacity to use them effectively is not. And the capacity to use them effectively depends on precisely the educational foundations that the least advantaged students are least likely to possess. Formal equality of access — same tool, same interface, same subscription — coexists with profound substantive inequality in the ability to translate access into capability.
This paradox has historical precedent. The public library system, established in the nineteenth century, provided formal equality of access to books. The capacity to benefit from that access — literacy, leisure time, the habit of reading, the intellectual framework within which the information in books could be processed and applied — was distributed with radical inequality. The library was available to all. The ability to use it was available to some. Fair equality of opportunity was not achieved by building libraries. It was achieved — to the extent it was achieved at all — by building the educational and social infrastructure that allowed all citizens to benefit from the resources the libraries provided.
The parallel to AI is precise. Providing access to AI tools is the equivalent of building the library. It is necessary but not sufficient. Fair equality of opportunity requires building the educational and social infrastructure that allows all people — not merely those with the most advantageous starting positions — to benefit from the tools. This infrastructure includes curricula designed for the world that actually exists, pedagogical methods that develop judgment and questioning capacity, assessment systems that evaluate the skills the new landscape rewards, and the material conditions — nutrition, housing, healthcare, safety — that allow students to learn at all.
Fair equality of opportunity in the AI transition also requires attention to the dimension of infrastructure — the physical and digital systems that determine whether AI tools are accessible in practice, not merely in principle. Connectivity is not equally distributed. In large portions of the world, reliable internet access remains a luxury. Hardware costs, though declining, still represent a significant barrier relative to local wages in many nations. The language models themselves, trained predominantly on English-language data and optimized for the workflows of Western knowledge workers, are less effective for users whose primary language is not English and whose professional context does not match the training distribution. These barriers do not eliminate the gains of AI for the globally disadvantaged. But they attenuate the gains significantly, and the attenuation compounds: a person with unreliable connectivity, limited hardware, and a model that is less effective in their language and context derives substantially less benefit from the same formal access than a person in San Francisco with gigabit fiber, the latest hardware, and a model optimized for their professional workflow.
The difference between formal and substantive equality of opportunity maps onto the distinction, invoked earlier, that Amartya Sen drew between resources and capabilities. What matters for justice, Sen argued, is not what resources people possess but what they are able to do with those resources. A person who possesses access to an AI tool but lacks the education to formulate good prompts, the infrastructure to maintain reliable access, the financial stability to invest time in learning the tool, and the institutional support to translate the tool's outputs into career advancement does not possess the capability that the access is supposed to provide. The capability is nominal. And nominal capability, evaluated by the standard of justice, does not satisfy fair equality of opportunity.
The institutional requirements are substantial but not unprecedented. Educational reform must be designed from the perspective of the least advantaged students, not from the perspective of the educational institutions. Infrastructure investment must be directed toward the communities with the least access, not toward the communities that are already well-served. Language model development must account for the linguistic and professional diversity of the global population, not merely the preferences of the wealthiest users. Financial support must be available for the workers and students who need time to develop the capabilities that the new landscape rewards.
Thomas Pogge, extending Rawls's framework to the global level, argued that the institutional arrangements governing international economic relations impose a duty on wealthy nations to reform structures that produce and perpetuate global poverty. Applied to the AI transition, Pogge's argument suggests that the nations and corporations that capture the largest share of AI's gains bear a corresponding duty to invest in the conditions that would allow the globally disadvantaged to benefit from the same tools. This is not charity. It is a requirement of justice — a requirement that follows from the recognition that the institutions governing the AI transition are global in scope, and that the requirements of fair equality of opportunity do not stop at national borders.
The gap between what fair equality of opportunity requires and what the current institutional framework provides is large and growing. The tools are becoming more powerful. The educational systems are not keeping pace. The infrastructure is not equally distributed. The institutional support that translates formal access into genuine capability is available to the already-advantaged and absent for the least advantaged. Every dimension of this gap represents a failure of justice — not because the technology is unjust, but because the institutional framework within which the technology operates fails to meet the standard that fair equality of opportunity sets.
The standard is not impossibly high. It does not require that every person derive identical benefit from AI tools. It requires that the institutional framework not allow the accident of birth — being born poor rather than rich, in Lagos rather than San Francisco, to parents without education rather than parents with advanced degrees — to determine the range of benefits that a person can derive from tools that are, in principle, available to all. When the framework allows birth circumstances to determine benefit in this way, it violates fair equality of opportunity. And when fair equality of opportunity is violated, no satisfaction of the difference principle — however generous the redistribution, however well-designed the retraining programs, however substantial the investment in affected communities — can make the arrangement just.
This is the force of Rawls's lexical ordering. Fair equality of opportunity is prior to the difference principle. The institutions must be built in order: first, protect liberties; then, ensure fair opportunity; then, and only then, evaluate distribution by the standard of the difference principle. The current discussion of the AI transition has the ordering reversed. It begins with distribution — who captures the gains — and treats opportunity and liberty as secondary concerns. Rawls's framework insists on the correct ordering. The opportunity must be fair before the distribution can be evaluated. And the opportunity, at present, is not fair.
The least advantaged. The phrase recurs throughout Rawls's work with the insistence of a moral compass needle returning to north. Every institutional arrangement, every policy choice, every structural feature of the basic structure must be evaluated from one perspective above all others: the perspective of the people who benefit least.
This insistence is what separates Rawls from the utilitarian tradition that dominates the technology industry's moral reasoning. Utilitarian analysis asks: What is the total benefit? If the total is large enough, the arrangement is justified, regardless of how the benefits are distributed. Rawls asks a different question: What happens to the people at the bottom? If the people at the bottom could be made better off under an alternative arrangement without reducing the total gains, then the current arrangement is unjust — full stop, no exceptions, no appeals to aggregate welfare.
Identifying the least advantaged in the AI transition requires precision that the current discourse rarely provides. The most visible casualties — the senior professionals whose expertise has been commoditized, the software engineers watching their specialized skills matched by junior colleagues with subscriptions — command significant attention. Their losses are real. But they are not, by the Rawlsian standard, the least advantaged. They possess educational credentials, professional networks, accumulated savings, and the cognitive flexibility that decades of knowledge work tend to develop. Their displacement is painful. Their recovery, while neither guaranteed nor easy, draws on substantial reserves.
The least advantaged are elsewhere. They are the administrative workers and junior knowledge workers whose roles constituted the entry points of professional hierarchies that are now being compressed or eliminated. These workers did not possess deep expertise to be commoditized. They possessed willingness and availability — the readiness to perform the routine cognitive labor that organizations required and that AI tools now perform with competence that improves quarterly. Their displacement receives little attention from either the triumphalists or the elegists, because their work was never celebrated. It was simply done, day after day, by people who depended on it for their livelihoods and their sense of participation in the productive economy.
The least advantaged are also the workers in the global supply chains that support the AI infrastructure — the data labelers in Kenya and the Philippines, whose labor of categorizing, annotating, and moderating content makes the training of large language models possible. These workers earn a fraction of what the models they train generate. They are exposed to disturbing content without adequate psychological support. They work under conditions that would be considered exploitative if the work were performed in the countries where the AI companies are headquartered. The institutional arrangements governing their labor were not designed behind any veil of ignorance. They were designed to minimize cost, and the minimization was achieved by locating the work where labor protections are weakest and wages are lowest.
The least advantaged include the communities whose economic foundations are being restructured without their participation in the decisions that produce the restructuring. These communities did not choose to depend on industries that AI would disrupt. They did not choose the educational systems that failed to prepare their children for the new landscape. They did not choose the infrastructure deficits that limit their access to the tools that might allow them to participate in the new economy. Their disadvantage is structural — produced by the basic structure of society, maintained by institutional arrangements that predate the AI transition and that the transition has exacerbated.
The least advantaged include the students in educational systems that have not adapted — particularly the students in public schools in disadvantaged communities, whose schools are the last to receive updated curricula, the last to receive technology infrastructure, the last to attract teachers with the flexibility and training to teach judgment rather than execution. These students bear a double disadvantage: the disadvantage of their starting position, compounded by the disadvantage of an educational system that is preparing them for a world that no longer exists.
The difference principle requires that every institutional arrangement governing the AI transition be evaluated from the perspective of these people specifically. Not from the perspective of the technology executives who capture the gains. Not from the perspective of the early adopters who exploit the tools most effectively. Not from the perspective of the senior professionals whose displacement, however painful, draws on reserves that the least advantaged do not possess. From the perspective of the data labeler in Nairobi. The administrative assistant whose role has been eliminated. The student in a public school in rural Mississippi. The community whose major employer has restructured around AI and reduced its workforce by half.
This evaluation is not sentimental. It is procedural. The difference principle is a test, not a plea. The test asks: Is the current arrangement the one that makes these people as well-off as possible, consistent with maintaining the gains that the transition produces? If an alternative arrangement — different labor protections, different data governance, different educational investment, different taxation — could make them better off without reducing the total gains, then the current arrangement fails the test. And the failure is a failure of justice, not merely of policy.
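One way to make the test explicit, in notation Rawls himself did not use (this formalization is mine, and it simplifies away his index of primary goods), is the maximin criterion from decision theory. Let $A$ be the set of feasible institutional arrangements and let $u_i(a)$ stand for the position of person $i$ under arrangement $a$. An arrangement $a$ fails the test whenever

$$\exists\, a' \in A : \quad \min_i u_i(a') > \min_i u_i(a) \quad \text{and} \quad \sum_i u_i(a') \ge \sum_i u_i(a),$$

that is, whenever some alternative raises the floor without lowering the total. The arrangement the principle selects is the maximin choice, $a^* \in \arg\max_{a \in A} \min_i u_i(a)$, subject to the prior constraints of equal liberty and fair opportunity that the lexical ordering imposes.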
The argument encounters here its most formidable practical objection, distinct from the dynamic efficiency concern addressed earlier. The objection is institutional: even if the principles are correct, the institutions required to implement them cannot be built at the speed the transition demands. Democratic institutions move slowly. Legislative processes are captured by the interests of the powerful. Regulatory frameworks lag behind the technologies they are meant to govern. The gap between the speed of AI capability and the speed of institutional response is not closing but widening.
This objection is empirically accurate. The gap is real. The institutions are inadequate. The legislative process is, in most democracies, struggling to keep pace with a technological transformation that moves at a speed no previous transformation has matched.
But the accuracy of the objection does not diminish the obligation. The difficulty of building just institutions does not reduce the requirement to build them. It increases the urgency. And it places a corresponding burden on the actors who possess the capacity to act faster than democratic institutions — the technology companies themselves, the investors who fund them, the professional communities that develop norms and standards, the educational institutions that shape the next generation's capabilities.
Rawls recognized that the design of just institutions is not a one-time act but an ongoing process. He described the work of justice as occurring in stages — the constitutional stage, the legislative stage, the judicial stage — each requiring the application of the principles of justice to increasingly specific circumstances. The work never terminates. The institutions must be continuously evaluated, adjusted, and reformed as circumstances change. The beaver does not build one dam and walk away. The reflective equilibrium does not reach a final resting point.
This is the deepest and most demanding feature of Rawlsian justice. It is not a destination. It is a practice — the continuous, never-completed labor of evaluating institutional arrangements against the requirements of fairness and adjusting them when they fall short. The AI transition makes this practice more urgent, more difficult, and more consequential than it has ever been, because the technology evolves faster than the institutions that govern it, and the costs of institutional failure compound at the same accelerating rate.
The process that Rawls called reflective equilibrium — the patient, iterative movement between principles and particular judgments, adjusting each in light of the other — is not merely a philosophical method. It is a description of the intellectual and moral work that the present moment demands. The silent middle that Segal describes in *The Orange Pill* — the people who hold the exhilaration and the loss in both hands, who feel both the power of the tools and the weight of their costs, who cannot find a clean narrative to offer because the truth does not fit into a clean narrative — is engaged in precisely this work. They hold the general principle that AI creates value alongside the particular judgment that the value is not fairly distributed, and they are trying to bring the two into coherence without abandoning either.
This process is uncomfortable. It does not produce the clear positions that social media rewards. It does not generate the confidence of the triumphalist or the moral clarity of the critic. It produces something more modest and more honest: the ongoing recognition that the principles are correct and the institutions are inadequate, that the gains are real and the costs are real, that justice requires both the expansion of capability and the protection of the vulnerable, and that holding both of these requirements simultaneously is not a failure of resolve but a description of what justice actually demands.
The reflective equilibrium, applied to the AI transition, produces not a stable set of conclusions but a stable set of questions. Are the institutions governing the transition designed to benefit the least advantaged? If not, what alternative institutions would? Are the liberties threatened by the transition being protected? If not, what institutional mechanisms would protect them? Is fair equality of opportunity being maintained? If not, what investments would restore it?
These questions do not have permanent answers. They have answers for the present moment, answers that must be revised as the technology evolves, as the effects become visible, as the costs compound or diminish, as new populations are affected. The work of justice is the work of asking these questions continuously and responding to the answers with institutional reform. The AI transition does not change this requirement. It accelerates it.
The failure of the current moment is not that the questions are unanswerable. It is that the questions are not being asked — not with the rigor that the Rawlsian framework demands, not from the perspective of the least advantaged, not with the institutional seriousness that the magnitude of the transition requires. The technology industry asks what can be built. The market asks what will be profitable. The policy community asks what can be regulated without impeding innovation. Almost no one asks: What would the people who bear the greatest costs choose, if they had the power to choose? What institutions would they design, if their perspective governed the design?
The veil of ignorance is the instrument that forces this question. It strips away the known positions — the builder's excitement, the investor's calculation, the regulator's caution — and replaces them with a single, devastating uncertainty: you might be the person who bears the greatest cost. And under that uncertainty, the rational choice is institutions that make the greatest cost as small as possible.
The institutions must be built. The work must continue. The questions must be asked, again and again, with each iteration of the technology and each revelation of its effects. This is not a flaw in the theory. It is a description of what justice demands of creatures who must organize their shared life under conditions of perpetual change and radical uncertainty about who will benefit and who will bear the cost.
---
What would a just society look like after the AI transition?
The question is constructive, and it requires the full apparatus of Rawlsian theory — the two principles, in their lexical ordering, applied to the specific circumstances of a society transformed by artificial intelligence. The answer is not a utopia. Rawls was wary of utopian thinking untethered from feasibility; what he sought, in his later work, was a "realistic utopia," demanding but attainable. The answer here is in that spirit: a set of institutional conditions — realistic, achievable, demanding — that satisfy the requirements of justice as fairness in a world where the machines have entered the river and the current has changed.
The first condition is the protection of basic liberties under circumstances that the framers of existing constitutional orders could not have imagined. Freedom of thought, in a just society after the AI transition, means not merely the absence of censorship but the presence of institutional protections for the cognitive conditions under which genuine thinking is possible. This includes regulation of the attention-capture mechanisms that degrade the capacity for sustained reflection — not prohibition of the platforms, but structural requirements that their design serve the conditions of free thought rather than undermine them. It includes protections for the autonomy of individuals in their interactions with AI systems — the right to know when one is interacting with an AI, the right to understand the principles governing algorithmic decisions that affect one's life, the right to opt out of automated processes when the stakes are high. It includes protections against the specific form of self-exploitation that Segal, following Han, identifies as the achievement subject's compulsion — institutional norms, reinforced by labor law, that protect the boundary between work and rest against the pressure of tools that make productive engagement possible at every hour.
The freedom to develop and use AI tools is protected. This liberty is not negotiable. It is part of the system of equal basic liberties that the first principle guarantees. The developer in Lagos and the student in Dhaka have the same right to build with AI as the engineer at Google. Restrictions on this liberty are permissible only to the extent necessary to maintain a system of equal liberties for all — which means that restrictions motivated purely by the desire to protect incumbent industries or existing hierarchies of expertise fail the test of the first principle.
The second condition is fair equality of opportunity — substantive, not merely formal. In a just society after the AI transition, the educational system has been redesigned to develop the capabilities that the new landscape rewards: judgment, integrative thinking, the capacity to formulate good questions, the ability to direct AI tools wisely, the taste and discernment that distinguish what deserves to exist from what merely can exist. This redesign is funded by the gains of the transition — through progressive taxation of AI-generated profits directed toward educational investment — and is designed from the perspective of the least advantaged students, not from the perspective of the educational institutions.
Infrastructure — digital connectivity, hardware access, language support — is distributed with attention to the communities that currently possess the least, not merely to the communities that generate the most demand. The gap between formal access and substantive capability is addressed through deliberate investment — investment that recognizes, as Sen argued, that what matters for justice is not what resources people possess but what they are able to do with those resources.
Retraining infrastructure exists for displaced workers — not as a symbolic gesture but as a genuine pathway, funded adequately, designed from the perspective of the workers it serves, available without the requirement that workers forgo income during the retraining period. The retraining prepares workers not for the specific technical skills that AI is commoditizing but for the higher-order capabilities that the new landscape rewards — the judgment, the integrative thinking, the questioning capacity that no tool can replace.
The third condition is the satisfaction of the difference principle. The gains of the AI transition are distributed through institutional mechanisms that direct a portion of the value toward the least advantaged. Progressive taxation of AI-generated profits funds the educational investment, the retraining infrastructure, and the community investment that justice requires. Data governance frameworks ensure that the value extracted from collective human knowledge — the training data that constitutes the raw material of AI models — returns, in some measure, to the communities whose knowledge was extracted. Labor protections ensure that the productivity gains enabled by AI are shared between capital and labor rather than captured entirely by capital.
The institutions governing this distribution are transparent — satisfying Rawls's publicity condition — so that citizens can see the principles governing the distribution, evaluate whether they are being followed, and hold institutions accountable when they are not. The opacity that currently characterizes the operation of AI systems within the basic structure — the algorithmic black boxes, the proprietary models, the terms of service that no one reads — is replaced by institutional requirements for transparency sufficient to enable meaningful public oversight.
The fourth condition is institutional mechanisms for ongoing adjustment. The just society after the AI transition is not a fixed state. It is a dynamic arrangement, continuously evaluated and reformed as the technology evolves and its effects become visible. Regulatory frameworks include built-in review mechanisms — automatic triggers for re-evaluation when specified thresholds are crossed. Advisory bodies with genuine independence and deep expertise monitor the effects of AI on the basic structure and recommend adjustments before the costs compound. The reflective equilibrium that Rawls described as the method of justice is institutionalized — built into the governance structure as a permanent feature, not as an occasional intervention.
This is not a utopia. It is a set of achievable institutional conditions that follow, with logical rigor, from the principles that rational people would choose behind the veil of ignorance. Every element has precedent in the institutional responses to previous technological transitions. Progressive taxation exists. Retraining programs exist. Educational reform has been accomplished, in various forms, in various nations. Data governance frameworks are being developed. Regulatory review mechanisms are standard features of governance in many domains. The novelty lies not in the type of institution but in the coherence of the framework — the insistence that these institutions be designed together, evaluated together, and reformed together, in light of a single standard: the benefit of the least advantaged.
The distance between this vision and the current reality is the distance between justice and its absence. The current institutional framework governing the AI transition was not designed behind any veil of ignorance. It was designed by the parties who benefit most from the transition, reflects their interests, and externalizes costs to the parties who benefit least. The framework fails every test that Rawlsian justice applies: the difference principle, fair equality of opportunity, the publicity condition, the protection of basic liberties against the specific threats that AI poses.
This failure is not inevitable. It is a choice — a choice made daily, in boardrooms and legislatures and classrooms and homes, by people who could choose differently. The difference principle does not demand perfection. It demands improvement — the continuous reduction of the gap between the institutional framework that justice requires and the institutional framework that currently exists. Every step that narrows the gap is a step toward justice. Every choice that widens it, or that maintains the status quo in the face of evidence that alternative arrangements would better serve the least advantaged, is a choice for which the choosers bear moral responsibility.
Rawls was not an optimist in the conventional sense. He did not believe that history bends naturally toward justice, that technological progress automatically produces moral progress, or that the market, left to its own dynamics, distributes gains fairly. He believed that justice is an achievement — the product of deliberate institutional design, maintained through continuous effort, always at risk of erosion when the effort is relaxed. The just society is not the inevitable outcome of the AI transition. It is the possible outcome, the achievable outcome, the outcome that requires the same kind of deliberate, sustained, never-completed institutional labor that every just arrangement in human history has required.
The institutions must be built. They must be maintained. They must be reformed as circumstances change and as the effects of the transition become visible. The work is demanding. It is also, in Rawls's framework, non-optional. Justice requires it. The principles that rational people would choose behind the veil of ignorance demand it. And the people who bear the greatest costs of the transition — the least advantaged, the displaced, the unprepared, the communities whose foundations are being restructured without their consent — deserve institutions that no rational person, ignorant of their own position, would have reason to reject.
The standard is high. It has always been high. The dignity of the standard is that it refuses to lower itself to accommodate the convenience of the powerful. The AI transition is the most consequential restructuring of the basic structure of society since industrialization. The principles of justice that govern it are the same principles that have always governed the design of just institutions. They were not designed for this moment. They were designed for every moment in which human beings must choose how to organize their shared life under conditions of inequality and uncertainty. This is such a moment. The principles apply. The institutions must be built. The work, as always, never ends.
---
The word that reoriented everything for me was not "intelligence" or "amplification" or even "justice." It was "position."
I had been thinking about the AI transition the way builders think — from the inside out, from the excitement of what the tools can do, from the concrete reality of watching twenty engineers in Trivandrum produce what would have taken a hundred. I was measuring capability. I was counting the gains. I was building dams in the river and feeling the satisfaction that comes with building.
Rawls asked a question I had not considered: What position are you measuring from?
Not what position do you occupy — I know that answer. I am the builder, the one with access, the one whose skills happen to align with this particular moment. The question is more radical than that. What if you did not know your position? What if you had to design the rules of this transition without knowing whether you would be the person whose capability is amplified twenty-fold or the person whose career is dissolved in an afternoon? What if you might be the data labeler in Nairobi whose work trains the model you celebrate? What if you might be the twelve-year-old lying in bed asking her mother what she is for?
I wrote in *The Orange Pill* that the question AI forces us to ask is "Are you worth amplifying?" Rawls made me realize there is a prior question: Who decides? And a question prior to that: Under what conditions would the decision be fair?
The difference principle — inequalities are just only if they benefit the least advantaged — sounds abstract until you sit with it in the context of what I have watched happen. I kept the team. I chose to grow rather than cut. But that was my choice, made from my position, with my values. The difference principle does not rely on my values. It does not rely on any individual's goodness. It relies on institutions — structures that produce just outcomes regardless of whether the people operating within them happen to be generous.
That is harder than building. Building is what I know. Institutional design at the scale Rawls demands — that is the work I have not been doing. That is the work almost no one in the technology industry has been doing. We build the tools. We celebrate the gains. We acknowledge the costs in passing. We do not design the structures that would ensure the costs are borne fairly.
The five-stage pattern I described — threshold, exhilaration, resistance, adaptation, expansion — placed us in Stage Four: adaptation. Rawls made me see that adaptation without justice is not adaptation. It is consolidation — the calcification of arrangements that favor the people who happened to be in the right position when the threshold was crossed.
I do not know how to build all the institutions Rawls's framework demands. I know how to build dams. I know how to tend them. But the institutional architecture that would satisfy the difference principle at the scale of the AI transition — progressive taxation designed for AI-generated wealth, retraining infrastructure designed from the perspective of the least advantaged, educational systems redesigned for judgment rather than execution, data governance that returns value to the communities whose knowledge was extracted — this is work that exceeds any single builder's capacity.
What it does not exceed is any single builder's obligation. The awareness is the beginning. The position behind the veil — the imaginative act of not knowing whether you will be the one who gains or the one who bears the cost — is something any person can undertake. And once undertaken, it does not let you go. It follows you into the boardroom where the headcount question returns every quarter. It follows you to the dinner table where your son asks whether AI will take everyone's jobs. It follows you to the screen at three in the morning where the flow state has tipped into compulsion and you cannot tell whether you are building something or being consumed by it.
Justice is not a destination. It is the practice of asking, again and again, whether the arrangements governing our shared life would be chosen by people who did not know which life would be theirs.
I have taken the orange pill. Rawls made me see that the pill comes with a responsibility I had not fully reckoned with — not just to build, not just to tend the dam, but to ask whether the dam is in the right place, serving the right people, built to a standard that no rational person, uncertain of their fate, would have reason to reject.
The work never ends. That is not a burden. It is what justice looks like in practice.
-- Edo Segal
AI's twenty-fold productivity multiplier is real. So is the trillion dollars that shifted in eight weeks. The question no one in technology is asking is the one that matters most: Would the rules governing this transition survive a fairness test designed by people who didn't know whether they'd capture the gains or bear the costs?
John Rawls built the most rigorous framework for answering that question. His difference principle -- inequalities are just only when they benefit those who benefit least -- turns the triumphalist narrative inside out. Applied to the AI moment, it reveals an uncomfortable truth: the current arrangements fail. Not because the technology is harmful, but because the institutions surrounding it were designed by the people who stood to gain most.
This book applies Rawls's framework to the revolution unfolding now -- and asks what just institutions would actually look like while there is still time to build them.

A reading-companion catalog of the 25 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *John Rawls — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →