The Veil of Ignorance — Orange Pill Wiki
CONCEPT

The Veil of Ignorance

Rawls's thought experiment requiring the design of just institutions from a position of radical ignorance about one's own future place within them — the single most consequential methodological device in twentieth-century political philosophy.

The veil of ignorance is the central instrument of Rawls's theory of justice. It asks each participant in institutional design to choose the rules of society without knowing which position they will occupy once the rules take effect — rich or poor, talented or ordinary, able-bodied or not, majority or minority, early adopter or displaced worker. The veil is not a description of reality but a method for generating impartial principles: the conditions under which rational self-interest and fairness converge. What emerges from behind it is not a single best answer but a standard of justification — arrangements that no rational person could reasonably reject. Applied to the AI transition, the veil forces every participant to confront the possibility that they might be the one whose expertise is commoditized rather than amplified, and to design accordingly.

In the AI Story

[Hedcut illustration: The Veil of Ignorance]

The veil emerged from Rawls's decades-long engagement with the social contract tradition — the lineage running through Hobbes, Locke, Rousseau, and Kant. Each of those thinkers had grounded political legitimacy in some form of hypothetical agreement, but each had been criticized for smuggling partial assumptions into the supposedly neutral starting point. Rawls's innovation was to strip the starting point of all particulars. The parties in the original position know the general facts about human societies — that resources are scarce, that people have different conceptions of the good, that institutions shape life prospects — but they do not know which life will be theirs.

The philosophical work done by this ignorance is considerable. It blocks every move toward rigging the system in one's own favor. A participant who knows she is talented has reason to favor meritocracy; a participant who knows she is disadvantaged has reason to favor redistribution. A participant who knows neither must find principles that both could accept. What Rawls discovered — and what a 2023 empirical study by Weidinger and colleagues at Google DeepMind subsequently confirmed with over 2,500 participants — is that people placed behind the veil reliably choose principles that protect the least advantaged.

The veil's application to AI is not obvious from Rawls's text. He deliberately excluded knowledge of technological development from the information available behind the veil, believing that principles of justice should hold regardless of technological circumstance. This exclusion is both the strength and the limitation of applying Rawlsian theory to AI. The principles are general enough to apply; the institutional specifications must be worked out at what Rawls called the legislative stage, where general principles meet particular conditions.

The discipline the veil imposes is the discipline of argument from a position no one actually occupies. Every real participant in the AI debate — the builder, the investor, the regulator, the displaced expert, the developer in Lagos — argues from a known position with known interests. The veil is the method by which these known positions are provisionally suspended so that the principles governing their shared arrangements can be evaluated impartially. Without it, institutional design is merely the projection of existing power into institutional form.

Origin

Rawls developed the veil of ignorance in the late 1950s and elaborated it across the 1960s, culminating in A Theory of Justice (1971). The concept drew on Kant's categorical imperative — the demand that principles be universalizable — and on the earlier contractarian tradition's invocation of hypothetical agreement. What distinguished Rawls's formulation was its methodological rigor: the veil was not a metaphor but a specific informational constraint designed to produce a determinate result.

The concept's reception was immediate and sustained. It reshaped political philosophy and influenced adjacent fields including economics, law, and — eventually — computer science and AI ethics. Iason Gabriel's 2022 paper on justice for AI and the 2023 PNAS study operationalizing the veil as experimental protocol represent the contemporary extension of Rawls's framework into the governance of systems he could not have anticipated.

Key Ideas

Radical informational constraint. The veil strips away knowledge of position, talent, and particular conception of the good, leaving only general knowledge about societies and the bare rationality of self-interested choice under uncertainty.

Maximin reasoning. Under radical uncertainty about one's own position, the rational strategy is to maximize the minimum — choose institutions that make the worst possible position as tolerable as possible, because you might occupy it.

Procedural justice. Whatever principles emerge from a fair procedure are just by definition; justice is constructed through the right method, not discovered as a pre-existing fact.

The separateness of persons. The veil refuses the utilitarian aggregation that treats individual gains and losses as fungible; each person lives one life and bears one set of costs.

Empirical vindication. The Weidinger study demonstrated that people actually placed in veil-like conditions reliably choose principles prioritizing the worst-off — the veil describes moral reasoning humans engage in when impartiality is structurally enforced.
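The maximin rule described above can be stated as a simple decision procedure: among candidate arrangements, pick the one whose worst position is best. A minimal sketch, with purely hypothetical payoff numbers chosen for illustration (they are not drawn from Rawls or the Weidinger study):

```python
# Each candidate arrangement lists payoffs for the social positions a
# chooser behind the veil might end up occupying. Numbers are hypothetical.
arrangements = {
    "laissez-faire": [90, 50, 10],          # high top, harsh bottom
    "strict-equality": [40, 40, 40],        # flat payoffs everywhere
    "difference-principle": [70, 55, 45],   # inequality that lifts the floor
}

def maximin_choice(options):
    """Return the arrangement whose worst-off position is best (maximin)."""
    return max(options, key=lambda name: min(options[name]))

print(maximin_choice(arrangements))  # -> difference-principle
```

Under these toy numbers the maximin chooser rejects both the highest-average arrangement and strict equality, selecting the one that makes the minimum payoff as large as possible, which is the structure of Rawls's difference principle.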

Debates & Critiques

Critics have challenged the veil from multiple directions. Communitarians argue it produces an impossibly abstract self, stripped of the particulars that make moral reasoning meaningful. Libertarians argue it smuggles in egalitarian assumptions disguised as procedural neutrality. Amartya Sen argued that focusing on the design of ideal institutions from behind the veil distracts from the comparative task of identifying and removing remediable injustices in actual societies. Each critique has force. None has displaced the veil as the most widely used instrument of impartial institutional evaluation in contemporary political philosophy.


Further reading

  1. John Rawls, A Theory of Justice (Harvard University Press, 1971; revised edition 1999)
  2. John Rawls, Justice as Fairness: A Restatement (Harvard University Press, 2001)
  3. Laura Weidinger et al., "Using the Veil of Ignorance to Align AI Systems with Principles of Justice," PNAS 120:18 (2023)
  4. Iason Gabriel, "Toward a Theory of Justice for Artificial Intelligence," Daedalus 151:2 (Spring 2022)
  5. Samuel Freeman, Rawls (Routledge, 2007)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.