The Publicity Condition — Orange Pill Wiki
CONCEPT

The Publicity Condition

Rawls's requirement that a just society must be one in which the principles of justice are publicly known, understood, and endorsed by citizens — and in which institutions can be seen to operate according to those principles.

The publicity condition is constitutive, not decorative. An arrangement that is just in substance but opaque in operation fails the publicity condition and is therefore, in Rawls's framework, not fully just. The requirement has two dimensions. The first is that the principles themselves must be publicly known: citizens must be able to articulate the principles under which their institutions operate. The second is that the operation of those institutions must be sufficiently transparent that citizens can evaluate whether the institutions actually follow the stated principles.

Both dimensions are threatened, in the AI transition, by the opacity of algorithmic decision-making. Hiring algorithms, credit-scoring algorithms, content recommendation algorithms, models that determine which workers are retained and which are displaced: these systems operate with a degree of opacity that would have troubled Rawls profoundly. Citizens cannot evaluate what they cannot see, cannot endorse principles they do not know, and cannot hold institutions accountable for standards they have never been told about.

In the AI Story


Rawls's defense of the publicity condition rested on several considerations. Principles that cannot be publicly acknowledged will not receive the stable endorsement that just institutions require. Citizens who cannot see how institutions operate cannot form accurate beliefs about whether they are being treated justly. Institutions that operate opaquely undermine the conditions of self-respect by denying citizens the standing to evaluate the terms of their own cooperation. For these reasons, publicity is not merely instrumentally useful; it is part of what justice requires.

The AI industry's relationship with publicity is strained to the point of structural violation. The algorithms that increasingly shape the basic structure are protected as intellectual property and defended as competitive advantage. Even when outputs are observable, the reasoning that produced them typically is not. A candidate rejected by a hiring algorithm cannot see the principles governing the rejection. A user whose content is demoted by a recommendation algorithm cannot see the principles governing the demotion. A worker whose performance is evaluated by an AI-powered monitoring system cannot see the principles governing the evaluation. The opacity is not incidental; it is structural, built into the business models of the companies that deploy the systems.

Grace and Bamford's 2020 analysis of UK government AI through a Rawlsian lens quoted Rawls directly: "In a well-ordered society, one effectively regulated by a shared conception of justice, there is also a public understanding as to what is just and unjust." The requirement is not merely that principles be correct but that they be known — that citizens can see the principles operating, can evaluate whether they are being followed, can hold institutions accountable when they are not. The opacity of algorithmic decision-making violates this condition not accidentally but by design.

The practical implication is that a just basic structure for the AI transition requires institutional mechanisms for algorithmic accountability that the current regulatory environment does not provide. These mechanisms need not eliminate all competitive advantage or expose all proprietary reasoning; they must, however, provide sufficient transparency that citizens subjected to algorithmic decisions can understand the principles governing those decisions and can challenge those principles through democratic processes. The specific form of such mechanisms — algorithmic audits, right-to-explanation provisions, public documentation requirements, independent oversight bodies — is a matter for institutional design. The principle governing the design is the publicity condition.

Origin

Rawls introduced the publicity condition in A Theory of Justice (§29) as one of the formal constraints on principles of justice and developed it further in Political Liberalism (1993), where public reason became a central organizing concept. Publicity in the latter work took on additional weight as Rawls sought to specify the conditions under which citizens holding diverse comprehensive doctrines could nevertheless reach an overlapping consensus on political principles.

Key Ideas

Constitutive, not decorative. Publicity is part of what justice requires, not a supplementary nice-to-have added after the substantive principles are fixed.

Two dimensions. Principles must be publicly known; operations must be publicly visible; both are required for the condition to be satisfied.

Structural opacity as structural injustice. Algorithmic systems that operate beyond the reach of citizen understanding violate the publicity condition regardless of the justice of their outputs.

Connection to self-respect. Citizens whose institutions operate opaquely are denied the standing to evaluate the terms of their own cooperation, eroding the social bases of self-respect.

Accountability infrastructure. A just basic structure for the AI transition requires institutional mechanisms for algorithmic accountability — audits, explanations, oversight — that the current regulatory environment does not provide.

Debates & Critiques

The publicity condition has been contested on practical grounds (full transparency may be incompatible with certain legitimate interests in privacy, competitive advantage, and operational security) and on theoretical grounds (the distinction between the publicly knowable and the publicly known may not be as sharp as Rawls suggested). Rawls's responses emphasized that publicity admits of degrees and that different institutions require different levels of transparency depending on the stakes involved. Even so, the condition remains a demanding standard for AI deployment, and its violations in the current landscape are extensive enough to constitute one of the most significant tensions between the framework and actual practice.


Further reading

  1. John Rawls, A Theory of Justice, §29
  2. John Rawls, Political Liberalism, Lecture VI
  3. Thomas Grace and Cheryl Bamford, "AI and Rawlsian Justice in the UK Public Sector" (2020)
  4. Frank Pasquale, The Black Box Society (Harvard, 2015)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.