Rawls's defense of the publicity condition rested on several considerations. First, principles that cannot be publicly acknowledged will not receive the stable endorsement that just institutions require. Second, citizens who cannot see how institutions operate cannot form accurate beliefs about whether they are being treated justly. Third, institutions that operate opaquely undermine the conditions of self-respect by denying citizens the standing to evaluate the terms of their own cooperation. For these reasons, publicity is not merely instrumentally useful; it is part of what justice requires.
The AI industry's relationship with publicity is strained to the point of structural violation. The algorithms that increasingly shape the basic structure are protected as intellectual property and defended as competitive advantage. Even when outputs are observable, the reasoning that produced them typically is not. A candidate rejected by a hiring algorithm cannot see the principles governing the rejection. A user whose content is demoted by a recommendation algorithm cannot see the principles governing the demotion. A worker whose performance is evaluated by an AI-powered monitoring system cannot see the principles governing the evaluation. The opacity is not incidental; it is structural, built into the business models of the companies that deploy the systems.
Grace and Bamford's 2020 analysis of UK government AI through a Rawlsian lens quoted Rawls directly: "In a well-ordered society, one effectively regulated by a shared conception of justice, there is also a public understanding as to what is just and unjust." The requirement is not merely that principles be correct but that they be known — that citizens can see the principles operating, can evaluate whether they are being followed, can hold institutions accountable when they are not. The opacity of algorithmic decision-making violates this condition not accidentally but by design.
The practical implication is that a just basic structure for the AI transition requires institutional mechanisms for algorithmic accountability that the current regulatory environment does not provide. These mechanisms need not eliminate all competitive advantage or expose all proprietary reasoning; they must, however, provide sufficient transparency that citizens subjected to algorithmic decisions can understand the principles governing those decisions and can challenge those principles through democratic processes. The specific form of such mechanisms — algorithmic audits, right-to-explanation provisions, public documentation requirements, independent oversight bodies — is a matter for institutional design. The principle governing the design is the publicity condition.
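What such a mechanism might look like can be made concrete. The sketch below, in Python, shows one hypothetical form a right-to-explanation record could take: a machine-readable account of a single algorithmic decision that names the factors considered, their contributions, and the threshold applied. Every name in it (DecisionRecord, FactorWeight, and so on) is illustrative, drawn from no existing statute, standard, or library; it is one possible instantiation of the publicity condition at the level of an individual decision, not a proposal.

```python
# Hypothetical sketch only: no existing law or library defines these names.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class FactorWeight:
    """One factor the system considered in reaching its decision."""
    name: str        # e.g. "years_of_experience"
    value_used: str  # the input value as the system saw it
    weight: float    # signed contribution toward the final score


@dataclass(frozen=True)
class DecisionRecord:
    """A publishable record of one algorithmic decision.

    The aim is publicity in Rawls's sense: the decision subject can
    see which principles (factors, weights, thresholds) governed the
    outcome, and so has something concrete to contest.
    """
    subject_id: str            # pseudonymous identifier for the person affected
    system_version: str        # which model or ruleset produced the decision
    outcome: str               # e.g. "rejected", "demoted", "flagged"
    score: float               # the system's raw output for this subject
    decision_threshold: float  # the rule the score was compared against
    factors: list[FactorWeight] = field(default_factory=list)
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def explanation(self) -> str:
        """Render a plain-language summary for the decision subject."""
        lines = [
            f"Outcome: {self.outcome} (score {self.score:.2f} "
            f"vs. threshold {self.decision_threshold:.2f})"
        ]
        # List factors from most to least influential.
        for fw in sorted(self.factors, key=lambda x: -abs(x.weight)):
            direction = "raised" if fw.weight > 0 else "lowered"
            lines.append(
                f"- {fw.name} = {fw.value_used} {direction} the score "
                f"by {abs(fw.weight):.2f}"
            )
        return "\n".join(lines)


# Illustrative use: the rejected candidate from the hiring example above
# receives a record of this kind rather than a bare refusal.
record = DecisionRecord(
    subject_id="applicant-4821",
    system_version="screening-model-2.3",
    outcome="rejected",
    score=0.41,
    decision_threshold=0.60,
    factors=[
        FactorWeight("years_of_experience", "2", -0.12),
        FactorWeight("relevant_certification", "none", -0.07),
    ],
)
print(record.explanation())
```

A disclosure of this shape does not expose the model's internals or eliminate competitive advantage; it exposes the principles applied to one person. Whether those factors and thresholds are just then becomes a question citizens can actually ask, which is precisely what the publicity condition demands.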
Rawls introduced the publicity condition in A Theory of Justice (§23) as one of the formal constraints on principles of justice and developed it further in Political Liberalism (1993), where public reason became a central organizing concept. Publicity in the latter work took on additional weight as Rawls sought to specify the conditions under which citizens holding diverse comprehensive doctrines could nevertheless reach an overlapping consensus on political principles.
Constitutive, not decorative. Publicity is part of what justice requires, not a supplementary nice-to-have added after the substantive principles are fixed.
Two dimensions. Principles must be publicly known; operations must be publicly visible; both are required for the condition to be satisfied.
Structural opacity as structural injustice. Algorithmic systems that operate beyond the reach of citizen understanding violate the publicity condition regardless of the justice of their outputs.
Connection to self-respect. Citizens whose institutions operate opaquely are denied the standing to evaluate the terms of their own cooperation, eroding the social bases of self-respect.
Accountability infrastructure. A just basic structure for the AI transition requires institutional mechanisms for algorithmic accountability — audits, explanations, oversight — that the current regulatory environment does not provide.
The publicity condition has been contested on practical grounds (full transparency may be incompatible with certain legitimate interests in privacy, competitive advantage, and operational security) and on theoretical grounds (the distinction between the publicly knowable and the publicly known may not be as sharp as Rawls suggested). Rawls's responses emphasized that publicity admits of degrees and that different institutions require different levels of transparency depending on the stakes involved. The condition remains a demanding standard for AI deployment, and its violations in the current landscape are extensive enough to constitute one of the most significant tensions between the framework and actual practice.