Who gets to speak is the question Young's framework insists on placing first. The dominant AI policy discourse is organized around distributive questions: how should the economic gains from AI be shared? Should there be a universal basic income? Should AI companies pay a tax on displaced labor? Young's communicative democracy holds that these questions are structurally downstream. They accept the existing decision-making architecture and ask only how to redistribute its outputs. The upstream question — who is in the room where decisions are made, whose voices count as authoritative, whose perspective shapes the terms of the debate — is the procedural question that determines whether distributive outcomes can claim democratic legitimacy at all.
The current answer to the procedural question is stark. Decisions about AI development are made by a remarkably small and homogeneous group: predominantly male, predominantly white or East Asian, predominantly educated at a handful of elite universities, predominantly located in a handful of cities, and predominantly embedded in a corporate culture that treats speed and scale as unquestionable virtues. Decisions about AI governance are made by regulators lacking both technical expertise and political will. Decisions about AI adoption are made by managers facing competitive pressures without the mandate to consider structural consequences. The constituencies most affected, from displaced workers to eroding communities to the cultures whose traditions are absorbed into training datasets, have almost no voice in any of these decisions.
Young's framework identifies this procedural exclusion as structural injustice in its purest form. Not a villain. Not a conspiracy. Not a failure of individual morality. A structure — a confluence of institutional rules, economic incentives, cultural norms, and accumulated practices — that systematically produces unjust outcomes through the ordinary behavior of people acting in good faith. The remedy is not distributive but procedural: restructuring the decision-making architecture so that affected populations have binding authority over the processes that are reshaping their lives.
The question has additional force because AI is reshaping the economic, cultural, and political landscape at a speed and scale without precedent. Whatever distributive arrangement emerges will be the distributive arrangement that a tiny, structurally powerful elite designed for itself. Young's framework insists that such an arrangement cannot be legitimate regardless of its technical merits, because the process that produced it was structurally exclusionary. Legitimacy requires inclusion; inclusion requires not merely access but authority; authority requires institutional redesign. See differentiated representation.
The question is Young's operationalization of the deep democratic claim that animates her entire body of work: that legitimacy derives from the procedural inclusion of affected parties in decisions that shape their lives. The question's AI-specific urgency was sharpened by the early 2020s literature on AI governance, which systematically avoided procedural questions in favor of distributive and safety-oriented ones, a pattern Young's framework diagnoses as predictable and structurally inadequate.
Procedural before distributive. How decisions are made determines whether their outcomes can claim legitimacy.
The current answer is stark. A tiny, homogeneous elite decides for everyone.
Structural injustice in pure form. The exclusion operates through ordinary institutional processes, not individual malice.
Remedy is procedural. Redistribution without redesigning decision-making reproduces the injustice in a different form.
Legitimacy requires authority, not access. Consultation without binding power is performance, not inclusion.