In Chapter 16 of The Orange Pill, Segal explicitly invokes the priesthood model: 'people with deep understanding of complex systems' who 'mediate between that domain and those who do not understand it.' The priest serves because he understands. His knowledge confers not just capability but obligation — to act responsibly, to consider downstream effects, to build structures that protect the ecosystem. Segal proposes the test of the priesthood: whether its members' actions make others more capable. Mouffe's framework accepts the diagnosis and presses the deeper question. The priesthood model solves the competence problem — decisions are made by those who understand — at the cost of the legitimacy problem. Who authorized the priests? The answer, in the technology industry, is: no one. The priests authorized themselves, through the self-reinforcing logic that understanding confers the right to decide.
There is a parallel reading that begins not from priesthood's self-authorization but from democracy's own legitimacy crisis. The premise that democratic process confers legitimate authority assumes functional democratic institutions, an assumption nowhere in evidence at the scale AI governance requires. What Mouffe frames as technocracy's usurpation might equally be read as democracy's abdication.
Consider the actual mechanisms through which 'democratic participation' would operate at the frontier of AI development. Parliamentary debate among representatives who understand neither transformers nor gradient descent. Public comment periods flooded by coordinated campaigns. Regulatory capture by whoever funds the longest lobbying effort. The demand for democratic governance of AI assumes a democracy that does not exist — one capable of rapid technical decision-making, resistant to manipulation, and operating at global scale. The priesthood model's flaw is not that it substitutes expertise for democracy. The flaw is that it pretends the choice exists. Every major technology transition — electricity, aviation, nuclear power, the internet — was governed primarily by technical expertise with democratic oversight emerging decades later, if at all. The pattern is not conspiracy but necessity. Speed of technical change outpaces speed of democratic consensus-building by orders of magnitude. Mouffe's accessible representations and participatory mechanisms are admirable in principle and structurally impossible in practice. By the time the public forums convene, the architecture is deployed. The priesthood serves badly not because it governs but because it pretends not to — cloaking technical necessity in the language of consultation while the fundamental decisions remain exactly where they began.
The distinction between competence and legitimacy is foundational. Under the priesthood model, engineers, researchers, and executives make decisions about how AI is developed, deployed, and regulated. They consult ethicists. They publish safety research. They establish internal review boards. All of these activities are genuine, and some are admirable. But the political structure remains unchanged: people with knowledge decide, and people without knowledge are governed by those decisions. Knowledge confers obligation in Mouffe's framework as well, but the obligation is not to decide wisely for others. It is to make knowledge accessible, creating the conditions under which non-experts can participate meaningfully in decisions about the systems that shape their lives.
The objection that most people do not understand AI well enough to participate meaningfully is empirically true and circular. People do not understand AI because the institutional structures that would make such understanding accessible do not exist. The priesthood has not built the educational infrastructure, public forums, accessible documentation, or participatory governance mechanisms that would enable non-expert engagement. It has built advisory boards, ethics panels, and safety teams that operate within the priesthood structure — experts advising other experts. The architecture of expertise is self-reinforcing, and it presents its own persistence as evidence of its necessity.
Danielle Arets, in a submission to the UN Office of the High Commissioner for Human Rights, argued from an explicitly Mouffean perspective that artistic practices could serve as the bridge between expert knowledge and democratic participation. Democratic participation does not require technical expertise; it requires accessible representations of what technologies do, what choices are embedded in their design, and what alternatives exist.
The priest who serves well is still a priest. The system that depends on the priest's wisdom has substituted expertise for democracy. The distinction matters enormously for the AI transition, which will reshape economic life, educational development, and political participation for billions. The transition deserves governance that is democratic in structure, not merely benevolent in intention.
The priesthood metaphor has a long history in political philosophy, from Plato's philosopher-kings through the Progressive Era's scientific management to contemporary technocracy. Segal deploys it self-critically; Mouffe uses it as a diagnostic term for the structural problem rather than as an aspiration to be realized more fully.
Competence without legitimacy. Expertise does not authorize governance; democratic process does.
Self-authorization is the signature move. The priesthood's claim to authority is circular — it derives from the understanding only the priesthood possesses.
Obligation is distributive, not substitutive. Knowledge creates an obligation to make knowledge accessible, not to decide on behalf of the uninformed.
Ethics panels are not democracy. Internal governance structures serve the priesthood's interests, however sincerely staffed.
The weighting here depends entirely on which question dominates at each decision point. On technical architecture — model design, safety protocols, capability thresholds — Segal's priesthood model is 85% correct. Decisions genuinely require deep understanding, and the consequences of incompetent intervention are catastrophic. Mouffe's critique applies primarily to the framing: expertise creates obligation but not authority. The priest who understands the system should inform, not decide alone.
On deployment decisions — what gets built, for whom, under what constraints — the weighting reverses to 70% Mouffe. These are fundamentally political choices about resource allocation and social priorities, dressed in technical language. The accessible representations Arets proposes are not naive; they are the basic infrastructure of informed consent. The counter-reading is correct that democratic process operates too slowly for technical governance, but this describes a coordination failure, not an argument for abandoning democratic legitimacy. The right frame is not priesthood versus democracy but nested authority: technical decisions by those with competence, under political constraints set through democratic process.
The synthesis the topic itself demands is temporal. Competence and consent operate at different speeds, and the structure must accommodate both. Rapid technical decisions happen within boundaries set more slowly through accessible public deliberation. The priesthood model fails not because experts govern but because it conflates all decisions into a single category requiring the same authority structure. Mouffe's framework succeeds by insisting on the distinction — but only if it accepts that some decisions genuinely cannot wait for consensus. The bridge is not eliminating expertise but making its exercise continuously accountable to democratically established constraints.