The priesthood-and-people problem is the oldest fault line in democratic theory: a group acquires specialized knowledge giving it genuine power over collective life and claims authority on the basis of that knowledge. The claim is not fraudulent—the knowledge is real, the power effective, the expertise produces results. But the claim is democratically illegitimate because it substitutes competence for consent. Rosanvallon traces this pattern across democratic history: physicians who monopolized knowledge of the body, jurists who monopolized knowledge of law, central bankers who monopolized monetary policy. Each exercised genuine authority based on genuine competence, and each was eventually subjected to democratic accountability—not because the public became equally competent but because democracies invented mechanisms (medical boards with public members, judicial review, legislative oversight) translating expertise into accountability. The AI priesthood—builders who understand systems from inside—faces this crisis in its most acute form: the knowledge gap between those who build AI and those who live inside its effects is arguably the largest in democratic history.
The problem's sharpest historical expression was the Abbé Sieyès's 1789 pamphlet asking 'What is the Third Estate?' The aristocracy and clergy governed France; the common people had no formal power. Sieyès did not argue the people were competent to govern—he argued no one else had the right to govern without their consent. The distinction between competence and legitimacy restructured European civilization. Every subsequent democratic breakthrough has navigated this distinction: subjecting expertise-based authority to popular oversight without destroying the expertise. The task is never complete because new forms of expertise continuously emerge, each claiming authority, each requiring new institutional mechanisms for democratic accountability.
Edo Segal's The Orange Pill proposes what it explicitly calls a 'priesthood of attention'—technologists who understand AI systems and bear moral obligation to serve as stewards. The proposal is made in good faith. Segal's confession about building addictive products lends the priesthood argument confessional weight that pure advocacy would lack. He is not claiming priests are virtuous—he is claiming they are necessary, that someone must tend the dam, and those who understand the river are equipped to do it. The democratic problem is not that the argument is wrong but that it is incomplete in ways historical experience suggests will become dangerous.
Every priesthood in democratic history has justified its autonomy on identical grounds: the work is too complex for popular oversight, the stakes too high for amateur interference, and those who understand the system are better positioned to govern it than those who merely live inside its effects. Central bankers said this about monetary policy, nuclear engineers about reactor safety, intelligence agencies about national security. In every case the claim contained genuine truth—the work was complex, the stakes high, the experts did understand things the public did not. In every case, the autonomy that followed from the claim produced pathologies only democratic accountability could correct. The 2008 financial crisis was not caused by ignorant bankers but by brilliant bankers operating inside expertise-based autonomy insulated from counter-democratic vigilance and judgment.
Rosanvallon's insight is that the pathology is structural, not moral. It does not require corrupt priests—only autonomous ones. When a group exercises authority based on knowledge the governed do not share, and when mechanisms for holding that authority accountable are weak or absent, authority will drift—slowly, imperceptibly, with genuine good intentions—toward serving the group's interests rather than the public's. This is not conspiracy theory but institutional tendency, as reliable as gravity, counteracted only by institutions specifically designed to counteract it. The AI priesthood is subject to this tendency in its most acute form because the knowledge gap is unprecedented and the mechanisms for democratic accountability have not been built.
Competence is not consent. Expertise-based authority is real but democratically illegitimate unless subjected to institutional mechanisms translating specialized knowledge into popular accountability—the foundational democratic principle the AI transition has not yet operationalized.
Every priesthood drifts. When a group exercises authority based on knowledge the governed do not share, without mechanisms for accountability, that authority drifts toward self-serving logic—not through corruption but through an institutional tendency as reliable as gravity, requiring a designed counterweight.
Historical pattern of democratic response. Physicians, jurists, central bankers each claimed authority based on specialized knowledge and were eventually subjected to democratic oversight through institutional invention—medical boards, judicial review, legislative oversight—translating expertise into accountability.
The AI knowledge gap is unprecedented. Medieval peasants could watch the lord's soldiers and understand the power governing them; factory workers could see machines and comprehend the wage-determining system; citizens today cannot see training data, read model weights, audit inference, or evaluate alignment procedures.
Individual ethics are structurally insufficient. Segal's confession about building addictive products despite knowing the harm demonstrates that individual ethical awareness does not reliably constrain institutional behavior—the pressures overwhelm conscience, so institutional mechanisms must perform the function individual virtue cannot.