The hierarchist reading sees the professional as the carrier of quality. Decades of training, credentialing, peer review, and institutional socialization produce the judgment that distinguishes reliable work from plausible output. AI threatens this system at the root, because it produces plausible output without the judgment, and because it lowers the barrier between the credentialed practitioner and the informed amateur. The hierarchist is right that something is at stake here — the very architecture of how societies distinguish the expert from the charlatan — but the response often reduces to defending the gate rather than asking what the gate was for.
You On AI's observation that AI removes the translation cost between intention and artifact reads as threatening to the hierarchist for precisely this reason. If the non-credentialed developer can build what previously required the credentialed one, what does the credential mean? One answer — the hierarchist answer — is that the credential should be strengthened, its boundaries policed more carefully, its authority defended against the leveling tendency of the tool. Another answer — the one Wildavsky would have offered — is that the credential's meaning has changed, and the institution's task is to find the new boundary between judgment (still scarce) and execution (now abundant), rather than to defend the old one.
The contemporary AI safety establishment is substantially hierarchist in its cultural logic. International coordination, compute thresholds, mandatory evaluations, licensing of frontier models — these are hierarchist proposals, structured around the idea that a competent expert body can identify the relevant risks and impose appropriate controls. The egalitarian critique of this approach — that the expert body will be captured by the firms it regulates — is the characteristic friction between the two cultural positions.
Wildavsky's relationship to the hierarchist position was complex. He respected institutions and believed pluralism required their presence, but he was acutely aware of their failure modes: ossification, capture, the production of rules that protect the rulemakers rather than the public. His preferred institutions were those with strong feedback mechanisms — those that could fail visibly and correct quickly. An AI governance regime that meets this standard remains hypothetical; most current proposals lack the feedback mechanisms it would require.
The hierarchist position is the cultural home of bureaucratic institutions, professional associations, and regulatory agencies. Applied to technology, it produces the credentialing and oversight apparatus that governs pharmaceuticals, aviation, and financial services.
The hierarchist reading of AI is visible in the proposals of institutions like the EU's AI Office, the UK AI Safety Institute, and the internal governance teams at frontier AI labs. Its strongest intellectual defenders argue that AI is too powerful to leave to market dynamics or individual judgment, and that only coordinated institutional response is adequate to the scale of the technology.
- Order is the primary value. The risk is disorder — the breakdown of the systems that distinguish reliable from unreliable, expert from amateur.
- Credentials carry judgment. Professional training produces not just skill but the capacity to evaluate, and AI threatens to decouple the two.
- Expert institutions as remedy. Licensing, certification, and regulatory oversight are the characteristic responses.
- Coordination over competition. International coordination and standardization are preferred to market-driven discovery.
- The failure mode is capture. Hierarchist institutions ossify and are captured by the firms they regulate — the chronic vulnerability of the position.
The most consequential internal debate among hierarchists concerns which institutions should govern AI — national regulators, international bodies, professional associations, or the AI firms themselves through internal governance. Each answer produces a different political program and different vulnerabilities to capture.