Stasist AI governance describes policy frameworks that respond to AI's power through centralized institutional control: pre-deployment safety assessments, capability tier restrictions, licensing requirements for frontier models, government supervision of leading labs. The European Union's AI Act (2024), classifying systems by risk and mandating compliance, is paradigmatically stasist. Biden's October 2023 executive order, establishing reporting requirements and federal oversight, follows the same pattern. Postrel's framework diagnoses these as stasist not through ideological judgment (stasists can be well-intentioned, knowledgeable, correct about specific risks) but through structural prediction: centralized governance of rapidly evolving, knowledge-intensive technologies consistently fails because the information required to direct development is too dispersed, too tacit, and too fast-changing for any central authority to possess. Stasist governance produces rigidity: systems optimized to prevent known harms become unable to adapt when harms arrive in unexpected forms—which, in complex domains, they reliably do.
The stasist impulse is legitimate in origin. AI systems can cause harm: misinformation at scale, surveillance amplification, labor displacement faster than institutions can absorb it, potential existential risks from future capable systems. The stasist surveys these risks and concludes that development must be slowed, controlled, and concentrated in responsible hands subject to governmental oversight. The logic is internally coherent. Postrel's critique is not that the risks are imaginary but that the governance response makes the system more fragile, not less.
Helen Toner's May 2025 essay articulated the stasist trap within AI safety discourse. Proposals for fewer leading AI projects, concentrated development in government-supervised labs, and nonproliferation of frontier capabilities are all stasist positions that produce stasist vulnerabilities. Concentration creates single points of failure. Government supervision introduces political capture and bureaucratic lag. Nonproliferation eliminates the distributed testing, diverse use cases, and competitive pressure that reveal problems central plans miss. The attempt to make AI safe through control may thus make AI more dangerous by eliminating the adaptive capacity that decentralized systems provide.
Stasist governance also misallocates resources. Compliance frameworks require enormous organizational investment in documentation, assessments, legal reviews, and bureaucratic navigation. That investment flows to legibility (what can be measured and reported) rather than to genuine safety (what actually reduces harm). Smaller organizations and open-source projects, which lack compliance resources, are effectively excluded, concentrating development in large companies whose scale can absorb regulatory costs. The result is precisely the concentration that stasist governance was supposed to prevent, achieved through the very mechanism that was meant to distribute it.
The Postrelian alternative is dynamist governance: light-touch frameworks that enable rather than direct, investments in human capacity rather than restrictions on technological capability, and polycentric experimentation under conditions that make failures informative. In practice this means open models, broad distribution, diverse testing, and competitive pressure for safety innovation; institutional support for transitions (retraining, portable benefits, safety nets) rather than prevention of change; and cultural development of judgment (aesthetic education, critical thinking, evaluative capacity) rather than the assumption that experts will decide for others. The dynamist path is messier and produces more visible failures, and, if the empirical record of previous transitions is any guide, it outperforms centralized alternatives.
Postrel's stasist category emerged from observing political responses to internet governance, biotech regulation, and urban planning battles. In every case, a coalition formed around the conviction that powerful forces require authoritative direction: that markets, technologies, and cultural shifts left to themselves produce chaos, harm, or suboptimal outcomes requiring correction. The coalition was not ideologically homogeneous but temperamentally consistent: low tolerance for emergent disorder, high confidence in institutional capacity to plan, preference for prevention over adaptation.
The application to AI was inevitable. The same coalitions that formed around internet regulation have re-formed around AI with the same arguments: the technology is too powerful to be left uncontrolled, the companies developing it cannot be trusted, the public requires protection, expert oversight is essential. Postrel's framework does not dismiss these claims but contextualizes them: they are not empirical observations but expressions of stasist temperament, claims that may be correct in specific cases but are structurally inclined toward the governance failures that dynamist approaches avoid.
Four failure modes recur across stasist approaches.
Centralized control as structural vulnerability: concentration of AI development in supervised labs creates single points of failure, eliminates competitive pressure, and removes the distributed testing that reveals problems.
Compliance diverts resources from safety to legibility: regulatory frameworks reward what can be measured and reported, not necessarily what actually reduces harm, distorting organizational priorities toward documentation over substance.
Exclusion of small and open-source actors: compliance costs that large companies can absorb become barriers that exclude the decentralized experimentation stasist governance supposedly values, concentrating development in precisely the hands it aimed to constrain.
Rigidity prevents adaptation to unexpected harms: systems optimized to prevent known risks become unable to respond when risks arrive in unanticipated forms, the characteristic failure mode of stasist governance in complex domains.