Helen Toner's May 2025 essay explicitly applied the framework of Virginia Postrel's The Future and Its Enemies to AI safety discourse, arguing that influential positions within the AI safety community were 'what Postrel would call stasist': prioritizing control and stability over dynamism's freedom and exploration. Toner identified specific stasist assumptions: that fewer leading AI projects would be safer, that development should be concentrated in government-supervised labs, and that nonproliferation (preventing the spread of frontier capabilities) was the path to security. She argued that these positions, however well-intentioned, produce stasist risks: concentration creates single points of failure, eliminates competitive pressure for safety innovation, and removes the distributed testing that reveals problems centralized plans miss. Her alternative was dynamist: open models enabling decentralized use, testing, and research; broad capability distribution under frameworks that make failures informative; competitive pressure driving safety improvement. The essay became the most cited application of Postrel's political framework to AI governance, validating her 1998 prediction that the dynamist-stasist axis would structure technological debates more clearly than left-right ideology.
Toner wrote from a unique position: a former OpenAI board member involved in the organization's November 2023 governance crisis, a researcher at Georgetown's Center for Security and Emerging Technology, and someone who had been inside concentrated AI development and seen its vulnerabilities firsthand. Her credibility made the dynamist argument impossible to dismiss as libertarian ideology: this was someone who understood the risks, had worked on safety professionally, and concluded that the stasist path was dangerous.
The essay's reception split along predictable lines. AI safety researchers committed to concentration and oversight objected that Toner was underweighting catastrophic risks, that open models could enable bad actors, that some capabilities should not be widely distributed. Open-source advocates and dynamist-inclined technologists embraced the analysis as vindication. The split confirmed Postrel's framework: the division was not about data or expertise but about temperamental orientation toward change—stasists saw uncontrolled outcomes as the primary danger; dynamists saw controlled concentration as equally or more dangerous.
Toner's argument gained empirical support from the Software Death Cross and the SaaSpocalypse. Concentrated AI development in a few companies produced systems whose safety characteristics were opaque to external researchers, whose deployment decisions were made without democratic input, and whose failure modes (when they emerged) could not be independently investigated because the models were proprietary. The stasist dream—concentration enabling oversight—produced concentration enabling capture. Distributed alternatives (open models, independent researchers, decentralized testing) revealed more problems, faster, because transparency and competitive pressure outperformed closed supervision.
The essay also exposed a tension within Effective Altruism, the intellectual movement that had incubated much of the AI safety concern. EA's consequentialist framework could justify either dynamist or stasist governance, depending on risk assessment. Toner's dynamist reading represented EA's empiricist wing: the faction that updates on evidence, learns from governance failures, and treats centralization's risks as seriously as decentralization's. The debate remains unresolved, which makes Postrel's framework the sharpest vocabulary available for tracking it.
Toner's essay emerged from her own intellectual journey from stasist to dynamist positions on AI governance. Her early work emphasized concentrated development under oversight. Her experience at OpenAI and subsequent research shifted her view: she saw that concentration produced insularity, that oversight was captured by the supervised, that the problems most threatening safety were the ones centralized labs did not anticipate because no one was testing in the diverse conditions where systems would actually be deployed.
The explicit use of Postrel's framework was strategic. Toner needed vocabulary that was not left-right coded, that could explain why intelligent people of goodwill reached contradictory governance conclusions, and that had empirical grounding in how technological transitions actually play out. Postrel provided all three: a political typology orthogonal to conventional ideology, an explanation grounded in temperament and epistemology, and a historical record of stasist governance failures.
The AI safety community has stasist tendencies. Influential positions favor concentration (fewer projects), oversight (government supervision), and nonproliferation (restricting capability spread): classic stasist responses to powerful technology.
Stasist governance produces stasist risks. Concentration creates single points of failure; oversight enables capture; nonproliferation eliminates the distributed testing, diverse use cases, and competitive pressure that reveal the problems centralized plans miss.
Open models are a dynamist safety strategy. Broad distribution enables decentralized research, transparent evaluation, and rapid problem discovery: safety through diversity rather than safety through control.
Empirical learning favors dynamism. The governance failures Toner observed (concentration producing insularity, oversight producing capture) were predictable from Postrel's framework; stasist approaches fail in domains where critical knowledge is distributed.