The dynamist-stasist distinction, introduced in Virginia Postrel's The Future and Its Enemies (1998), identifies the fundamental political division of technological eras: between those who favor open-ended experimentation with emergent outcomes (dynamists) and those who favor stability, planning, and institutional direction (stasists). The typology cuts across traditional ideology; both left and right contain dynamists and stasists. What distinguishes them is orientation toward change: dynamists trust decentralized processes, tolerate failure, and prize diversity; stasists fear uncontrolled outcomes, demand coordination, and prioritize prevention of harm. The framework has proven remarkably durable, structuring debates over internet governance, biotech regulation, financial innovation, and now AI policy with a precision that conventional left-right analysis cannot match.
Dynamism is not libertarianism. Postrel's dynamists include not only market advocates but also open-source developers, participatory governance designers, and anyone who believes complex systems are better steered by distributed experimentation than by central planning. The unifying feature is epistemological humility: the recognition that the knowledge required to direct complex change is too dispersed and too rapidly evolving for any central authority to possess. Dynamists do not claim markets are perfect; they claim centralized alternatives are worse at managing the specific challenges of rapid, unpredictable, knowledge-intensive change.
Stasism is not conservatism, though some conservatives are stasists. Postrel's stasists divide into reactionaries, who want change stopped, and technocrats, who want it managed; it is the technocratic variety that dominates regulation, favoring planning over emergence, coordination over competition, and prevention over adaptation. The EU AI Act is stasist in structure: it establishes risk categories, compliance requirements, and pre-deployment assessments. The approach is coherent, well-intentioned, and, in Postrel's framework, dangerous, because it concentrates authority in institutions that cannot adapt as fast as the technology evolves. Stasist governance produces rigidity traps: systems optimized to prevent one category of harm become unable to respond when harm arrives in unexpected forms.
Helen Toner's May 2025 application of the framework to AI safety discourse is its most explicit extension to date. She identified stasist assumptions within the AI safety community: that fewer leading AI projects would be safer, that development should be concentrated in government-supervised labs, and that nonproliferation (preventing the spread of frontier capabilities) is the path to security. Toner argued that these positions, however well-intentioned, produce stasist risks: concentration creates single points of failure, eliminates competitive pressure for safety innovation, and removes the distributed testing that reveals problems central plans miss. Her alternative was dynamist: open models, broad distribution, and diverse experimentation under conditions that make failures informative rather than catastrophic.
The dynamist prescription for AI is not laissez-faire but active investment in the conditions that make decentralized adaptation successful: education that develops aesthetic and evaluative judgment, labor-market structures that support transitions, cultural norms that reward quality over quantity. These are dams in Segal's metaphor: not restrictions on technology but reinforcements of human capacity to direct it. The test that separates the two kinds of dam is whether it constrains the technology (stasist) or develops the people (dynamist). Both can redirect flows; only one builds adaptive capacity.
The framework emerged from Postrel's observation that political battles over technology, urban planning, and cultural change were producing strange coalitions. Environmentalists opposed environmental technologies. Safety advocates blocked safety innovations. The positions made no sense as left or right but perfect sense as stasist resistance to outcomes no one controlled. Postrel concluded that the meaningful political division was not ideological but temperamental: how people respond to change they did not authorize.
Her intellectual influences included Hayek on dispersed knowledge, Jane Jacobs on organic urbanism, Popper's open society, and evolutionary theory's reliance on variation and selection. She synthesized these into a political philosophy: openness to experimentation as the precondition for discovering what works, with institutional intervention reserved for demonstrated harms rather than hypothetical risks. The philosophy was contrarian in the 1990s; the AI moment has made it central to governance debates.
The fundamental axis is dynamism versus stasis. Political orientation toward technological change divides along openness to emergence versus preference for control, a dimension that cuts cleanly across left-right ideology.
Knowledge dispersion favors experimentation. Centralized planning fails in domains where critical information is distributed, tacit, and rapidly changing—which describes every knowledge-intensive technology including AI.
Failure is information, not the enemy. Dynamists treat experiments that fail as valuable signals; stasists treat them as harms to be prevented, producing systems that learn slowly or not at all.
Institutional investment over technological restriction. The dynamist response to powerful technology is strengthening human capacity to navigate change rather than constraining what the technology can do.