A polycentric system is one in which multiple centers of authority coexist, overlap, and interact without any single center exercising comprehensive control. It is neither anarchy nor hierarchy but something more complex: governance that emerges from the interaction of multiple, partially autonomous, partially overlapping centers of decision-making. The term was coined in 1961 by Vincent Ostrom, writing with Charles Tiebout and Robert Warren; Elinor Ostrom later developed it into a comprehensive framework applicable across an extraordinary range of institutions.
There is a parallel reading that begins from the political economy of distributed power rather than its institutional mechanics. While the Ostroms documented the superiority of polycentric arrangements in the contexts they studied (metropolitan water districts, fisheries, irrigation systems), AI governance operates in a fundamentally different substrate: one where the primary resources of computational power, training data, and technical expertise all exhibit extreme concentration. The polycentricity we observe in AI governance is not the organic distribution of authority among roughly equal stakeholders managing shared resources, but the fragmentation of regulatory response to concentrated private power.
The coordination failures documented in the 2025 study discussed below may be better understood as features, not bugs. When governance is distributed across multiple centers while the governed activity remains concentrated in a handful of corporations, each governance center becomes a potential point of capture. The forums, conflict-resolution mechanisms, and coordination processes that the entry advocates would multiply the surfaces available for regulatory arbitrage. Tech companies already exploit differences between jurisdictions: incorporating in Delaware, processing data in Ireland, deploying models globally. Adding more "coordination infrastructure" without addressing the underlying power asymmetry creates more venues in which the powerful can shape rules while appearing to submit to them. The polycentric ideal assumes rough parity among governance centers and rough distribution of the governed activity. In AI, we have neither. The result is not resilient distributed governance but fragmented oversight of concentrated power, a condition that serves those who would prefer no effective governance at all.
The structural advantages are documented across Ostrom's comparative research. Resilience: distributed governance degrades gracefully rather than collapsing catastrophically when any single center fails. Adaptiveness: multiple centers experimenting with different approaches generate more information about what works than a single center implementing a single approach. Responsiveness to local conditions: multiple centers can hold the detailed knowledge of diverse local conditions that no single governance center could possess across the entire system. Democratic accountability: distributed governance allows participation at the scale most relevant to one's circumstances.
The AI governance landscape is already polycentric in fact: national governments, international bodies, corporations, professional communities, builder communities, and individual practitioners all govern aspects of AI. But it is largely uncoordinated. The failure is not the polycentricity; it is the absence of the institutional linkages that effective polycentricity requires. Information does not flow effectively between centers. Rules developed at one level are not calibrated to conditions at other levels. The feedback that would allow learning across centers is interrupted.
A 2025 study in Global Public Policy and Governance, applying Ostrom's framework to AI governance in the US, China, and the EU, found that the documented failures were predominantly failures of coordination rather than of capacity. Each jurisdiction had developed governance arrangements of varying sophistication. The breakdowns occurred at the interfaces between them.
Moving from uncoordinated to coordinated polycentricity requires specific infrastructure: forums for communication between governance centers, mechanisms for resolving conflicts between incompatible arrangements, processes for mutual learning, and frameworks that coordinate through shared parameters rather than centralized mandates.
The concept emerged from Vincent Ostrom's analysis of metropolitan governance, which challenged the conventional assumption that metropolitan areas were chaotically fragmented among too many overlapping jurisdictions and would benefit from consolidation. The Ostrom-Tiebout-Warren analysis showed that multiple overlapping jurisdictions produced better outcomes on most measures than consolidation would. Elinor Ostrom extended the framework to common-pool resources and eventually to global-scale governance challenges, including climate and, by implication, AI.
Neither anarchy nor hierarchy. Polycentric systems distribute authority across multiple centers without a single apex.
Structural advantages. Resilience, adaptiveness, local responsiveness, and democratic accountability emerge from distributed governance.
Already the AI reality. AI governance is already polycentric in fact; the failure is in coordination, not in structure.
Coordination infrastructure required. Communication forums, conflict-resolution mechanisms, and mutual learning processes must be built to complete the polycentric architecture.
The right governance architecture depends fundamentally on which aspect of AI we're examining. For technical standards and safety protocols, the Ostromian view dominates (80/20): experimentation by multiple research labs, companies, and standards bodies generates more safety knowledge than any single authority could produce, and the contrarian concern about regulatory arbitrage applies less here because safety failures harm the developers themselves. For labor displacement and economic disruption, the capture-dynamics reading proves more relevant (70/30): the concentrated power of AI companies to reshape entire industries does create asymmetric bargaining positions that fragmented governance cannot effectively address.
The question of coordination infrastructure reveals the sharpest divergence. If we're asking "what enables effective governance?", Ostrom's framework correctly identifies the need for forums, conflict resolution, and mutual learning (90/10). But if we're asking "who controls these coordination mechanisms?", the contrarian view accurately highlights how such infrastructure becomes another surface for capture (20/80). The 2025 study's finding of "coordination failures rather than capacity failures" can be read both ways: as evidence that we need better linkages (Ostrom) or as evidence that powerful actors benefit from maintaining poor linkages (contrarian).
The synthesis emerges when we recognize that polycentric governance and concentrated power are not incompatible but rather describe different layers of the same system. AI governance may indeed require polycentric structures precisely because the underlying technology is so concentrated — not as an ideal match of governance to governed, but as a necessary counterweight. The task is not choosing between these readings but designing coordination infrastructure that acknowledges both the benefits of distributed experimentation and the realities of concentrated power. This means transparency requirements, participation guarantees, and rotation mechanisms that prevent any actor — public or private — from controlling the coordination layer itself.