AI governance, viewed through Ostrom's institutional lens, is neither a market question nor a regulatory question but a polycentric governance challenge. The current debate oscillates between advocates of market solutions (let companies compete, let innovation proceed unimpeded) and advocates of state solutions (regulate AI development, establish government oversight bodies). This oscillation reproduces the conceptual error Ostrom spent her career dismantling: the assumption that market and state exhaust the available institutional possibilities.
There is a parallel reading that begins from the physical reality of AI infrastructure rather than its governance abstractions. The AI systems we seek to govern are not village irrigation networks or forest commons; they are planet-scale computational systems requiring billions in capital investment, rare earth minerals from specific geographic locations, and electricity consumption that rivals that of small nations. The companies that control these material prerequisites (NVIDIA's chip designs and TSMC's fabrication capacity, Amazon's data centers, China's mineral processing) possess veto power over any governance arrangement, regardless of how beautifully polycentric its design.
The Ostromian framework assumes a certain relationship between resource and community that AI inverts. In traditional commons, the resource exists independently of governance: the forest precedes the forestry association. But AI systems are created by the very entities we seek to govern. OpenAI, Anthropic, and DeepMind are not communities discovering how to manage a pre-existing resource; they are resource creators who define what the resource is, how it operates, and who can access it. When Microsoft invests $13 billion in OpenAI, it purchases not just equity but architectural control over what governance is even possible.

The community of AI practitioners that Ostrom's framework would empower exists downstream of infrastructure decisions already made. Its members can govern their use of AI tools, perhaps, but not the tools' fundamental nature, training data, or deployment constraints. The informational advantages Ostrom identifies work in reverse here: the companies building these systems hold such profound informational advantages over outsiders that external governance, whether by state or community, operates in permanent darkness about capabilities, risks, and development trajectories.
Ostrom's research demonstrated that between market and state lies a vast institutional landscape of self-governing arrangements, community-based management systems, polycentric governance structures, and hybrid institutional forms that combine elements of public, private, and communal governance in configurations that neither paradigm can adequately describe. The intelligence commons presents collective-action problems that Ostrom's framework illuminates with particular clarity.
Community governance has four structural advantages that neither market nor state governance possesses. Informational advantages: practitioners who work with AI tools daily know things about the resource that no external monitor can observe. Motivational advantages: people who bear the consequences of governance failure have the strongest incentives to get governance right. Adaptive advantages: governance decisions can be modified quickly, without the delays inherent in centralized regulatory processes. Legitimacy advantages: rules that emerge from collective deliberation within the community command greater compliance than rules imposed from outside.
This is not to suggest community governance is sufficient alone. Ostrom was no anarchist. State authority is necessary for antitrust enforcement, international coordination, and legal protection of community governance arrangements against corporate override. Market mechanisms are essential for efficient resource allocation and innovation incentives. The argument is not that community governance replaces market and state governance. The argument is that it occupies institutional space neither market nor state can adequately fill, and that ignoring this space produces governance arrangements systematically less effective than those incorporating all three mechanisms.
A 2025 study in Global Public Policy and Governance applying Ostrom's framework to AI governance among the US, China, and the EU found that the documented governance failures were predominantly coordination failures rather than capacity failures. The researchers concluded that a polycentric multilevel arrangement of governance mechanisms would be more effective than any single centralized mechanism, provided that the arrangement included the coordination infrastructure that polycentricity requires.
The Ostromian reading of AI governance emerged as scholars at the Ostrom Workshop and related research programs began applying the IAD framework and eight design principles to the AI domain, particularly as the limitations of the market-versus-state binary became analytically apparent.
False binary. Market and state do not exhaust the institutional possibilities; community self-governance is a viable third option.
Four structural advantages. Community governance brings informational, motivational, adaptive, and legitimacy advantages.
Not a replacement. Community governance complements rather than replaces market and state arrangements.
Coordination failure diagnosis. Current AI governance failures are predominantly coordination failures between existing governance centers, not capacity failures within any single center.
The validity of each perspective depends crucially on which layer of the AI stack we examine. At the application layer (how organizations use ChatGPT, how communities establish norms around AI-generated content) the Ostromian reading largely holds. Here, communities do possess informational advantages, can establish meaningful rules, and achieve legitimate governance through collective choice. The contrarian view has little purchase at this scale; communities can and do govern their AI use effectively.
At the infrastructure layer, the analysis inverts completely. The contrarian reading captures the reality of compute allocation, model training, and foundational research. The $100 billion capital requirements for frontier-scale compute, the concentration of GPU manufacturing, and the electricity demands create power asymmetries that no amount of polycentric coordination can overcome. Ostrom's framework assumes rough equality among governance participants, but when OpenAI alone consumes more compute than most nations possess, that assumption collapses.
The synthetic frame that emerges recognizes AI governance as fundamentally bifurcated by scale. Below a certain threshold — roughly where capital requirements stay under $10 million and compute needs can be met by public clouds — Ostromian governance thrives. Communities develop effective norms, share resources, and solve collective action problems. Above that threshold, we enter a domain where only states and megacorporations operate, where community governance becomes decorative rather than determinative. The challenge is not choosing between Ostromian and infrastructural readings but mapping which domains each reading governs. Perhaps the meta-governance question is how to prevent the infrastructure layer from completely determining what's possible at the application layer — how to preserve spaces where community governance can function despite the gravitational pull of concentrated capital.