By Edo Segal
The commons I kept ignoring was the one I was building on.
Every day for months, I opened Claude and typed. I pulled from the entire history of human thought — philosophy, science, poetry, code — and I built. I built Napster Station. I built this book series. I built prototypes and strategies and arguments. I pulled and pulled and pulled, and not once did I ask what happens to the pool I was drawing from.
That failure of attention is not personal. It is structural. The entire AI discourse is structured around it. We talk about what the tools can do. We talk about who gets displaced. We talk about regulation and acceleration and whether the machines will take our jobs. We almost never talk about the shared resources that make any of it possible — the knowledge base, the skills pipeline, the trust that human-generated information means something, the institutional norms that keep quality from collapsing under the weight of infinite output.
Elinor Ostrom spent forty years studying what happens to shared resources when nobody governs them. The famous answer, courtesy of Garrett Hardin, was tragedy — inevitable destruction by rational self-interest. Ostrom proved Hardin wrong. She traveled the world documenting communities that had governed shared resources successfully for centuries. Swiss alpine villages. Japanese forests. Spanish irrigation systems. Turkish fisheries. None of them needed privatization. None of them needed the state to step in. They built their own governance — messy, local, adaptive, and durable.
She distilled what worked into eight design principles. Not commandments. Patterns. Regularities that appeared whenever communities kept their commons alive across generations.
I did not encounter Ostrom's work thinking about fisheries. I encountered it thinking about AI. Because the moment you stop seeing artificial intelligence as a product and start seeing it as a force flowing through shared resources — shared knowledge, shared skills, shared attention, shared trust — you realize we are all fishing the same waters. And nobody has agreed on catch limits.
This book applies Ostrom's institutional framework to the intelligence commons with the rigor her work demands. It is not a policy document. It is a lens — one that reveals the governance vacuum at the center of the AI transition and, more importantly, the evidence that the vacuum can be filled. Not by corporations. Not by governments alone. By the communities whose cognitive lives depend on what we build next.
The dam needs tending. Ostrom showed us the blueprints.
— Edo Segal × Opus 4.6
Elinor Ostrom (1933–2012) was an American political economist and the first woman to receive the Nobel Memorial Prize in Economic Sciences, awarded in 2009 for her analysis of economic governance, especially the commons. Born in Los Angeles during the Great Depression, she earned her Ph.D. in political science from UCLA in 1965 and spent the majority of her career at Indiana University, where she co-founded the Workshop in Political Theory and Policy Analysis with her husband, Vincent Ostrom. Her landmark work, Governing the Commons: The Evolution of Institutions for Collective Action (1990), challenged the prevailing assumption — rooted in Garrett Hardin's influential 1968 essay "The Tragedy of the Commons" — that shared resources inevitably face destruction without privatization or state control. Through decades of fieldwork spanning six continents, Ostrom documented hundreds of cases in which communities successfully self-governed common-pool resources, from fisheries and forests to irrigation systems and grazing lands. She distilled these findings into eight design principles that characterize enduring, effective commons governance. Her later work on polycentric governance — the idea that complex resource systems are best managed through multiple overlapping centers of authority rather than a single hierarchy — extended her framework to large-scale institutional challenges. Ostrom's research fundamentally expanded the institutional possibilities available to policymakers and communities, establishing a rigorous empirical alternative to the market-versus-state binary that had dominated governance thinking for most of the twentieth century.
Garrett Hardin published "The Tragedy of the Commons" in Science in 1968, and for the next three decades the essay functioned less as a scientific hypothesis than as a parable — a story so compelling in its logic that it displaced the need for evidence. The argument was elegant and devastating. Any resource held in common would inevitably be destroyed by the rational self-interest of its users. Each herder, calculating that the benefit of adding one more cow to the common pasture accrued entirely to himself while the cost of overgrazing was distributed among all herders, would add cow after cow until the pasture was ruined. Freedom in a commons, Hardin concluded, brings ruin to all. The only solutions were privatization or state control — Leviathan or the market, coercion or enclosure, with no third possibility available to communities of rational actors.
The parable became policy. Development agencies prescribed privatization for fisheries in the Global South. Governments nationalized forests. International bodies designed top-down regulatory regimes for oceanic resources. In each case, the logic was Hardin's: communities of users cannot be trusted to govern themselves. Rational self-interest, left unchecked, produces ruin. Someone — a property owner, a regulator, a sovereign — must impose order from outside.
Elinor Ostrom spent four decades demonstrating that Hardin was wrong. Not wrong in the way that a theorist is wrong when the evidence slightly contradicts the prediction, but wrong in the more fundamental way that a theorist is wrong when the entire framework misrepresents the phenomenon it purports to describe. Hardin assumed that the users of a common resource are trapped in a structure from which they cannot escape — that they are isolated actors in a one-shot game, unable to communicate, unable to make binding agreements, unable to observe and sanction each other's behavior. Ostrom showed, through fieldwork and comparative institutional analysis spanning six continents and several centuries, that communities around the world had successfully governed shared resources for generations, sometimes for millennia, without either privatization or state control. Swiss alpine villages had managed communal meadows since 1517. Japanese mountain communities had maintained forest commons across dynasties and wars. The huerta irrigation systems of Valencia, Spain, had been adjudicating water disputes since the medieval period with a speed and fairness that the Spanish court system could not approach.
The tragedy was not inevitable. It was a prediction that held only under a specific set of institutional conditions: the absence of communication, the inability to make binding agreements, the lack of monitoring and enforcement mechanisms. Change those conditions — allow the herders to talk to each other, to agree on stocking limits, to watch each other's behavior, to impose graduated penalties on violators — and the outcome changed. The tragedy was not a law of nature. It was a failure of institutional design.
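A toy model makes the institutional point concrete. The sketch below is purely illustrative, with every parameter invented for the example: it runs Hardin's pasture twice, once with isolated herders who add a cow whenever the private gain is positive, and once under an agreed stocking quota enforced by a graduated fine.

```python
# Toy model of Hardin's pasture. All numbers are illustrative
# assumptions, not empirical estimates.

CAPACITY = 100     # herd size at which the pasture's value collapses
BASE_VALUE = 1.0   # value of one cow on an ungrazed pasture

def cow_value(total_cows):
    """Per-cow value declines linearly as the pasture is overgrazed."""
    return max(0.0, BASE_VALUE * (1 - total_cows / CAPACITY))

def simulate(n_herders, rounds, quota=None, fine=0.0):
    """Each round, every herder adds a cow whenever it privately pays.

    With quota=None the herders cannot coordinate: Hardin's case.
    With a quota and a graduated fine, each cow over the limit costs
    its owner an escalating penalty: Ostrom's changed conditions.
    """
    herds = [0] * n_herders
    for _ in range(rounds):
        for i in range(n_herders):
            total = sum(herds)
            gain = cow_value(total + 1)  # private marginal benefit
            if quota is not None and herds[i] + 1 > quota:
                gain -= fine * (herds[i] + 1 - quota)  # graduated sanction
            if gain > 0:
                herds[i] += 1
    total = sum(herds)
    return total, round(total * cow_value(total), 2)

print(simulate(10, 50))                     # (99, 0.99): overgrazed, near-worthless
print(simulate(10, 50, quota=5, fine=0.5))  # (50, 25.0): the pasture keeps its value
```

The same ten self-interested herders produce ruin in one run and sustainability in the other. Nothing about their preferences changes between the runs; only the institution does.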
The implications extended far beyond the management of fisheries, forests, and irrigation canals. Ostrom had identified a structural blind spot in the two dominant paradigms of twentieth-century governance. The market paradigm held that private property and competitive exchange were the most efficient mechanisms for allocating resources. The state paradigm held that centralized authority was necessary to prevent the destructive consequences of unregulated individual action. Both paradigms agreed, implicitly, that these were the only two options. Ostrom demonstrated the existence of a third: self-governance by the community of users, through institutions that the community itself develops, monitors, and enforces.
This third possibility is not a theoretical curiosity. It is the space in which most of the world's common-pool resources have been successfully managed for most of human history. And it is the space in which the most urgent governance challenges of the artificial intelligence transition must be addressed — because the current discourse about AI governance is organized around precisely the binary that Ostrom spent her career dismantling.
On one side stand the advocates of market solutions: let companies compete, let innovation proceed unimpeded, let the invisible hand allocate AI capabilities to their highest-value uses. On the other stand the advocates of state solutions: regulate AI development, establish government oversight bodies, create legal frameworks to constrain corporate behavior. The debate oscillates between these poles with the predictability of a metronome, each side pointing to the failures of the other as evidence for its own position, neither side recognizing that the oscillation itself is the symptom of a shared conceptual error — the assumption that market and state exhaust the available institutional possibilities.
They do not. Ostrom's research demonstrated that between market and state lies a vast institutional landscape of self-governing arrangements, community-based management systems, polycentric governance structures, and hybrid institutional forms that combine elements of public, private, and communal governance in configurations that neither paradigm can adequately describe.
The AI transition presents collective-action problems of a kind that Ostrom's framework illuminates with particular clarity. The training data from which large language models learn constitutes, in an institutional-economic sense, a commons: the accumulated textual, visual, and computational output of millions of individuals, contributed without explicit governance arrangements to a shared pool from which value is extracted by a small number of firms. The conventional response follows Hardin's dichotomy. One camp argues for privatization — clear property rights over data, licensing regimes, compensation mechanisms. The other argues for state regulation — government-mandated data governance, algorithmic auditing, transparency requirements. As a scholar at Duke University's Sanford School of Public Policy recently observed, this framing treats AI governance as though the only question is whether the market or the state should control it, when the deeper question is whether the communities whose contributions constitute the resource might develop their own institutional arrangements for governing how that resource is used.
The structural parallels run deeper than the training-data question. The community of practitioners who build with AI tools — the population that The Orange Pill identifies as the emerging builder community — shares access to a set of resources that are neither fully private nor fully public: coding knowledge, design patterns, prompt strategies, problem-solving approaches, quality standards. These practitioners interact through a complex web of formal and informal institutions — open-source repositories, online forums, professional communities, organizational structures — that shape their behavior in ways that neither market incentives nor government regulations fully explain. And they face a set of collective-action problems — problems of quality maintenance, knowledge sharing, skill development, norm enforcement — that cannot be solved by individual action alone but that do not require centralized authority for their resolution.
This is the terrain of commons governance. And the evidence from Ostrom's research suggests that the communities best positioned to govern these challenges are the communities most directly affected by them — not because community governance is morally superior to market or state governance, but because it has structural advantages that the other two mechanisms lack. Community governance operates with informational advantages: the practitioners who work with AI tools daily know things about the resource's condition that no external monitor can observe. It operates with motivational advantages: the people who bear the consequences of governance failure have the strongest incentives to get governance right. It operates with adaptive advantages: governance decisions can be modified quickly in response to changing conditions, without the delays inherent in centralized regulatory processes. And it operates with legitimacy advantages: rules that emerge from collective deliberation within the community command greater compliance than rules imposed from outside.
None of this is to suggest that community governance is sufficient on its own. Ostrom was no anarchist. She recognized that state authority is necessary for certain governance functions — antitrust enforcement, international coordination, legal protection of community governance arrangements against corporate override. She recognized that market mechanisms are essential for others — the efficient allocation of resources, the incentivization of innovation, the distribution of tools and services. The argument is not that community governance replaces market and state governance. The argument is that it occupies institutional space that neither market nor state can adequately fill, and that ignoring this space — as both the market fundamentalists and the state regulators tend to do — produces governance arrangements that are systematically less effective than arrangements that incorporate all three mechanisms.
Ostrom's framework matters for the AI transition because the transition is generating governance challenges that the market-state binary cannot resolve. The degradation of the knowledge base through mass-produced AI-generated content of uncertain reliability. The thinning of the professional skills pipeline as entry-level developmental work is displaced by AI tools. The saturation of the attention economy with output that overwhelms evaluative capacity. The erosion of trust in human-generated information as the line between human and AI authorship becomes indistinguishable. Each of these challenges has the structure of a commons problem — individual decisions that are rational in isolation but collectively degrading — and each requires institutional responses that neither pure market forces nor centralized state regulation can adequately provide.
The chapters that follow apply Ostrom's institutional framework to these challenges with the specificity they demand. The analysis begins with the identification of the intelligence commons as a resource system — defining its boundaries, its resource flows, and the specific dilemmas it presents. It proceeds through a systematic application of Ostrom's eight design principles, drawing on both the theoretical framework and the empirical evidence that supports it. It examines the emerging self-governance practices of the builder community, the monitoring and accountability mechanisms that effective governance requires, and the polycentric governance structures through which governance at different scales can be coordinated. And it concludes with an assessment of the institutional work ahead — the specific governance arrangements that the intelligence commons requires, grounded in the evidence from Ostrom's research and the emerging evidence from actual AI governance experiments.
The tragedy of the commons is not inevitable. Ostrom proved this with evidence so extensive it could not be dismissed as anecdotal. The question for the intelligence commons is whether the community of practitioners, builders, and citizens whose cognitive lives are being transformed by AI will develop the institutional arrangements that sustainable governance requires — or whether the absence of those arrangements will produce the degradation that Hardin's model predicts. The evidence from four decades of commons research suggests that the outcome depends not on the technology itself but on the institutions through which the technology is governed. Those institutions are the subject of this book.
---
A common-pool resource, in the precise terminology of institutional economics, exhibits two defining characteristics. The first is subtractability: one person's use of the resource units subtracts from the quantity available to others. The second is difficulty of exclusion: it is costly or impractical to exclude potential users from accessing the resource. These two characteristics together create the governance dilemma. Because exclusion is difficult, the resource is vulnerable to overuse. Because use is subtractive, overuse degrades the resource for everyone.
Fisheries fit this definition straightforwardly. Fish caught by one boat are not available to others; the ocean is vast and monitoring is expensive. Irrigation systems fit it. Water diverted upstream is unavailable downstream; canals serve multiple users who cannot easily be disconnected. Forests fit it. Trees harvested by one logger are not available to others; forests are large and boundaries are permeable.
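Stated as a decision procedure, the definition is compact. The sketch below encodes the standard fourfold typology of goods that follows from the two attributes; the field names and examples are my own, chosen for illustration.

```python
# Classifying resources by Ostrom's two defining attributes.
# The fourfold typology is standard institutional economics;
# the examples are illustrative.

from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    subtractable: bool   # does one user's consumption subtract from others'?
    excludable: bool     # can potential users be kept out at reasonable cost?

def classify(r: Resource) -> str:
    if r.subtractable and r.excludable:
        return "private good"           # a loaf of bread
    if r.subtractable and not r.excludable:
        return "common-pool resource"   # the hard governance case
    if not r.subtractable and r.excludable:
        return "club good"              # a paywalled archive
    return "public good"                # a lighthouse beam

for r in [
    Resource("ocean fishery",   subtractable=True, excludable=False),
    Resource("irrigation flow", subtractable=True, excludable=False),
    Resource("human attention", subtractable=True, excludable=False),
]:
    print(f"{r.name}: {classify(r)}")   # all three: common-pool resource
```

That human attention lands in the same cell as the fishery is the point this chapter builds toward.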
Artificial intelligence, at first glance, does not seem to fit the framework at all. The AI tool itself — the language model, the code generation system — is a private good, owned by a corporation, distributed through commercial channels, fully excludable through pricing and access controls. There is nothing common-pool about a subscription to Claude or GPT. The exclusion mechanism is trivially effective.
But Ostrom's framework, properly applied, directs attention not to the tool itself but to the resource system in which the tool operates — the larger ecology of knowledge, skill, creative capacity, and institutional arrangement that constitutes the environment in which AI-augmented work takes place. And when attention shifts to that system, the common-pool characteristics become strikingly apparent. The resource system that the AI transition is transforming — what might be called the intelligence commons — comprises five distinct but interconnected resource flows, each exhibiting its own form of subtractability and its own governance dilemma.
The knowledge commons is the shared body of human knowledge accumulated over millennia — encoded in texts, traditions, practices, databases, and institutional memory — upon which both human creativity and AI training depend. Individual acts of knowledge use do not deplete this base in the way that catching a fish depletes a fishery. But degradation operates through a different mechanism: informational pollution. When the knowledge environment is saturated with AI-generated text that is fluent but unreliable — what AI researchers have begun calling the conditions for "model collapse" — the cost of finding genuinely valuable information increases for everyone. The quality of the search results declines. The reliability of the reference material erodes. The scholarly literature accumulates citations to sources that an AI confabulated. The commons has been degraded not by physical extraction but by the mass introduction of low-quality resource units that contaminate the pool.
A February 2025 working paper by Max Fang at Stanford, "The Tragedy of the AI Data Commons," employs law-and-economics methodologies alongside Ostrom's design principles to frame precisely this dynamic. The training data that constitutes the foundation of large language models was contributed — overwhelmingly without explicit consent — to a shared informational pool from which a small number of firms now extract enormous value. The governance arrangements under which the data was originally contributed (the norms of the open internet, the terms of service of social media platforms, the licensing frameworks of academic publishing) were designed for a world in which the data's primary use was human consumption. The appropriation of that data for AI training represents a regime change in the commons — a fundamental shift in the terms under which the resource is used, undertaken without the participation of the community whose contributions constitute the resource.
The skills commons is the distributed body of professional expertise, craft knowledge, and tacit understanding that enables high-quality knowledge work. This resource flow is subtractable in a more direct sense. When AI tools displace the entry-level work through which professionals traditionally develop their skills, the pipeline of expertise narrows. The junior engineer who never writes boilerplate code does not develop the pattern recognition that comes from thousands of hours of implementation work. The junior lawyer who never drafts a brief from scratch does not develop the analytical precision that comes from wrestling with case law. The skills that the community depends on are produced through developmental processes that the AI transition is disrupting, and the disruption affects not just the individuals whose development is interrupted but the entire professional community, which depends on a continuous flow of skilled practitioners to maintain the quality of its collective output.
The Orange Pill addresses this with the concept of ascending friction — the observation that AI does not eliminate difficulty from creative work but relocates it to a higher cognitive level. The engineer freed from syntax struggles with architecture. The writer freed from grammar struggles with judgment. The observation is important, but the commons dimension requires emphasis: the higher-level skills to which friction ascends are typically built on a foundation of lower-level competence. If the foundation erodes, the higher levels become inaccessible. The individual who skips lower-level developmental work gains an immediate benefit. The community pays a deferred cost in thinned expertise. The benefit is private and immediate; the cost is collective and delayed. This is the classic structure of commons degradation.
The attention commons is the shared space of human evaluative capacity in which creative work is produced, consumed, and assessed. This resource flow has always been finite — there are only so many hours in a day, only so many works that a person can evaluate with genuine care — but AI has accelerated the production of output to a degree that threatens to overwhelm the evaluative mechanisms on which quality depends. When anyone can produce a polished essay, a competent design, a functional application in a fraction of the time previously required, the volume of output increases dramatically, and the mechanisms through which the community identifies quality — critical review, peer assessment, editorial judgment — are strained beyond their capacity.
Attention is subtractable in the most direct sense. Attention given to one piece of output is attention not given to another. When the volume of AI-generated content increases, the fraction of attention available for any individual piece decreases. Each producer's rational decision to maximize output degrades the evaluative environment for all producers, including the producer herself. The fishery is overfished not because any individual fisher is irresponsible but because the aggregate of individually rational decisions exceeds the carrying capacity of the resource.
The trust commons is the shared reservoir of confidence in human-generated information, creative expression, and professional competence that underlies economic and social exchange. This trust has accumulated over centuries, encoded in institutions — professional licensing, academic credentialing, editorial standards, journalistic ethics — that function as monitoring and enforcement mechanisms for the quality of human output. AI disrupts this trust by making it increasingly difficult to distinguish between human-generated and AI-generated content, between genuine expertise and AI-augmented performance, between authentic creative expression and algorithmically optimized production. When the distinction becomes unreliable, the trust that the distinction supported erodes, and the erosion affects all participants regardless of how they individually relate to AI tools.
The institutional commons — the fifth and least visible resource flow — is the shared body of governance arrangements, organizational practices, professional norms, and collaborative protocols through which the community manages its relationship to the other four flows. The institutional commons is itself a common-pool resource: produced by collective effort, degraded by free-riding and neglect, systematically underprovided because its benefits are diffuse while its costs are concentrated. When one organization develops effective AI governance practices, those practices benefit the broader community — but the organization bears the full development cost while the broader community shares the benefit. Fewer organizations invest in governance innovation than the commons requires, because the incentive structure discourages it.
These five resource flows interact. The degradation of the skills commons reduces the community's capacity to evaluate quality, which accelerates the degradation of the attention commons. The erosion of the trust commons undermines the monitoring mechanisms that maintain standards, which further degrades the knowledge commons. The underinvestment in the institutional commons means that the governance arrangements needed to address these cascading degradations are themselves inadequate. The interactions create feedback loops that amplify individual dilemmas into systemic crisis.
There is a dimension of the intelligence commons that distinguishes it from natural resource commons in a way that has profound implications for governance. In a fishery, the resource units — the fish — are produced by the marine ecosystem through natural processes that the users do not control. The governance challenge is to regulate extraction so that it does not exceed the natural rate of regeneration. In the intelligence commons, the resource units — knowledge, skills, creative possibilities — are produced in significant part by the users themselves. The community does not merely extract from the commons; it constitutes the commons. The knowledge was produced by human researchers, writers, programmers, and artists. The skills are produced through the developmental trajectories of individual practitioners. The attention is contributed by the community of consumers and evaluators. The trust is generated by collective practices of honest, competent work.
The relationship between the community and the commons is therefore recursive. The community manages the commons, but the community also produces the commons, and the commons in turn shapes the community. A degraded commons produces a degraded community, which further degrades the commons. A well-managed commons produces a thriving community, which further enriches the commons. Ostrom documented this recursive dynamic in cases of both resource collapse and sustained governance success. The governance challenge is not simply to regulate extraction but to maintain the conditions under which the community continues to produce the resource on which it depends.
This recursive character is what makes the AI transition a governance crisis rather than merely a technological disruption. The technology is transforming the conditions under which the intelligence commons is produced and maintained, and the transformation is proceeding faster than the institutional arrangements can adapt. The design principles through which Ostrom analyzed successful commons governance — the subject of the next chapter — provide the most rigorous available framework for understanding what those institutional arrangements must look like.
---
Ostrom's eight design principles were not derived from abstract theory. They emerged from the systematic comparison of hundreds of cases of commons governance, successful and unsuccessful, across every inhabited continent and spanning centuries of institutional history. The principles describe characteristics that distinguish enduring, effective governance from arrangements that collapse or degrade. They are not prescriptions to be imposed from outside but regularities — patterns that appear consistently when communities successfully manage shared resources over extended periods.
Each principle describes a necessary condition. No single principle is sufficient; a governance arrangement can satisfy seven of the eight and still fail if the eighth is absent. The principles function as a system, and the system is more than the sum of its parts. Their application to the intelligence commons requires reinterpretation in light of the AI ecosystem's specific characteristics — its global scope, its rapid pace of change, its complex relationship between public and private goods. That reinterpretation is the work of this chapter.
Principle 1: Clearly Defined Boundaries. A successful commons institution must define who has the right to appropriate resource units and where the resource system itself begins and ends. Without clear boundaries, the commons is vulnerable to appropriation by outsiders who bear none of the costs of maintenance, and the community cannot develop the shared identity and mutual accountability that governance requires.
In the intelligence commons, boundaries must be constructed rather than inherited. There are no shorelines, no forest perimeters, no canal courses to serve as natural demarcations. The relevant boundaries are institutional: rules that define who is a member of the commons community, what constitutes legitimate use, and what obligations membership entails. The builder community is already forming around such boundaries — informal distinctions between those who engage with AI tools as active, critical collaborators and those who use them passively, between practitioners who contribute to shared knowledge and those who merely extract from it. The formalization of these boundaries is a critical governance step, because without clarity about membership, the free-rider problems that Hardin identified — individuals who appropriate value without contributing to maintenance — cannot be addressed.
Principle 2: Congruence Between Appropriation Rules and Local Conditions. Rules governing how much of the resource each user can take, and how much each must contribute to maintenance, must fit the specific characteristics of the resource and the community. One-size-fits-all solutions — "panaceas," as Ostrom called them — are almost always ineffective. The evidence from the Trivandrum engineering team described in The Orange Pill illustrates this directly: the organizational response that proved effective was not a uniform mandate but a set of flexible guidelines that allowed individuals and teams to adapt AI use to their specific skills, roles, and working styles. The rules appropriate for an open-source developer community differ from those appropriate for a community of freelance writers, which differ from those appropriate for a law firm integrating AI into its practice.
Principle 3: Collective-Choice Arrangements. The people affected by governance rules should participate in making and modifying those rules. This is perhaps Ostrom's most radical principle, because it challenges the assumption that governance must be either imposed by authority or determined by price signals. The current governance of AI development and deployment is concentrated in the hands of a small number of corporations, governments, and expert bodies. The practitioners most affected — the builders, workers, creators, and communities whose livelihoods and creative practices are being transformed — have little voice in the decisions that shape their working lives. Ostrom's framework identifies this exclusion as not merely unjust but inefficient: the people closest to the resource have information that centralized decision-makers lack, and governance that does not incorporate that information will be systematically less effective than governance that does.
Principle 4: Monitoring. Effective governance requires mechanisms for tracking the condition of the resource and the behavior of the community's members. Without monitoring, violations go undetected, free riders exploit the commons with impunity, and the community cannot assess whether its governance arrangements are working. This principle receives extended treatment in the next chapter, because the monitoring challenge in the intelligence commons is both more urgent and more difficult than in traditional commons — more urgent because the pace of change is rapid and the consequences of failure severe, more difficult because the resource flows that constitute the intelligence commons are inherently harder to measure than fish catches or water levels.
Principle 5: Graduated Sanctions. When community members violate governance rules, the response should be proportional to the severity and frequency of the violation. Minor first-time violations warrant mild correction — a reminder, a conversation, a calibration. Repeat or serious violations warrant escalating consequences. The graduation serves multiple functions: it gives violators the opportunity to correct their behavior, preserves the social relationships on which the commons depends, signals that rules are enforced without creating punitive rigidity, and distinguishes between inadvertent error and deliberate exploitation. In the intelligence commons, graduated sanctions might address quality failures (AI-generated work submitted without adequate review), transparency violations (concealed use of AI in contexts where disclosure is expected), or developmental shortcuts (systematic substitution of AI output for skill-building practice). The key is calibration — responses proportional to the offense, applied consistently enough to maintain norms but flexibly enough to preserve collaborative relationships.
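What "graduated" means operationally can be sketched as an escalation ladder. Everything in the sketch is hypothetical, offered to show the shape of the mechanism rather than to propose a policy: the response depends on both the severity of a violation and the violator's record, and it rises one rung at a time.

```python
# A hypothetical graduated-sanctions ladder for an AI-practice
# community. Rule names, severities, and steps are invented.

from collections import defaultdict

LADDER = [
    "reminder",
    "private conversation",
    "required re-review of recent work",
    "temporary loss of submission rights",
    "hearing before a community panel",
]

history = defaultdict(int)   # practitioner -> count of prior violations

def sanction(practitioner: str, severity: int) -> str:
    """Severity: 1 = inadvertent lapse, 2 = negligent, 3 = deliberate.

    First minor offenses land at the bottom of the ladder; repeat or
    serious offenses climb it, but no one jumps straight to the top.
    """
    step = min(history[practitioner] + (severity - 1), len(LADDER) - 1)
    history[practitioner] += 1
    return LADDER[step]

print(sanction("dev_a", 1))  # reminder
print(sanction("dev_a", 1))  # private conversation
print(sanction("dev_b", 3))  # required re-review of recent work
```

The graduation lives in the indexing: history and severity add rungs, and the ceiling keeps the response proportional rather than expulsive.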
Principle 6: Conflict-Resolution Mechanisms. Disagreements about the interpretation and application of governance rules are inevitable, and effective governance requires mechanisms for resolving them quickly, cheaply, and locally. Without accessible conflict resolution, disagreements fester, resentments accumulate, and cooperation erodes. The intelligence commons is already generating conflicts — between employers and employees about the appropriate role of AI in work, between educators and students about acceptable AI use, between practitioners who embrace AI and those who resist it, between generations with fundamentally different relationships to the tools. These conflicts are not symptoms of governance failure. They are signals of governance activity, opportunities for the institutional learning that makes governance more effective over time. The absence of mechanisms for resolving them productively is the actual governance failure.
Principle 7: Minimal Recognition of Rights to Organize. External authorities must recognize the community's right to develop and enforce its own governance rules. Without this recognition, community institutions are vulnerable to being overridden by corporate terms of service, regulatory mandates, or the exercise of market power. A model update that changes an AI system's behavior can render established community practices obsolete overnight. A pricing change can exclude members who organized their work around the previous structure. A change in terms of service can prohibit uses the community had endorsed. The recognition of self-governing rights is not a gift to be granted by the powerful. It is a condition for effective governance that serves everyone's interests — including the corporations whose AI tools depend on a healthy intelligence commons for their value.
Principle 8: Nested Enterprises. For resources that are part of larger systems, governance must be organized in multiple layers, each addressing challenges appropriate to its scale. Individual practitioners develop personal AI practices. Teams develop collaborative protocols. Organizations develop institutional policies. Professional communities develop shared standards. Governments develop regulatory frameworks. Each layer has its own governance arrangements, and the layers must be coordinated so they reinforce rather than undermine each other. A research program at Indiana University's Ostrom Workshop has been extending precisely this nested-governance model to digital commons, including AI training data and algorithmic systems, finding that the coordination failures between governance layers — not the absence of governance at any single layer — account for the most significant governance breakdowns.
The systemic character of these principles has a critical implication for the intelligence commons: governance cannot be developed one principle at a time. A commons with clear boundaries but no monitoring cannot detect free-riding. A commons with monitoring but no graduated sanctions cannot respond to detected violations. A commons with sanctions but no conflict resolution cannot address the disputes that sanctions provoke. The principles must be developed as an integrated system — imperfectly, iteratively, but simultaneously — because a system that implements some principles but not others will be less effective than a system that implements all of them, even in provisional form.
The intelligence commons is still in the earliest stages of institutional development. The principles describe what that development must produce. The next chapter examines the principle that, applied to the specific characteristics of AI-augmented work, presents the most challenging and most consequential governance problem: monitoring.
---
In the lobster fisheries of Maine, monitoring is straightforward in principle even when it is demanding in practice. A fisher can count traps. A harbor master can observe catches. A community member walking the docks can see, with her own eyes, whether boats are returning with holds full or half-empty. The resource leaves physical evidence of its condition. When the lobster population declines, fewer lobsters appear in the traps, and the decline is visible to every participant in the fishery. The community receives early warning that the resource is under stress, and the warning triggers the governance mechanisms — catch reductions, seasonal closures, boundary adjustments — that the community has developed to manage fluctuations in the resource's condition.
This visibility is not incidental to effective governance. It is foundational. Ostrom documented case after case in which the community's capacity to observe the resource's condition determined whether governance succeeded or failed. Fisheries where catches were visible — where boats returned to shared harbors and unloaded in public view — developed effective monitoring cultures. Fisheries where catches were invisible — where boats could offload at private docks or sell to intermediaries without community observation — were systematically more vulnerable to overexploitation. The same pattern held for irrigation systems (visible water levels versus underground aquifers), forests (accessible woodlands versus remote hillsides), and every other resource type in Ostrom's comparative database. Visibility enabled monitoring, monitoring enabled enforcement, enforcement maintained cooperation, and cooperation sustained the resource. Remove visibility from the sequence, and the entire governance chain weakened.
The intelligence commons presents a visibility problem of a different order entirely. The resource flows that constitute the commons — knowledge quality, skills depth, attention integrity, trust resilience — are abstract rather than physical, and their degradation manifests not as a missing fish or a lowered water level but as a gradual, diffuse, and largely imperceptible decline in the quality of the cognitive environment. A knowledge base contaminated by AI-generated misinformation does not announce its contamination. A professional skills pipeline thinned by the displacement of developmental work does not display a gauge. A trust commons eroded by the indistinguishability of human and AI-generated output does not send an alert.
The degradation is invisible in a specific and technically important sense: it is masked by the surface quality of AI-generated output. This is the feature of the intelligence commons that distinguishes its monitoring challenge from anything Ostrom encountered in her empirical work. In a fishery, the signs of overexploitation are unmistakable — smaller fish, fewer fish, declining catches per unit of effort. The resource's distress is legible to any experienced observer. In the intelligence commons, the characteristic failure mode of AI-augmented work is not poor execution but concealed judgment failure — output that is syntactically correct, stylistically polished, and apparently well-structured, but that contains errors of reasoning, fact, or interpretation that are invisible on the surface and detectable only by monitors with deep domain expertise.
The Orange Pill provides a precise illustration. The author recounts discovering that Claude had produced a passage that attributed a concept to the philosopher Gilles Deleuze — a concept that Deleuze never articulated, drawn from a work that does not exist. The passage was syntactically fluent, rhetorically convincing, and aesthetically consistent with the surrounding text. Nothing about its surface qualities signaled error. The author detected the fabrication only because he possessed independent knowledge of Deleuze's work — knowledge that allowed him to recognize the gap between what the passage claimed and what was actually true. Without that independent knowledge, the passage would have entered the text, and from there potentially into the reader's understanding, without any alarm sounding at any point in the chain.
This is not an isolated anecdote. It describes the characteristic quality-failure mode of AI-augmented work across every domain in which such work is being produced. Code that compiles and runs but contains architectural flaws invisible to anyone who did not design the system. Legal analysis that cites relevant precedents but mischaracterizes their holdings in ways that only a specialist would catch. Medical summaries that organize symptoms correctly but draw diagnostic inferences that reflect statistical patterns rather than clinical judgment. In each case, the surface is smooth. The error lives underneath.
The implications for monitoring are severe. Traditional quality-assessment methods were designed to detect failures of execution — poorly written prose, buggy code, clumsy design. These failures are visible on the surface and can be detected by monitors with general competence. AI-augmented quality failures are failures of judgment — prose that is well-written but conceptually confused, code that runs but is architecturally unsound, analysis that is organized but inferentially wrong. Detection requires not general competence but specific, deep, domain-relevant expertise — the very expertise that the skills commons is, under current conditions, at risk of producing less of.
The monitoring challenge is therefore caught in a feedback loop. The resource whose degradation must be monitored (the skills commons) is the same resource that produces the capacity to monitor (deep domain expertise). As the skills commons thins — as fewer practitioners develop the depth of understanding that comes from extended engagement with difficulty, because AI tools allow the difficulty to be bypassed — the community's capacity to detect the degradation declines in lockstep with the degradation itself. The community becomes less able to see the problem at precisely the rate at which the problem worsens.
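The lockstep claim can be given a qualitative shape with two coupled quantities: a purely illustrative sketch, with all rates invented, in which detection capacity is produced by the very skills base whose erosion it is supposed to detect.

```python
# Qualitative sketch of the monitoring feedback loop. The rates are
# invented; only the shape of the dynamic is the point: as skills
# thin, detection weakens, so pollution accumulates faster, which
# thins skills further.

skills = 1.0      # normalized depth of the community's expertise
pollution = 0.1   # share of unreliable output in the knowledge pool

for year in range(10):
    detection = skills                          # monitoring tracks expertise
    pollution += 0.05 + 0.2 * (1 - detection)   # inflow plus undetected junk
    pollution = min(pollution, 1.0)
    skills = max(skills - 0.05 * pollution, 0.0)  # a polluted pool erodes training
    print(f"year {year}: skills={skills:.2f} pollution={pollution:.2f}")
```

The losses are small at first and accelerating by the end, which is exactly what makes the early warning signals easy to miss.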
Ostrom documented analogous feedback loops in cases of resource collapse — situations in which the community failed to recognize degradation until it had become severe, because the early warning signals were too subtle to detect without the very monitoring infrastructure that the community had not yet built. The collapse of Atlantic cod stocks in the early 1990s is the canonical example: the fishery degraded over decades, the degradation was masked by improvements in fishing technology that maintained catch levels even as the underlying population declined, and by the time the collapse became unmistakable, the population had fallen below the threshold from which recovery was possible. The monitoring failure was not that no one was watching. It was that the indicators being watched — total catch — were misleading, because they measured extraction efficiency rather than resource health.
The intelligence commons faces a parallel indicator problem. The most visible metrics of AI-augmented work — output volume, production speed, surface-level quality scores — measure extraction efficiency rather than resource health. An organization that tracks the volume of AI-assisted code commits, the speed of content production, or the quantity of client deliverables is monitoring extraction. It is not monitoring the health of the underlying resource: the depth of its practitioners' understanding, the reliability of its knowledge base, the integrity of its quality-assessment processes. These deeper indicators are harder to measure, but they are the ones that matter for the long-term sustainability of the commons.
The development of meaningful health indicators for the intelligence commons is itself a governance task of the first order. Ostrom's framework suggests several directions. First, community-based monitoring must be prioritized over external monitoring, because the people best positioned to assess the health of the resource are the practitioners who work within it daily. Code review processes that assess not just correctness but comprehension — does the developer understand the code that was produced? Peer evaluation mechanisms that distinguish between output that reflects genuine engagement and output that passes AI-generated text through without critical examination. Mentoring relationships in which experienced practitioners systematically evaluate whether junior colleagues are developing genuine capability or merely learning to produce the appearance of capability.
A 2025 study published in Artificial Intelligence, building computational models of Ostrom's Institutional Analysis and Development framework for multi-agent systems, suggests a further dimension: the formalization of monitoring rules as part of the governance architecture itself. The researchers developed what they call an "Action Situation Language" to encode institutional rules — including monitoring obligations — directly into the structure of agent interactions. While the work is technically oriented toward artificial multi-agent systems, the underlying principle is directly relevant: monitoring should not be an afterthought or an add-on to the governance system. It should be architecturally embedded in the institutional arrangements through which the community operates.
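The spirit of that finding can be illustrated without reproducing the paper's formalism. In the toy loop below, which is my own construction and not the study's Action Situation Language, the monitoring obligation is part of the action's definition: an unreviewed submission is not a violation to be caught later but an inadmissible move that the institution never registers.

```python
# Illustrative only: monitoring embedded in the structure of the
# interaction itself, rather than bolted on afterward. This is not
# the Action Situation Language from the study cited above.

from dataclasses import dataclass

@dataclass
class Submission:
    author: str
    content: str
    reviewed_by: str | None = None   # the monitoring obligation lives here

def admissible(sub: Submission) -> bool:
    """A rule of the action situation: independent review is required.

    Self-review and missing review do not trigger sanctions downstream;
    they simply fail to count as a move in the first place.
    """
    return sub.reviewed_by is not None and sub.reviewed_by != sub.author

record = []
for sub in [
    Submission("dev_a", "patch 1", reviewed_by="dev_b"),
    Submission("dev_c", "patch 2"),                       # no review
    Submission("dev_d", "patch 3", reviewed_by="dev_d"),  # self-review
]:
    if admissible(sub):
        record.append(sub)

print(len(record))  # 1: only the independently reviewed submission lands
```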
Second, organizations must invest in the time that monitoring requires. Code review, peer assessment, mentoring — each of these mechanisms demands time that organizations optimized for productivity are reluctant to provide. The Berkeley researchers whose study The Orange Pill discusses found that AI-augmented work intensified rather than reduced the pace of production, filling every available moment with additional tasks. Under these conditions, the time required for monitoring is the first casualty. The irony is precise: the technology that most urgently requires monitoring also creates the conditions under which monitoring is least likely to occur. Ostrom's framework predicts that organizations that sacrifice monitoring for productivity will eventually pay the cost in degraded quality, skill atrophy, and institutional brittleness. The prediction does not make the cost easier to bear in the short term, but it clarifies the stakes.
Third, the gap between formal monitoring rules and actual monitoring practice must be tracked and addressed. The distinction between rules-in-form and rules-in-use is one of Ostrom's most important analytical tools. An organization may have a formal policy requiring human review of AI-generated output. In practice, the time pressure of actual work may reduce "review" to a cursory glance. The formal rule exists. The rule-in-use does not match. This gap erodes the legitimacy of the entire governance system, because community members who observe that formal rules are routinely violated without consequence lose confidence in the governance framework. They adjust their behavior accordingly.
Accountability is the complement of monitoring. A monitoring system that detects problems without producing consequences is surveillance without governance. Ostrom was clear that monitoring without accountability fails to maintain cooperation. When violations are observed but not addressed, the incentive structure shifts: compliance becomes costly relative to non-compliance, and the cooperative norms that sustained the commons erode.
In the intelligence commons, accountability mechanisms must be graduated — Ostrom's fifth design principle. A junior practitioner who submits AI-generated work without adequate review receives a different response than a senior practitioner who systematically conceals AI reliance. A first-time transparency violation receives a different response than a repeated pattern. The calibration preserves both the community's norms and the relationships that sustain them.
The monitoring challenge in the intelligence commons is not insurmountable. Communities have developed effective monitoring for resources far less tractable than knowledge quality and professional expertise — monitoring that operates through social networks, shared observation, and the accumulated judgment of experienced practitioners rather than through centralized measurement systems. The challenge is to develop the institutional infrastructure that makes such monitoring systematic rather than accidental, embedded in organizational practice rather than dependent on individual initiative, and supported by the time and resources that effective monitoring demands. Ostrom's design principles identify what that infrastructure must look like. The builder community must provide the institutional creativity to build it.
The question of who makes the rules is prior to the question of what the rules should be. Ostrom understood this with a clarity that separated her work from nearly every other tradition in policy analysis. Most governance scholarship begins with the content of rules — what should be permitted, what prohibited, what incentivized, what taxed — and treats the question of rule-making authority as settled or uninteresting. Ostrom began with the authority question, because her empirical research had demonstrated, repeatedly and across vastly different institutional contexts, that the provenance of rules determines their effectiveness at least as much as their content does.
The huerta irrigation tribunals of Valencia have adjudicated water disputes every Thursday since at least the tenth century. The rules they enforce are not especially sophisticated by modern legal standards. What makes them effective is not their content but their legitimacy — the fact that they were developed by the irrigators themselves, are administered by judges elected from among the irrigators, and are enforced through mechanisms that the irrigating community designed, monitors, and maintains. When the Spanish government attempted to impose a centralized water-management system in the nineteenth century, the huerta tribunals outperformed it on every metric that mattered to the people actually using the water: speed of dispute resolution, perceived fairness, rate of compliance. The centralized system had better-trained administrators and more comprehensive rules. The community system had legitimacy, and legitimacy won.
The intelligence commons is being governed, right now, by rules that the governed community did not make. Corporate terms of service determine what practitioners can and cannot do with AI tools. Organizational policies, typically drafted by legal and compliance departments without meaningful input from the practitioners who must live with them, determine how AI may be used in professional contexts. Government regulations, developed through processes that privilege corporate lobbying over practitioner voice, establish the legal framework within which AI development and deployment occur. In each case, the pattern is the same: rules are made by actors who are distant from the resource, enforced through mechanisms that lack contextual sensitivity, and experienced by the governed community as impositions rather than agreements.
Ostrom's third design principle — collective-choice arrangements — holds that the people affected by governance rules should participate in making and modifying those rules. The principle is not a democratic aspiration grafted onto an otherwise technocratic framework. It is an empirical finding: governance arrangements in which the governed participate in rule-making outperform arrangements in which they do not, across a wide range of institutional contexts and resource types. The reasons are structural. Participants have informational advantages — they know things about the resource and the community that external rule-makers cannot know. They have implementation advantages — rules they helped design are rules they understand, which reduces the gap between rules-in-form and rules-in-use. And they have motivational advantages — rules they participated in making are rules they have a stake in maintaining, which increases voluntary compliance and reduces enforcement costs.
The current exclusion of practitioners from AI governance decisions is therefore not merely unjust. It is inefficient. The practitioners who work with AI tools daily know things about the tools' effects — on quality, on skill development, on collaborative dynamics, on the texture of professional judgment — that no corporate executive, government regulator, or ethics-board member can access from a distance. When a major AI company updates its model in ways that change the behavior practitioners depend on, the practitioners discover the changes through their work, often before the company's own documentation catches up. When an organizational AI policy creates perverse incentives — rewarding speed metrics that AI tools can inflate while ignoring quality indicators that require human judgment to assess — the practitioners feel the perversity immediately, while the policy-makers may not feel it for quarters or years.
This informational asymmetry is not a peripheral concern. It is the central structural problem of AI governance as currently constituted. The people who make the rules lack the information that effective rules require, and the people who have the information lack the authority to incorporate it into the rules. The result is governance that is formally elaborate but practically disconnected — policies that look comprehensive on paper but that bear little relationship to the actual conditions of AI-augmented work. The gap between rules-in-form and rules-in-use, which Ostrom identified as one of the most reliable predictors of governance failure, is wide and widening.
Closing this gap requires not the replacement of corporate and government governance with practitioner governance but the incorporation of practitioner knowledge into governance processes at every level. Ostrom's framework does not advocate for the abolition of external authority. It advocates for the recognition that external authority is insufficient without the informational and motivational contributions that community participation provides. The most effective governance arrangements in Ostrom's empirical database were those that combined external authority with internal participation — arrangements in which the state provided legal recognition and enforcement capacity while the community provided local knowledge, adaptive management, and the legitimacy that comes from collective authorship of the rules.
What would collective-choice arrangements look like in the intelligence commons? The question admits multiple answers, because Ostrom's second design principle — congruence between rules and local conditions — requires that institutional forms be adapted to specific contexts rather than imposed uniformly. But several structural features recur across Ostrom's cases with sufficient regularity to suggest their relevance.
First, practitioners must have formal standing in the governance bodies that make decisions affecting their work with AI tools. This means more than consultation — more than the surveys, town halls, and feedback forms through which organizations typically gesture toward inclusion. It means decision-making authority: seats at the table where policies are drafted, votes on the rules that will govern practice, the capacity to block or modify proposals that the practitioners who must implement them judge to be unworkable.
Second, governance rules must be revisable through processes that are accessible to the governed community. The huerta tribunals succeed in part because the irrigators can propose changes to the rules through a process that is simple, transparent, and responsive. A rule that is not working can be modified at the next meeting. A dispute about interpretation can be raised and resolved within days. The contrast with the current governance of AI practice — in which rule changes require navigating corporate bureaucracies, regulatory processes, or terms-of-service modifications that the community cannot influence — could hardly be starker.
Third, the cost of participation must be low enough that practitioners with genuine grievances are not deterred from raising them. Ostrom documented governance failures in commons where the cost of participation — measured in time, money, social risk, or opportunity cost — was too high for ordinary members. The governance process was captured by the members who could afford to participate, typically the wealthiest or most powerful, and the resulting rules reflected their interests rather than the community's. In the AI context, this means that governance processes must be designed to accommodate practitioners who have day jobs, who cannot afford to attend conferences, who may face retaliation from employers for raising concerns about AI policy, and whose expertise is practical rather than theoretical.
The power asymmetries that characterize the AI ecosystem make collective-choice arrangements both more necessary and more difficult than in most of the commons Ostrom studied. The concentration of AI capabilities in a small number of corporate platforms creates a dependency relationship that constrains the community's governance options. When a single company controls the primary tool through which most builders work, that company's decisions about model updates, pricing, access restrictions, and data policies can override community governance arrangements without consent or notice. A model update that changes behavior can render established practices obsolete overnight. A pricing change can exclude members who organized their work around the previous structure. A change in terms of service can prohibit uses the community had collectively endorsed.
Ostrom encountered analogous power asymmetries in many of the cases she studied. Irrigation communities in developing countries negotiated self-governance in the shadow of government agencies that could override community decisions at any time. Fishing communities managed their resources under threat from commercial trawlers. Forest communities protected their practices against logging companies. In each case, the community's capacity for self-governance depended not just on internal institutional quality but on the community's ability to defend its institutions against external threats.
The builder community faces comparable external pressures, and the strategies Ostrom documented for managing those pressures have direct relevance. Diversification of dependencies — supporting open-source AI development to reduce reliance on any single corporate platform. Coalition building — developing alliances among practitioner communities, professional associations, and civil society organizations that can amplify practitioner voice in governance processes. Institutional buffering — creating organizational structures that insulate community governance from the volatility of corporate decision-making. Political capacity — cultivating the ability to influence the external decisions that affect community governance, through organized advocacy, participation in regulatory processes, and the strategic deployment of the expertise that practitioners uniquely possess.
The question of who governs the governors is ultimately a question about institutional legitimacy. Rules that are made by the people who must live with them command voluntary compliance. Rules imposed from outside must be enforced, and enforcement is expensive, imperfect, and corrosive of the cooperative relationships on which effective governance depends. The intelligence commons will not be governed effectively by corporate fiat or regulatory mandate alone. It requires the active participation of the practitioners whose daily work constitutes the resource — participation not as consulted stakeholders but as co-authors of the institutional arrangements that govern their professional lives.
Ostrom's seventh design principle — minimal recognition of rights to organize — follows directly. The community's right to develop and enforce its own governance rules must be recognized by the external authorities whose decisions shape the environment in which self-governance operates. This recognition is not charity. It is a condition for effective governance that serves everyone's interests, including the interests of the corporations whose AI tools depend on a healthy intelligence commons for their long-term value.
---
In the alpine commons of Törbel, Switzerland — one of Ostrom's most carefully documented cases — the community maintained its shared meadows through a system of graduated sanctions that had been refined over centuries. A herder who grazed more cattle than the rules permitted received, on the first offense, a visit from a neighbor. The conversation was informal, sometimes conducted over wine, and its purpose was correction rather than punishment. The neighbor explained the violation, reminded the herder of the agreed stocking limits, and asked for compliance. Most first violations were resolved at this stage. The social cost of the visit — the mild shame of being identified as a rule-breaker in a community where reputation mattered — was sufficient to produce behavioral change.
A second violation triggered a more formal response: a report to the community's elected officials and a modest fine. A third brought the matter before the full community assembly, where the violation was discussed publicly and a heavier penalty imposed. Only repeated, egregious violations — violations that demonstrated not carelessness but contempt for the community's rules — resulted in exclusion from the commons. The escalation was deliberate. Each step was calibrated to the severity and apparent intent of the violation, and at each step the violator had the opportunity to correct behavior and remain in good standing.
The graduation served purposes that a single, severe penalty could not. It preserved information — a mild correction tells the violator what the community expects, which a punishment does not necessarily do. It preserved relationships — a neighbor's visit maintains the social bond that the commons depends on, while a punitive sanction strains it. It distinguished between error and exploitation — a system that treats inadvertent overgrazing the same as deliberate free-riding fails to recognize a morally and practically important difference. And it maintained the community's capacity for self-governance, because a system in which penalties are proportional and predictable is a system in which members feel treated fairly, and perceived fairness is the foundation of voluntary compliance.
The intelligence commons is developing norms around AI use at a pace that outstrips the development of enforcement mechanisms. Attribution norms — whether and how to disclose AI assistance — vary across communities. Quality standards — what level of human review AI-augmented output requires — vary across organizations. Developmental expectations — whether practitioners should invest in building skills that AI can substitute for — vary across professions. The norms are fragmented, contested, and often tacit rather than articulated. The enforcement of those norms is even more fragmented. Some communities enforce attribution expectations through social pressure. Most do not enforce them at all.
The absence of graduated sanctions does not mean the absence of consequences. It means that the consequences are arbitrary rather than calibrated, severe rather than graduated, and destructive rather than corrective. An academic caught using AI to generate a published paper may face career-ending repercussions, not because the community's sanctioning system prescribed that penalty but because the absence of a sanctioning system left the response to ad hoc judgment, and ad hoc judgment in a climate of anxiety tends toward severity. A junior employee who submits AI-generated work without review may receive no feedback at all, not because the community considers the behavior acceptable but because no one has established the institutional infrastructure for providing feedback. The same behavior is punished catastrophically in one context and ignored entirely in another, with no institutional logic connecting the two.
This inconsistency is the opposite of what Ostrom's framework prescribes. Effective sanctions are consistent — applied according to rules that the community has agreed upon, so that members can predict the consequences of their behavior. They are graduated — escalating with the severity and frequency of violation, so that the response is proportional to the offense. They are educative — designed to communicate the community's expectations, not merely to inflict cost. And they are embedded in relationships — administered by people who know the violator, understand the context, and have an interest in maintaining the relationship alongside the norm.
What might graduated sanctions look like in the intelligence commons? The specific forms will vary across communities and contexts, consistent with Ostrom's insistence on congruence between institutional arrangements and local conditions. But the structural features can be identified.
For quality violations — AI-generated work that has not received adequate human review — a first response might be private feedback from a peer or mentor, identifying the specific quality concern and explaining the community's expectations. A second occurrence might trigger a required review process — a temporary requirement that the practitioner's AI-augmented output be reviewed by a colleague before submission. A third might result in a reassignment of responsibilities, restricting the practitioner's autonomy until competence and judgment have been demonstrated. Only persistent, deliberate disregard for quality standards would warrant more severe consequences.
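For readers who build, the shape of such a ladder is easy to make concrete. What follows is a minimal Python sketch, not a prescription: the rung names, the one-offense-per-rung escalation, and the idea that sustained compliance walks a record back down are illustrative assumptions of mine, not Ostrom's specification or any community's actual policy.

```python
from dataclasses import dataclass, field
from enum import Enum

class Response(Enum):
    PEER_FEEDBACK = "private feedback from a peer or mentor"
    REQUIRED_REVIEW = "temporary pre-submission review by a colleague"
    REASSIGNMENT = "restricted autonomy until judgment is demonstrated"
    FORMAL_PROCEEDING = "referral to the community's governance body"

@dataclass
class SanctionLadder:
    """Graduated sanctions: each repeat violation escalates one rung."""
    offenses: dict[str, int] = field(default_factory=dict)

    def record_violation(self, practitioner: str) -> Response:
        count = self.offenses.get(practitioner, 0) + 1
        self.offenses[practitioner] = count
        ladder = [Response.PEER_FEEDBACK, Response.REQUIRED_REVIEW,
                  Response.REASSIGNMENT, Response.FORMAL_PROCEEDING]
        return ladder[min(count, len(ladder)) - 1]

    def record_compliance(self, practitioner: str) -> None:
        # Proportionality cuts both ways: good standing is recoverable.
        if self.offenses.get(practitioner, 0) > 0:
            self.offenses[practitioner] -= 1
```

The point of the sketch is the indexing: the response is a pure function of the record, which makes it predictable, and predictability is what makes a sanction feel fair.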
For transparency violations — concealed AI assistance in contexts where the community expects disclosure — the logic is similar. Education first, formal process second, consequences third. The graduation recognizes that transparency norms are still in formation, that practitioners may genuinely not know what the community expects, and that the purpose of the sanction is to maintain the norm, not to punish the person.
For developmental shortcuts — the systematic use of AI to avoid skill-building work — the challenge is more subtle, because the violation is not an event but a pattern, and the pattern may not be visible to anyone other than the practitioner herself. Monitoring for developmental shortcuts requires the kind of mentoring relationship described in the previous chapter — an ongoing evaluative relationship between an experienced practitioner and a junior colleague, through which the experienced practitioner can assess whether the junior colleague is building genuine capability or merely learning to produce its appearance. The "sanction" for developmental shortcuts is not punishment but intervention — a restructuring of the practitioner's work to ensure that developmental experiences are embedded in the workflow, even when AI tools make them optional.
Conflict resolution is the institutional complement of graduated sanctions. Ostrom's sixth design principle holds that effective governance requires accessible, low-cost mechanisms for resolving disputes about the interpretation and application of rules. The distinction between sanctions and conflict resolution is important: sanctions address clear violations of established norms, while conflict resolution addresses disagreements about what the norms require or whether they apply in a particular situation.
The intelligence commons is generating conflicts of the second type with increasing frequency. Disputes about whether a particular use of AI tools crosses the line between assistance and substitution. Disagreements between team members about the appropriate level of AI reliance for a given task. Tensions between practitioners who embrace AI as a creative partner and practitioners who view it as a threat to professional standards. Intergenerational friction between those who built their expertise through years of friction-rich developmental work and those who are building their expertise in an environment where much of that friction has been removed.
These conflicts are not failures of governance. They are the raw material from which governance is constructed. Each dispute, properly resolved, produces institutional learning — a clearer understanding of what the norms require, how they apply in ambiguous cases, and where they need to be modified. The huerta tribunals of Valencia have been refining their governance arrangements for a millennium, not because the original arrangements were perfect but because the steady stream of disputes and their resolution produced a continuous process of institutional adaptation.
The builder community currently lacks the institutional infrastructure for productive conflict resolution. Most conflicts about AI use are resolved — or more often not resolved — through one of three mechanisms, none of which Ostrom's framework endorses. The first is avoidance: the conflict is not raised, the tension festers, and the relationship or the institution absorbs the cost. The second is hierarchy: a manager or executive resolves the dispute by fiat, based on organizational authority rather than contextual knowledge or deliberative process. The third is exit: one party leaves the organization or the community rather than engage with a conflict for which no resolution mechanism exists.
Building conflict-resolution infrastructure is among the most immediately actionable governance tasks facing the builder community. Forums where practitioners can raise concerns about AI governance without fear of retaliation. Processes through which competing claims about appropriate AI use can be evaluated on their merits by people with relevant expertise and contextual understanding. Mechanisms through which the outcomes of deliberation are recorded and transmitted, so that each resolved conflict contributes to the community's institutional memory rather than disappearing when the participants move on.
The institutional work is neither glamorous nor dramatic. It is the patient construction of the arrangements through which a community of diverse individuals, with diverse relationships to a powerful and rapidly evolving technology, can manage their disagreements productively and maintain the cooperative norms on which their shared resource depends.
---
The eighth of Ostrom's design principles addresses a challenge that becomes acute when the resource system operates at multiple scales simultaneously. A local fishery is part of a regional marine ecosystem, which is part of a national maritime jurisdiction, which is part of the global ocean. Governance that is effective at one scale may be ineffective or counterproductive at another. The local community may manage its fishery sustainably, but if regional overfishing depletes the broader ecosystem, the local effort is undermined. National regulations designed to protect the marine environment may impose rules inappropriate for local conditions, disrupting arrangements that were functioning well. The governance challenge is not to choose the right scale but to coordinate governance across scales so that arrangements at each level reinforce rather than undermine each other.
Ostrom's solution is nested enterprises — governance organized in multiple layers, each addressing challenges appropriate to its scale, connected through institutional linkages that enable communication, coordination, and mutual adjustment. The concept is not hierarchy in the traditional sense. It is not a chain of command in which lower levels take orders from higher levels. It is a system in which each level maintains its own authority, rules, and decision-making processes, and the relationships between levels are characterized by coordination rather than subordination.
The intelligence commons operates at multiple scales simultaneously, and the governance challenges at each scale are distinct. At the individual level, the challenge is the development and maintenance of personal practices — the habits, routines, and evaluative disciplines that determine whether a practitioner's relationship to AI tools is developmental or substitutive. At the team level, the challenge is collaborative protocol — how AI-augmented work is integrated into shared workflows without compromising quality, transparency, or mutual accountability. At the organizational level, the challenge is institutional policy — balancing the benefits of AI adoption against the risks of skill atrophy, quality degradation, and cultural transformation. At the professional level, the challenge is shared standards — maintaining quality and integrity across an entire field of practice. At the societal level, the challenge is regulatory framework — protecting the public interest without stifling innovation or overriding the self-governing capacity of the communities most directly affected.
The failure to coordinate governance across these scales is already producing the characteristic pathologies that Ostrom documented in poorly nested commons. An individual practitioner develops a thoughtful personal practice for evaluating AI output — checking references, testing code independently, maintaining a critical stance toward the tool's suggestions. The organization in which she works mandates faster production cycles that leave no time for the review her practice requires. The individual-level governance is undermined by organizational-level governance, and the practitioner must choose between her professional standards and her employer's expectations. This is not a failure of either the individual practice or the organizational policy considered in isolation. It is a failure of coordination between governance at different scales.
The pattern repeats in other configurations. An organization invests in AI training and skill development for its practitioners, creating structured developmental pathways that integrate AI tools while preserving the friction-rich experiences essential for building deep competence. The professional community in which the organization operates has not developed standards that recognize or reward this investment. Competing organizations that skip developmental investment and optimize purely for speed are not penalized by professional norms, because the professional norms have not caught up with the technology. The organizational-level governance is undermined by the absence of professional-level governance, and the investing organization faces competitive disadvantage for doing what the commons requires.
A national government develops a regulatory framework for AI use in professional contexts — disclosure requirements, quality standards, accountability mechanisms. The regulations are designed at a level of abstraction appropriate for legislation but inappropriate for the specific conditions of practice in diverse professional domains. A disclosure requirement that is sensible for legal briefs may be meaningless for collaborative software development. A quality standard designed for medical diagnostics may be counterproductive for creative writing. The societal-level governance is incongruent with the conditions at the professional and organizational levels, and the regulations produce compliance behaviors — formal adherence to the letter of the rule without substantive engagement with its purpose — rather than the genuine governance they were designed to provide.
In each case, the problem is not the governance at any single level but the absence of institutional linkages between levels. Information does not flow effectively upward from practitioners to regulators or downward from policy-makers to teams. Rules developed at one level are not calibrated to the conditions at other levels. The feedback that would allow governance at each level to learn from governance at other levels is interrupted or absent.
A 2025 study in Global Public Policy and Governance, applying Ostrom's design principles to AI governance among the United States, China, and the European Union, found that the governance failures it documented were predominantly coordination failures rather than capacity failures. Each jurisdiction had developed AI governance arrangements of varying sophistication. The failures occurred at the interfaces — where the governance arrangements of different jurisdictions interacted, overlapped, or contradicted each other. The researchers concluded that a polycentric multilevel arrangement of governance mechanisms would be more effective than any single centralized mechanism, provided that the arrangement included the coordination infrastructure that polycentricity requires.
The same conclusion applies within domestic governance systems. The intelligence commons does not need a single governance authority. It needs governance at every relevant scale, connected by institutional linkages that enable coordination. Those linkages take specific forms: information channels through which practitioner experience flows to organizational and professional governance bodies; feedback mechanisms through which the effects of higher-level policies on ground-level practice are tracked and reported; coordination forums in which representatives of governance at different scales can negotiate the adjustments needed to maintain congruence; and bridging organizations that span the boundaries between governance levels and facilitate communication across them.
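One of those linkages, the information channel, has a familiar software analogue. The sketch below models it as a publish-subscribe bus; every name in it is hypothetical, and the genuinely institutional questions, who subscribes and who is obligated to act on what they hear, are exactly what code cannot settle.

```python
from collections import defaultdict
from typing import Callable

class LinkageBus:
    """A toy information channel between governance levels: any center can
    publish an observation, and every level that registered an interest
    receives it, without any level intercepting or owning the channel."""

    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable[[str, str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str, str], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, source: str, observation: str) -> None:
        for handler in self.subscribers[topic]:
            handler(source, observation)

bus = LinkageBus()
# A professional standards body and a regulator both listen for
# ground-level quality reports on the same channel.
bus.subscribe("quality", lambda src, obs: print(f"[standards body] {src}: {obs}"))
bus.subscribe("quality", lambda src, obs: print(f"[regulator] {src}: {obs}"))
bus.publish("quality", "practitioner-team-7",
            "review time no longer covers the volume of AI-generated output")
```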
Indiana University's Ostrom Workshop has been extending precisely this nested-governance model to digital commons, including AI training data and algorithmic decision systems. Its research indicates that the most significant governance breakdowns occur not within any single level of governance but at the interfaces between levels — where organizational policies meet professional standards, where professional standards meet regulatory requirements, where regulatory requirements meet the actual conditions of practice. The analytical focus, consistent with Ostrom's framework, is on the linkages rather than the levels.
The temporal dimension of nesting deserves particular attention. The intelligence commons requires governance at multiple time scales simultaneously. At the shortest scale — the scale of daily practice — governance concerns immediate quality: review processes, attribution protocols, tool-selection decisions. At the medium scale — the scale of organizational quarters and professional development cycles — governance concerns the maintenance of skill pipelines, the calibration of quality standards, the management of transition from pre-AI to AI-augmented practice. At the longest scale — the scale of decades and generations — governance concerns the preservation of the cognitive infrastructure on which the intelligence commons ultimately depends: deep expertise, evaluative traditions, institutional memory, the capacity for original thought.
Governance at each time scale must be nested within governance at the others. Short-term optimization that undermines long-term sustainability is a time-scale coordination failure — the temporal equivalent of an organizational policy that overrides individual professional judgment. The organization that maximizes quarterly output by encouraging uncritical AI use is making this error: optimizing short-term productivity at the expense of long-term institutional capacity. Ostrom documented analogous temporal coordination failures in natural resource commons where short-term extraction was permitted to exceed long-term regeneration rates, producing precisely the collapse that Hardin's model predicted — not because the community was incapable of self-governance, but because the governance arrangements failed to coordinate across time scales.
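The dynamic is worth seeing in miniature. The toy model below is the standard logistic-growth-with-harvest exercise, not a model of the intelligence commons; the numbers are arbitrary. It shows only the structural point: a harvest below the regeneration peak settles at a positive equilibrium, while one above it produces slow decline and then collapse.

```python
def simulate_stock(initial: float, regen_rate: float, capacity: float,
                   harvest: float, years: int) -> list[float]:
    """Logistic regeneration minus a fixed annual harvest."""
    stock, history = initial, []
    for _ in range(years):
        regrowth = regen_rate * stock * (1 - stock / capacity)
        stock = max(stock + regrowth - harvest, 0.0)
        history.append(stock)
    return history

# Peak regeneration here is regen_rate * capacity / 4 = 50 units per year.
print(simulate_stock(500, 0.2, 1000, 20, 50)[-1])  # settles near 887: sustainable
print(simulate_stock(500, 0.2, 1000, 60, 50)[-1])  # 0.0: the commons is gone
```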
The nested governance of the intelligence commons must be designed for resilience — the capacity to absorb disturbance without losing essential structure and function. Each level must be robust enough to maintain the commons' critical conditions while flexible enough to adapt to change. The nesting must ensure that governance at each level reinforces governance at the others, creating a system more resilient than any single level could be alone. This is the institutional achievement that effective commons governance produces, and it is the achievement the intelligence commons most urgently requires.
---
There is a dimension of the intelligence commons that Ostrom's early work on small-scale, relatively egalitarian communities did not fully address but that her later research, particularly her engagement with large-scale commons and political ecology, confronted directly: the distorting effect of power asymmetries on governance processes. A commons in which all participants have roughly equal stakes, roughly equal information, and roughly equal capacity to influence governance decisions is a commons in which Ostrom's design principles can operate with maximum effectiveness. A commons in which power is concentrated — in which a small number of actors control the resource, set the terms of access, and dominate the governance process — is a commons in which the design principles must contend with structural constraints that can render them inoperative.
The intelligence commons is characterized by power concentration of an extraordinary degree. A small number of corporations — fewer than a dozen, arguably fewer than five — control the AI models on which the builder community depends. These corporations determine the capabilities of the tools, the pricing of access, the terms under which the tools may be used, and the training data from which the models learn. Their decisions about model architecture, training methodology, safety constraints, and deployment strategy shape the environment in which the entire intelligence commons operates. The community of practitioners has, in most cases, no formal mechanism for influencing these decisions and no institutional recourse when the decisions produce outcomes that damage the commons.
This power concentration manifests at every level of the governance challenge. At the boundary level, corporations control who can access AI tools through pricing and terms of service, effectively determining the composition of the commons community. At the rule-making level, corporate policies function as de facto governance rules that practitioners must follow, regardless of whether those rules were developed with practitioner input or serve practitioner interests. At the monitoring level, corporations control the data about how their tools are used, making independent assessment of the tools' effects on the commons dependent on corporate willingness to share information. At the enforcement level, corporations can unilaterally modify or revoke access, imposing sanctions without the procedural protections that Ostrom's framework requires.
The training-data question brings the power dimension into sharpest focus, because it involves the appropriation of a resource that was produced collectively but is being governed unilaterally. The text, images, code, and other data on which large language models are trained was generated by millions of individuals — writers, programmers, artists, researchers, ordinary people posting on social media — under institutional arrangements that did not contemplate this use. The norms of the open internet, the terms of service of social platforms, the licensing frameworks of academic publishing — these were the governance arrangements under which the data was originally contributed. They were designed for a world in which the data's primary use was human consumption: reading, reference, learning, communication.
The appropriation of this data for AI training represents what Ostrom's framework identifies as a regime change in the commons — a fundamental shift in the terms under which the resource is used, undertaken without the participation of the community whose contributions constitute the resource. The herders did not consent to having the pasture enclosed. The fishers did not agree to the sale of commercial licenses. The contributors to the training-data commons did not participate in the decision to use their contributions for purposes that the original governance arrangements did not address.
The conventional response to this appropriation follows Hardin's dichotomy. One camp argues for privatization: establish clear property rights over data, create licensing regimes, require compensation for contributors. The other argues for state regulation: mandate data governance, require algorithmic auditing, impose transparency obligations. Both responses have merit and both have limitations. The privatization approach encounters the practical difficulty that the data in question was not produced as property — it was produced as communication, as expression, as participation in a shared informational environment — and retroactively imposing property frameworks on data that was generated under different institutional assumptions creates distortions that may be worse than the problem they address. The regulatory approach encounters the enforcement challenges that attend any attempt to regulate global digital systems through national legal frameworks.
Ostrom's framework suggests a third approach: the development of governance arrangements by the communities whose contributions constitute the resource. The Mozilla Foundation, collaborating with scholars at the Ostrom Workshop at Indiana University, has developed a practical framework for applying Ostrom's design principles to data commons governance. The framework identifies the specific institutional features that data governance arrangements require: clear definitions of the data commons' boundaries (what data is included, who has contributed it, who may access it), congruent rules for data use (different rules for different types of data and different contexts of use), collective-choice mechanisms (processes through which contributors participate in governance decisions), and monitoring systems (methods for tracking how data is used and whether use conforms to governance rules).
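Translated into a builder's terms, the framework reads like a schema. The rendering below is mine, with hypothetical field names; it is not Mozilla's specification. Its only purpose is to show that each of the four features names a concrete, checkable structure rather than a sentiment.

```python
from dataclasses import dataclass, field

@dataclass
class DataCommonsCharter:
    """Illustrative schema for the four institutional features."""
    name: str
    # Boundaries: what data is in, who contributed it, who may access it.
    included_datasets: list[str] = field(default_factory=list)
    contributors: set[str] = field(default_factory=set)
    authorized_users: set[str] = field(default_factory=set)
    # Congruent rules: different terms for different types and contexts of use.
    use_rules: dict[str, str] = field(default_factory=dict)  # purpose -> rule
    # Collective choice: the process by which contributors change the rules.
    amendment_process: str = "majority vote of registered contributors"
    # Monitoring: how often use is audited against the rules.
    audit_interval_days: int = 90

    def may_use(self, user: str, purpose: str) -> bool:
        return user in self.authorized_users and purpose in self.use_rules
```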
The practical challenges are significant. The contributors to the training-data commons number in the millions and are distributed across every jurisdiction on earth. They have no pre-existing organizational structure, no shared identity, no institutional infrastructure through which to exercise collective governance. The data they contributed has already been incorporated into models that cannot easily disaggregate individual contributions. The corporations that hold the data have both the legal position and the economic incentive to resist governance arrangements that would constrain their use of it.
These are real obstacles, but they are not unprecedented. Ostrom documented cases in which communities with no pre-existing organizational structure, no shared identity, and no institutional infrastructure developed effective governance arrangements in response to resource crises. The formation of self-governing institutions is itself an institutional achievement — one that requires entrepreneurial individuals, catalytic events, and the recognition among potential community members that their individual interests are better served by collective action than by continued isolation. Whether the conditions for this kind of institutional formation exist in the training-data context is an empirical question that cannot be answered by theory alone.
The power analysis also extends to the exclusion of entire populations from the intelligence commons. AI capabilities are distributed unevenly across the global population, and the unevenness follows the familiar contours of global inequality. Those with access to the most powerful models, the fastest connections, and the most supportive institutional environments — disproportionately located in wealthy nations — have qualitative advantages over those with less access. This inequality affects not just individual outcomes but the structure of the commons itself. If the most powerful AI tools are accessible primarily to users in wealthy countries, the knowledge those tools produce reflects the perspectives, languages, and priorities of wealthy countries, and the intelligence commons is impoverished by the absence of perspectives that exclusion prevents.
Ostrom's framework identifies equity of access as both a justice concern and a governance-effectiveness concern. A commons that excludes significant portions of its potential community is systematically deprived of the knowledge, perspectives, and adaptive capacity that excluded members would have contributed. The irrigation communities Ostrom studied were most successful when all members had a meaningful voice in governance, because the diversity of perspectives produced better rules and more robust institutions. The intelligence commons, which depends even more critically on cognitive diversity than natural resource commons depend on biodiversity, has an even stronger institutional interest in ensuring broad access.
The most urgent exclusion in the current intelligence commons is not geographic but participatory. As a scholar at Duke's Sanford School argued, the people contributing the most value to the AI ecosystem — users whose data trains models, workers whose displacement subsidizes automation, communities whose cultural production is appropriated for training — are the people with the least voice in governing how AI is developed and deployed. The exclusion is structural: the governance processes are designed by and for the actors who control the technology, and the communities most affected by the technology have no seat at the table where governance decisions are made.
Ostrom's design principles do not resolve power asymmetries. They are not designed to. They identify the institutional conditions under which commons governance can function effectively, and among those conditions is a sufficient degree of power balance to allow collective-choice arrangements to operate without being captured by dominant actors. When power asymmetries are severe, the precondition for effective commons governance is the reduction of those asymmetries — through coalition building, diversification of dependencies, legal protections for community governance rights, and the strategic cultivation of the community's political capacity.
The intelligence commons cannot be governed effectively while power over its most critical resources — the AI models, the training data, the computational infrastructure — remains concentrated in a small number of corporate actors. The design principles identify what governance must look like. The power analysis identifies what must change before that governance becomes possible. The institutional work ahead, which the final chapter addresses, requires both: the construction of governance arrangements consistent with Ostrom's principles and the political effort to create the conditions under which those arrangements can function.
Polycentricity was the concept that most fully captured Ostrom's understanding of how complex governance actually works. The term was coined by her husband and intellectual partner, Vincent Ostrom, in collaboration with Charles Tiebout and Robert Warren in their 1961 analysis of metropolitan governance, but Elinor Ostrom developed it into a comprehensive framework applicable across an extraordinary range of institutional contexts. A polycentric system is one in which multiple centers of authority coexist, overlap, and interact without any single center exercising comprehensive control. It is neither anarchy — the absence of governance — nor hierarchy — the subordination of all authority to a single apex. It is something more complex and, the evidence suggests, more effective than either: a system in which governance emerges from the interaction of multiple, partially autonomous, partially overlapping centers of decision-making.
The concept is counterintuitive for minds shaped by the two dominant governance models of the twentieth century. The state model reduces governance to hierarchy: decisions made at the top, implemented at the bottom, effectiveness dependent on the quality of centralized judgment and the fidelity of subordinate execution. The market model reduces governance to individual choice: decisions made by atomized actors responding to price signals, effectiveness dependent on signal accuracy and actor rationality. Both models treat complexity as a problem to be reduced rather than a resource to be leveraged.
Polycentricity rejects both reductions. In a polycentric system, governance is distributed across multiple centers, each with its own authority, constituency, information base, and decision-making processes. The centers interact — they compete, cooperate, conflict, and learn from each other — but they are not subordinated to a single authority. The system's governance capacity emerges not from the quality of any single decision center but from the interaction among them.
The advantages are structural, documented across Ostrom's comparative research with sufficient consistency to constitute empirical regularities rather than theoretical claims.
Resilience: when governance is concentrated in a single center, the failure of that center is catastrophic. When governance is distributed, the failure of any single center can be compensated by the continued functioning of others. The system degrades gracefully rather than collapsing catastrophically.
Adaptiveness: multiple centers experimenting with different approaches generate more information about what works than a single center implementing a single approach. The experiments are conducted at smaller scale, so the cost of failure is lower, and successes can be observed, evaluated, and adopted by other centers.
Responsiveness to local conditions: a single governance center cannot possess detailed knowledge of diverse conditions across the entire system. Multiple centers, each embedded in its own context, have the contextual knowledge that effective governance requires.
Democratic accountability: when governance is distributed, people can participate in the arrangements most relevant to their circumstances. The combination of voice and exit that polycentricity makes possible produces governance more responsive to the governed than monocentric alternatives.
The AI governance landscape is already polycentric in fact, even if it is not recognized as such in theory. National governments regulate AI through legislation and executive action. International bodies coordinate across jurisdictions. Corporations govern through terms of service, pricing, and model design. Professional communities govern through standards, norms, and credentialing. Builder communities govern through informal practices and collaborative protocols. Individual practitioners govern through personal disciplines and evaluative habits.
These governance centers overlap, interact, and conflict. A national regulation may contradict a corporation's terms of service. A professional standard may be unenforceable within an organization that has adopted different internal policies. A community norm may conflict with an individual practitioner's assessment of best practice. The conflicts are not aberrations. They are the inevitable consequence of polycentric governance, and they serve a useful function: they reveal points at which different governance arrangements are incompatible, which is precisely the information the system needs to develop more coherent coordination.
The problem with the current polycentric governance of AI is not that it is polycentric but that it is uncoordinated. The multiple centers operate largely independently, without the institutional linkages that effective polycentricity requires. The result is a governance landscape that is fragmented rather than coherent, competitive rather than cooperative, and characterized by gaps and contradictions that leave critical governance challenges unaddressed.
The 2025 study in Global Public Policy and Governance discussed in the previous chapter reached precisely this conclusion: the failures it documented among the United States, China, and the European Union were concentrated at the interfaces between jurisdictions and institutional types, and a polycentric multilevel arrangement, equipped with the coordination infrastructure that polycentricity requires, would outperform any single centralized mechanism.
Christos Makridis, writing in The Review of Austrian Economics in 2025, extended this analysis to the knowledge-problem dimension of AI governance. Drawing explicitly on Ostrom's polycentric framework, Makridis argued that the knowledge required for effective AI governance is dispersed across millions of practitioners, organizations, and communities, and that no single governance center can aggregate this knowledge with sufficient fidelity to produce effective centralized decisions. The polycentric alternative — in which governance decisions are made by the centers closest to the relevant knowledge, and the centers coordinate through institutional linkages rather than hierarchical command — is better suited to the informational structure of the AI ecosystem.
Moving from uncoordinated to coordinated polycentricity requires specific institutional infrastructure. Forums for communication between governance centers — spaces where corporate policy-makers, government regulators, professional standard-setters, and practitioner communities can exchange information about what they are doing, what is working, and what is failing. Mechanisms for conflict resolution when governance arrangements at different centers contradict each other — processes through which incompatibilities can be identified and negotiated before they produce the kind of governance gaps that leave important challenges unaddressed. Processes for mutual learning — institutional channels through which successful governance experiments can be identified, evaluated, and disseminated across the polycentric system. And frameworks for coordination — not centralized mandates but shared parameters within which individual centers exercise their authority, ensuring that the diversity of approaches remains compatible with the system's overall coherence.
The development of this infrastructure is itself a polycentric process. No single institution can design a comprehensive coordination system for the entire intelligence commons. The system must emerge from the interaction of multiple institutions, each contributing its own capacity for learning and sharing. The process is iterative, experimental, and partially unpredictable — but it is also the process through which the most effective governance systems in Ostrom's comparative database were built. The Swiss alpine communities did not design their governance in a single deliberation. The Japanese forest commons did not emerge from a master plan. They were built through decades of experimentation, conflict, resolution, and gradual institutional learning — the same process that the intelligence commons requires, compressed by the urgency of the moment into a timeline that demands more deliberate effort than historical commons governance typically required.
One of polycentricity's most powerful advantages — and the one least explored in the current AI governance discourse — is its capacity for institutional learning. In a monocentric system, learning occurs at a single center, and lessons are transmitted through directives and mandates. The system can learn no faster than the center can process information. In a polycentric system, learning occurs at multiple centers simultaneously, and lessons learned at each center are available to all the others. The system learns faster than any single center because the learning is distributed and the experiences are diverse.
This distributed learning capacity is particularly valuable given the extraordinary diversity of governance challenges the AI transition generates. The challenges facing an open-source developer community differ from those facing a community of freelance designers. The challenges of a large corporation integrating AI into existing workflows differ from those of a startup built on AI from inception. A monocentric system would have to address all of these from a single center, inevitably lacking the specialized knowledge each requires. A polycentric system allows each center to develop solutions tailored to its own challenges, and the diversity of solutions enriches the entire system's governance repertoire.
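The arithmetic of distributed learning can be simulated in a few lines. The toy below compares a single center running one governance experiment per round against eight centers whose best result diffuses to all of them; the payoff function and the frictionless diffusion are deliberately crude stand-ins for institutional reality.

```python
import random
random.seed(7)

def trial() -> float:
    """One governance experiment: an approach of random quality,
    observed through noisy evaluation."""
    return random.random() + random.gauss(0, 0.1)

def monocentric(rounds: int) -> float:
    # One center, one experiment per round; it keeps the best it has seen.
    return max(trial() for _ in range(rounds))

def polycentric(centers: int, rounds: int) -> float:
    # Each center experiments in parallel; the best result spreads to all.
    # The advantage is exactly the larger pool of shared experiments.
    return max(trial() for _ in range(centers * rounds))

print(f"monocentric best after 10 rounds: {monocentric(10):.2f}")
print(f"polycentric best after 10 rounds: {polycentric(8, 10):.2f}")
```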
The coordination of distributed learning requires what organizational theorists call absorptive capacity — the ability of one center to recognize the value of knowledge generated elsewhere, assimilate that knowledge, and apply it to its own governance challenges. This capacity is not automatic. It requires institutional investment in the channels through which knowledge flows between centers: conferences, publications, professional networks, collaborative projects. The builder community has significant but unevenly distributed absorptive capacity. Within specific sub-communities — particular open-source projects, particular professional networks — the capacity for shared learning is high. Between sub-communities, it is low. Developing mechanisms that increase absorptive capacity across the builder community is a priority for the polycentric governance of the intelligence commons, and it is a priority that can be pursued immediately, without waiting for the resolution of the larger structural questions that the final chapter addresses.
---
This final chapter turns from analysis to prospect, from description of what commons governance requires to assessment of what must be built. Ostrom was constitutionally averse to utopian prescription. She did not design ideal institutions and advocate for their implementation. She studied actual institutions, identified the principles that distinguished effective ones from ineffective ones, and argued that communities should be empowered to design their own governance arrangements in light of those principles. The institutional work ahead in the intelligence commons must proceed in the same spirit: not as the implementation of a blueprint but as a process of collective experimentation, guided by principles, informed by evidence, and driven by the community of people whose cognitive lives depend on the health of the commons they share.
The first task is constitutional. The intelligence commons needs foundational agreements that define the community, establish governance structures, and set parameters within which operational rules are made. Who belongs to the community? What rights and responsibilities does membership entail? How are governance decisions made? How can the foundational rules themselves be changed when circumstances require? These questions have not been systematically addressed. The builder community has operational rules in abundance — coding conventions, review practices, quality standards — but lacks the constitutional-level agreements that give operational rules legitimacy and coherence. Without them, operational rules are ad hoc and fragile, subject to unilateral revision by powerful actors and lacking the legitimacy that comes from collective deliberation.
A constitutional process for the intelligence commons need not begin with a grand convention. Ostrom documented cases in which constitutional-level agreements emerged incrementally — from a series of smaller agreements, each addressing a specific governance challenge, that gradually accumulated into a coherent institutional framework. The huerta tribunals were not designed at a single meeting. They evolved over centuries through the accumulation of precedents, agreements, and institutional innovations, each responding to a specific governance need. The builder community can follow a similar path: beginning with agreements about specific governance challenges — quality standards for AI-augmented output, attribution norms, developmental expectations — and building outward from these specific agreements toward a more comprehensive governance framework.
The second task is monitoring infrastructure. The intelligence commons cannot be governed without mechanisms for assessing its condition. The invisible-degradation problem documented in Chapter 4 makes this task both urgent and difficult, but not impossible. Practitioner-led quality assessment, structured mentoring that tracks developmental trajectories, community forums that aggregate observations about the commons' health — these mechanisms exist in embryonic form. What they lack is institutional support: the organizational time, the professional recognition, and the cross-community coordination that would make them systematic rather than incidental.
A 2025 study published in Artificial Intelligence, building computational models of Ostrom's Institutional Analysis and Development framework, demonstrated that monitoring rules can be formalized as part of the institutional architecture itself — embedded in the structure of interactions rather than appended as an afterthought. The principle is directly applicable: monitoring should not be an overhead cost imposed on productive work but a constitutive feature of the governance system through which productive work is organized. Organizations that treat code review, peer assessment, and mentoring as core governance functions rather than administrative burdens are building monitoring infrastructure. Organizations that treat them as costs to be minimized are degrading the commons.
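The design idea, monitoring as architecture rather than afterthought, has a direct software expression. In the sketch below (names hypothetical, the pattern borrowed from type-driven design rather than from the study's actual models), publication requires evidence of review by construction: there is no code path to an unreviewed release.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Reviewed:
    """Evidence of review, obtainable only by running a reviewer."""
    work: str
    reviewer: str

def review(work: str, reviewer: str, check: Callable[[str], bool]) -> Reviewed:
    if not check(work):
        raise ValueError(f"{reviewer}: work failed review")
    return Reviewed(work=work, reviewer=reviewer)

def publish(item: Reviewed) -> str:
    # publish() accepts only Reviewed items: the rule lives in the
    # architecture, not in a policy document someone must remember.
    return f"published (reviewed by {item.reviewer}): {item.work[:40]}"

checked = review("draft with AI-assisted sections", "senior-colleague",
                 lambda w: "unverified claim" not in w)
print(publish(checked))
```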
The third task is conflict-resolution infrastructure. The intelligence commons is generating governance conflicts faster than it is developing mechanisms to resolve them. Disputes about appropriate AI use, about attribution, about quality standards, about the allocation of costs and benefits — these conflicts are multiplying, and most are being resolved badly or not at all. The development of accessible, low-cost, contextually informed conflict-resolution mechanisms — forums, processes, mediating institutions — is among the most immediately actionable governance priorities.
The fourth task is the cultivation of social capital. Social capital — the networks of trust, reciprocity, and shared norms that enable collective action — is the institutional raw material from which governance is built. The builder community has significant social capital within specific sub-communities but sparse connections between them. Building cross-community relationships — through shared projects, cross-domain forums, collaborative governance experiments — is essential for developing the collective-action capacity that governance at scale requires.
The fifth task is educational. Educational institutions must cultivate the capabilities that the ascending friction thesis identifies as essential: judgment, evaluative discipline, the capacity for independent assessment of AI-augmented output. These capabilities develop through extended engagement with difficulty — through friction-rich developmental experiences that AI tools, used carelessly, tend to eliminate. Educational institutions must resist the pressure to integrate AI in ways that reduce developmental friction, and instead design pedagogies that use AI to relocate friction upward, challenging students to build the higher-order capabilities that the intelligence commons needs.
The sixth task is political. The power asymmetries documented in Chapter 8 cannot be addressed through institutional design alone. They require political action: organized advocacy for the recognition of community governance rights, participation in regulatory processes, coalition building among practitioner communities, and the strategic deployment of the expertise that practitioners uniquely possess. The Ostrom Workshop at Indiana University, the Mozilla Foundation's data-commons initiative, and the emerging scholarly literature on polycentric AI governance all represent institutional resources that the builder community can leverage. But leveraging them requires organized effort — the willingness to invest time and energy in governance work that has no immediate payoff but that is essential for the long-term health of the commons.
These six tasks are not sequential. They must be pursued in parallel, because each depends on the others. Constitutional agreements require the social capital that makes collective deliberation possible. Monitoring infrastructure requires the conflict-resolution mechanisms that can adjudicate disagreements about monitoring standards. Educational reform requires the political capacity to influence educational policy. The tasks are systemic, and the institutional development must be correspondingly systemic — not a series of independent projects but an integrated effort to build the governance infrastructure that the intelligence commons requires.
The evidence from Ostrom's research suggests that this kind of integrated institutional development is possible. Communities with far fewer resources, far less education, and far less access to information than the AI builder community have developed governance arrangements of extraordinary sophistication and durability. The Swiss alpine villages. The Japanese forest commons. The Philippine irrigation communities. The Maine lobster fisheries. Each of these communities faced governance challenges that seemed overwhelming at the outset — challenges that the dominant paradigms of their time declared unsolvable without either privatization or state control — and each developed institutional solutions that neither paradigm had predicted.
The intelligence commons is the most complex resource system that human civilization has yet attempted to govern. Its governance requires institutional arrangements of corresponding complexity — not the brittle complexity of a vast bureaucracy but the organic complexity of a polycentric system in which multiple governance centers, each adapted to its own context, interact, coordinate, and learn. The polycentric approach does not promise governance perfection. It promises governance that is as diverse, adaptive, and resilient as the commons it serves.
The tragedy of the commons is not inevitable. It never was. But avoiding it requires the work — the patient, institutional, often unglamorous work — of building and maintaining the governance arrangements that keep a shared resource productive for the community that depends on it. That work is the builder community's next project. The design principles are identified. The precedents are documented. The urgency is clear.
The institutional work begins now.
---
Eight principles on a whiteboard. That is what stays with me.
Not the grand theoretical architecture — the polycentric governance, the nested enterprises, the Institutional Analysis and Development framework. Those matter. They matter enormously. But what keeps returning, what I find myself reaching for at two in the morning when Claude and I are deep in something and the question surfaces of whether anyone is tending to what we are all doing, is the image of a woman in Bloomington, Indiana, writing eight design principles derived not from philosophy but from watching what actual communities actually did when they needed to keep something alive.
Clearly defined boundaries. Rules that fit local conditions. Collective choice. Monitoring. Graduated sanctions. Conflict resolution. Recognition of rights to organize. Nested governance.
These are not abstractions. They are the institutional equivalent of what I described in The Orange Pill as sticks and mud — the materials from which dams are built. Ostrom spent forty years proving that communities can build dams that hold. Not sometimes. Not under ideal conditions. Routinely, across cultures, across centuries, in contexts that the experts insisted would produce only tragedy.
The five resource flows that this book identifies — knowledge, skills, attention, trust, institutions — are the waters I swim in every day. Every prompt I type draws on the knowledge commons. Every hour I spend with Claude instead of writing by hand affects the skills commons. Every page this book adds to the world competes for space in the attention commons. Every claim I make about the collaboration between human and machine either deposits trust or erodes it. And every governance experiment my team runs at Napster — every quality review, every attribution conversation, every debate about when to use the tool and when to put it down — contributes to or draws from the institutional commons.
The concept that arrested me most was invisible degradation. Not because it was new — I described the Deleuze fabrication in The Orange Pill, the moment I discovered that Claude had produced a passage so polished and so confident that I nearly published a reference to a philosopher's work that does not exist. But Ostrom's framework gave that moment an institutional name and an institutional weight. The degradation is invisible because it is masked by surface quality. The community loses the capacity to detect the loss at precisely the rate at which the loss accelerates. The fish stocks are declining, but the nets are getting better, so the catch stays the same until one morning the ocean is empty.
That image keeps me up. Not because I think the intelligence commons is doomed — Ostrom proved that tragedy is not destiny — but because I know that avoiding it requires work that nobody is incentivized to do. The monitoring, the mentoring, the slow construction of norms and standards and conflict-resolution mechanisms. This is institutional plumbing. It is unglamorous. It does not demo well. No one posts about it at three in the morning.
And yet.
The Swiss alpine villages have been governing their commons since 1517. The Valencia irrigation tribunals have been adjudicating disputes since the medieval period. What those communities built, with far fewer resources and far less information than the builder community possesses, is proof that institutional creativity is a renewable resource — that human beings, when they understand what is at stake and have the freedom to organize, can build governance arrangements of astonishing durability and sophistication.
The AI transition is not a technology story. It is an institutional story. The technology has already arrived. The institutions have not. And the gap between the two — the distance between the power of the tools and the maturity of the governance arrangements through which the tools are directed — is the most dangerous gap in the current landscape.
Ostrom showed that the gap can be closed. Not by markets. Not by governments. By communities — messy, contentious, imperfect communities of people who share a resource and choose, deliberately, to build the arrangements that keep it alive.
That is the work. The design principles are on the whiteboard. The precedents are in the record. The question is whether we will do what the Swiss villagers did, what the Valencia irrigators did, what the Maine lobster fishers did — not because they were saintly or unusually cooperative, but because they understood, with the clarity that comes from depending on a shared resource for your livelihood and your children's livelihoods, that the alternative to governance is loss.
We understand that now. The question is what we build next.
The AI revolution runs on shared resources nobody agreed to share. The training data came from millions of contributors who never consented. The skills pipeline is thinning as entry-level work disappears. The knowledge base is being contaminated by confident, polished, AI-generated misinformation. And the dominant governance debate — market versus state, deregulate versus regulate — offers exactly two options for a problem that requires a third. Elinor Ostrom spent forty years proving that third option exists. Communities worldwide have governed shared resources for centuries without privatization or central control. Her eight design principles, derived from fieldwork across six continents, describe what works — and what collapses — when people share something they all depend on. This book applies her institutional framework to the intelligence commons with the specificity the moment demands. The tragedy is not inevitable. It never was. But avoiding it requires governance that no one is building yet.

A reading-companion catalog of the 16 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Elinor Ostrom — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →