The AI commons names the shared resource base from which the AI industry was constructed: decades of publicly funded research, open-source code repositories, the creative work scraped for training data, the collective intellectual labor of millions of writers, coders, artists, and scholars. The models that generate AI capability were trained on this commons. The value extracted from it has flowed, disproportionately, to private firms.
Recognizing AI as commons-based is itself a conceptual intervention. The conventional framing positions AI as the product of corporate innovation — something built by Anthropic, OpenAI, Google, Meta. The commons framing positions AI as the product of collective human effort — something Anthropic and its peers harvested, processed, and monetized. Both framings are partially true. The commons framing makes visible the labor and knowledge inputs that the corporate framing erases.
Elinor Ostrom's work on commons governance, recognized with the 2009 Nobel Memorial Prize in Economic Sciences, demonstrated that commons can be sustainably managed when the user community has mechanisms for setting rules, monitoring use, and sanctioning violations. The AI commons currently lacks these mechanisms: training data is extracted without consent, governance of the resulting models is exclusively corporate, and the commons continues to be consumed without replenishment.
Proposals for AI commons governance include open-weight model requirements (releasing model weights under permissive licenses), mandatory licensing for training data use, public compute infrastructure (analogous to public libraries), and data trusts that return commercial value to source communities. None of these has yet been implemented at scale.
For Raworth's distributive design, the AI commons is the structural test case. A doughnut-compatible AI economy would treat the commons as a commons — collectively owned, democratically governed, replenished rather than depleted. The current economy treats it as an extraction frontier. Redirecting the trajectory requires institutional infrastructure that does not yet exist.
The concept of the commons has roots in medieval common lands and in the open-source software movement of the late twentieth century. Ostrom's work demonstrated the conditions under which commons governance succeeds, and Lawrence Lessig's The Future of Ideas (2001) extended the framework to digital commons. The AI-specific application has emerged since 2023, with growing attention as the scale of training data extraction became visible.
Collective provenance. AI was built from a commons of public research, open code, and creative labor — not from pure corporate innovation.
Extraction without replenishment. The commons is being consumed faster than it is being replenished, with value captured privately.
Ostrom conditions. Sustainable commons governance requires specific institutional mechanisms the AI commons currently lacks.
Distributive design frontier. Governing the AI commons as commons is the central structural test of the doughnut's applicability to AI.