Eric Raymond's The Cathedral and the Bazaar (a 1997 essay, expanded into a 1999 book) described open-source development as a novel form of large-scale coordination through generalized reciprocity. Thousands of loosely affiliated contributors produced software of extraordinary quality without command-and-control structures, relying instead on norms — transparency, meritocracy, attribution, the obligation to contribute back — that participants internalized and enforced socially rather than legally. The system worked because the norms were self-reinforcing: each contribution modeled the behavior, reinforced the expectation, and added to the public good.
Elinor Ostrom's framework for commons governance provides the analytical foundation. Successful commons require clear boundaries (who is a community member?), collective decision-making, monitoring, graduated sanctions, and recognition by external authorities of the community's right to set its own rules. Open-source communities developed all of these: contributor guidelines, maintainer hierarchies, code-of-conduct enforcement, the social sanction of being blocked from contributing. The governance was informal, distributed, and effective — as long as participation remained above the threshold that made community norms self-enforcing.
AI-generated code bypasses the entire governance structure. The developer who uses Claude to produce a function that would have been imported from an open-source library receives the productive benefit without participating in the community that produced the knowledge the model was trained on. No contribution to the repository. No engagement with maintainers. No adherence to community norms. The developer is a consumer of knowledge infrastructure, not a participant in its production or maintenance. The shift from participant to consumer is invisible in the individual case and catastrophic in the aggregate.
The feedback loop is particularly troubling. Large language models are trained on open-source code, documentation, and the accumulated knowledge-sharing of millions of developers. If AI tools reduce the incentive for developers to contribute to open-source, the quality and volume of future training data decline, which eventually degrades AI output quality, which increases pressure on the remaining human knowledge-sharing to fill gaps. The commons that AI extracts from is the commons AI is simultaneously eroding — a model collapse risk at civilizational scale.
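The feedback loop described above can be sketched as a toy simulation. Every quantity here — the adoption trajectory, the 0.8 contribution-suppression factor, the 0.3 rate at which training-data quality tracks contribution levels — is an illustrative assumption chosen to make the dynamic visible, not an empirical estimate.

```python
# Toy model of the commons-erosion feedback loop: rising AI adoption
# suppresses open-source contribution, which degrades future training
# data, which degrades AI output quality. All parameters are
# illustrative assumptions, not measurements.

def simulate(periods=10, adoption_step=0.1):
    history = []
    ai_adoption = 0.0
    data_quality = 1.0  # normalized quality of the knowledge commons
    for t in range(periods):
        ai_adoption = min(1.0, ai_adoption + adoption_step)
        # Assumed: contribution falls in proportion to AI adoption.
        contribution = max(0.0, 1.0 - 0.8 * ai_adoption)
        # Assumed: commons quality drifts toward the current contribution level.
        data_quality += 0.3 * (contribution - data_quality)
        # Assumed: AI output quality simply tracks its training data.
        ai_output_quality = data_quality
        history.append((t, round(contribution, 2), round(ai_output_quality, 2)))
    return history

for t, c, q in simulate():
    print(f"period {t}: contribution={c}, ai_output_quality={q}")
```

Under these assumptions both contribution and AI output quality decline monotonically, with quality lagging contribution — the lag is why the erosion is easy to miss while it is happening.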
The open-source movement emerged in the 1980s and 1990s as a principled alternative to proprietary software, crystallizing around figures like Richard Stallman (Free Software Foundation, 1985) and formalized in licenses like the GPL. The dot-com era and GitHub's 2008 launch democratized participation, producing the explosion of repositories and contributors that characterized the 2010s. The erosion pattern became visible in 2023–2024 as AI coding assistants reached widespread adoption: declining contribution rates, aging maintainer populations, and the quiet withdrawal of the casual contributors who had sustained the long tail of the ecosystem.
The commons depends on contribution, not consumption. Open-source works because enough people contribute to sustain the infrastructure everyone consumes. When AI shifts the ratio toward pure consumption, the commons collapses.
Norms are practiced or lost. The norm of reciprocal knowledge-sharing is reinforced every time a developer contributes. It weakens every time a developer receives without contributing. AI accelerates the weakening.
Training data extraction is not participation. AI models trained on open-source code benefit from the commons without contributing to its maintenance — a form of use that Ostrom's framework identifies as extractive rather than sustainable.
Recovery requires structural redesign. Voluntary appeals to contribute more will fail under productivity pressure. Sustainable participation requires embedding contribution in professional expectations, compensating maintainers, and designing AI tools that facilitate rather than replace community engagement.