The degradation mechanism is distinct from anything Ostrom observed in natural-resource commons. In a fishery, overuse is visible — fewer fish, smaller fish, declining catches per unit of effort. In the knowledge commons, degradation is invisible because it is masked by the surface quality of AI-generated output. The scholarly literature accumulates citations to sources that an AI confabulated. Search results become less reliable. The cost of verification rises for every user, including those who contributed no AI-generated material themselves.
Model collapse — the phenomenon in which AI systems trained on AI-generated content degrade in capability — is the technical manifestation of knowledge-commons degradation affecting the AI systems themselves. A degraded commons impoverishes not just the humans who depend on it but the models that learned from it.
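The mechanism can be made concrete with a toy simulation (a sketch, not a model of any real training pipeline; the function name and parameters are illustrative). Each "generation" fits a Gaussian to a finite sample drawn from the previous generation's fit — the statistical analogue of training on your own output. Finite-sample estimation error compounds multiplicatively, and the fitted variance drifts toward zero: the distribution's diversity is lost even though every individual generation looks locally plausible.

```python
import numpy as np

def collapse_demo(n_samples=50, generations=1000, seed=0):
    """Toy illustration of model collapse.

    Start from a 'human-generated' distribution N(0, 1). At each
    generation, draw a finite synthetic sample from the current fit
    and refit the Gaussian to that sample alone. Returns the history
    of fitted standard deviations.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0              # the original distribution
    history = [sigma]
    for _ in range(generations):
        data = rng.normal(mu, sigma, n_samples)    # synthetic "training set"
        mu = data.mean()                           # refit on own output
        sigma = data.std(ddof=1)
        history.append(sigma)
    return history

history = collapse_demo()
# The fitted spread shrinks dramatically relative to the original sigma = 1.
```

In log-space the fitted variance performs a random walk with negative drift (by Jensen's inequality, E[log(χ²ₖ/k)] < 0), so collapse toward zero spread is not an accident of one run but the expected fate of the recursion — a compact statement of why training on one's own output erodes the resource for everyone downstream.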
The training data question — who contributed the material, with what consent, and with what expectations about its future use — is inseparable from knowledge commons governance. The contributors constituted the commons under one set of governance assumptions; the extraction for AI training occurred under a different set, without the participation of the community whose contributions constitute the resource.
The knowledge commons has been studied for decades as a distinct category — Hess and Ostrom's 2007 volume Understanding Knowledge as a Commons established the framework. The AI transition has transformed the nature of the degradation mechanism from access restriction (the historical concern) to quality contamination (the current concern), requiring adaptation of the framework to novel conditions.
Degradation through contamination. The knowledge commons erodes through the introduction of unreliable content, not through extraction.
Invisible to surface inspection. AI-generated errors are masked by fluent prose and plausible structure.
Rising verification cost. Every user pays more to find reliable information, including those who contributed no AI content themselves.
Recursive threat. Model collapse extends the degradation back to the AI systems themselves, impoverishing both human and machine cognition.