Between 1750 and 1850, the British Parliament passed thousands of Enclosure Acts, converting common land into private property. The productivity gains were real. The landowners prospered. The displaced commoners — the people who had depended on common land for grazing, fuel, and subsistence — became wage laborers in the factories that the enclosure-driven agricultural surplus made possible. They became, in E.P. Thompson's formulation, the English working class. The Lessig–On AI volume argues that the parallel to AI training is not metaphorical but structural. The accumulated text of human civilization — the intelligence commons — is being ingested by AI companies, processed through proprietary architectures, and returned as commercial products that compete with the works that constituted the commons. The inputs are treated as free raw material. The outputs are treated as proprietary product. The value flows in one direction.
Earlier enclosures of the intellectual commons were partial. Copyright law enclosed specific works for specific periods. Patent law enclosed specific inventions for specific terms. The boundaries of enclosure were visible and contestable. You could identify what was enclosed. You could wait for the term to expire. You could navigate around the enclosure through the public domain.
The AI enclosure is total. The training data for a large language model includes effectively the entire digitized corpus of human text. The enclosure does not respect the boundaries previous intellectual property regimes established. It does not distinguish between copyrighted and public domain, between recent and ancient, between the perspectives of the powerful and the marginalized. It ingests everything and processes it into a proprietary system whose internal workings are opaque.
Lessig's position on the copyright dimension is nuanced and has surprised observers. On training, he has argued that 'using creative work to learn something, whether you're a machine or not, should not be a copyright event.' Learning is not copying. On outputs, however, his position shifts. He has argued that AI-generated works should be copyrightable — but only on the condition that the AI system register the work with provenance metadata sufficient to identify the generator, the training corpus, and the human direction involved. Copyright is available, but transparency is mandatory.
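Lessig describes the registration condition only in general terms; he does not specify a schema. As a purely illustrative sketch, the three elements he names — generator, training corpus, human direction — could be captured in a record like the following, where every field name and the registry format are assumptions of this example, not any actual proposed standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ProvenanceRecord:
    """Hypothetical registration record for an AI-generated work.

    Field names are illustrative only: Lessig names the required
    disclosures (generator, training corpus, human direction) but
    proposes no concrete schema.
    """
    work_title: str
    generator: str        # the model or system that produced the work
    training_corpus: str  # identifier or description of the training data
    human_direction: str  # the prompt or creative direction supplied

    def to_registration_json(self) -> str:
        # Serialize for submission to a (hypothetical) copyright registry.
        return json.dumps(asdict(self), sort_keys=True)

record = ProvenanceRecord(
    work_title="Untitled landscape",
    generator="example-model-v1",
    training_corpus="public web crawl, 2023 snapshot",
    human_direction="prompt: a pastoral scene after enclosure",
)
print(record.to_registration_json())
```

The point of the sketch is structural: copyright attaches only if the record is complete, making transparency the price of protection rather than an optional disclosure.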
The deeper enclosure is not of specific works but of the capacity to think. When the accumulated knowledge of civilization is processed into proprietary models, the models become the gateway through which that knowledge is accessed. Users asking Claude about history, science, philosophy, or law are accessing the intelligence commons through a private intermediary whose architecture determines what they find, how it is framed, what is emphasized, and what is omitted. Intermediation is the enclosure that matters most, because it converts not just content but cognition into a privately controlled channel.
Lessig founded Creative Commons in 2001 and published Free Culture in 2004 — both institutional responses to the enclosure of digital culture. The explicit application of the enclosure framework to AI training emerged in his 2023–2024 lectures and interviews, informed by Elinor Ostrom's work on commons governance and by E.P. Thompson's historical accounts of the original enclosure. The Lessig–On AI volume synthesizes these threads into a structural diagnosis of AI training as enclosure requiring multi-modal governance response.
Structural parallel, not metaphor. The enclosure of common land and the enclosure of the intelligence commons operate through identical structural logic: conversion of shared resource into private property through a process that benefits the enclosers and displaces those who depended on open access.
Total rather than partial enclosure. Unlike copyright or patent, AI training does not create navigable boundaries within a shared domain; it ingests everything.
Learning is not copying. Training on publicly available material is not a copyright event in Lessig's view — the act of pattern extraction is categorically different from reproduction.
Output copyright should require provenance. AI-generated works should be copyrightable on the condition that the system registers the work with transparent metadata about generation.
The deeper enclosure is cognitive. What is enclosed is not merely specific works but the channel through which users access the accumulated thought of civilization.
Lessig's position occupies uncomfortable ground on multiple sides. Copyright maximalists object that training without licensing is theft. Open-access advocates object that requiring provenance on outputs creates a new form of enclosure. Creator coalitions object that the 'learning is not copying' argument favors large AI companies at the expense of individual artists whose work is extracted without compensation. Lessig's response, consistent across his career, is that the governance question is not whether the commons will be used — it will, inevitably — but under what terms, with what transparency, and with what benefit-sharing mechanisms. The framework is not a resolution but a structure for ongoing negotiation.