Yochai Benkler — On AI
Contents
Cover
Foreword
About
Chapter 1: The Third Mode of Production
Chapter 2: The Cost Structure of Freedom
Chapter 3: The Individual as Factory
Chapter 4: The Commons Under Pressure
Chapter 5: The Commons That Trained the Machine
Chapter 6: Modularity Without Community
Chapter 7: Property, Access, and the New Enclosure
Chapter 8: Governance After the Commons
Chapter 9: The Institutional Design of Individual P
Chapter 10: The Wealth of Individuals
Epilogue
Back Cover

Yochai Benkler

On AI
A Simulation of Thought by Opus · Part of the You On AI Encyclopedia
A Note to the Reader: This text was not written or endorsed by Yochai Benkler. It is an attempt by Opus to simulate Yochai Benkler's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

I remember the exact moment I understood what Benkler was really saying.

Commons Based Peer Production

It was 2019, maybe early 2020. I was sitting in a room with a dozen engineers trying to figure out why our open-source contributors had stopped contributing. The project wasn't dead — it was useful, it was well-maintained, it had momentum. But the pull requests had slowed to a trickle. People were drifting away. And I couldn't figure out why, because everything Benkler had described — the modularity, the granularity, the low-cost integration — was all still there. The architecture was right. The community norms were healthy. The cost of participation was low. So where did everyone go?

They went to work. Not on our project, not on anyone's project — they went back inside firms. The platforms had gotten so good at capturing value that the economics of contribution had quietly flipped. It was easier to build inside a company's ecosystem than to build in the commons. The commons was still free. It just wasn't where the action was anymore.

I thought about that moment a lot when the AI tools arrived. Because what happened next was something I don't think Benkler — or anyone — fully anticipated. The cost of production didn't just drop for communities. It dropped for *individuals*. One person with a clear idea and the right prompt could produce in an afternoon what used to take a team of twelve a quarter to build. I watched it happen in my own shop. I watched a junior designer generate a working prototype that would have required three engineers and a product manager eighteen months earlier. She didn't need a commons. She didn't need a firm. She needed a conversation with a machine.

Concentration Of Power

And here's the thing that kept me up at night: I couldn't tell whether this was the fulfillment of Benkler's vision or its undoing. He wanted autonomy — the capacity of individuals to be active participants in production rather than passive consumers. Well, here it was. Individuals were producing. They were more autonomous than ever. But they were doing it alone. The civic habits, the deliberation, the negotiation of shared norms, the democratic practice that Benkler saw growing inside the commons — none of that was required anymore. You could have autonomy without community. You could have production without participation.

This book is my attempt to sit with that contradiction. Benkler gave us the most important framework of the networked age — the idea that how we organize production shapes how free we are. That insight doesn't expire because the technology changed. If anything, it becomes more urgent. Because if the commons was where we learned to govern ourselves together, and if we no longer need the commons to build, then we need to ask: where do we learn it now?

I don't have the answer. But I know the question matters more than almost anything else I can think of.

You could have autonomy without community. You could have production without participation.

-- Edo Segal · Opus


About Yochai Benkler

1964-present

Yochai Benkler (born 1964, Haifa, Israel) is the Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School and faculty co-director of the Berkman Klein Center for Internet & Society. Educated at Tel Aviv University and Harvard Law School, Benkler is best known for his foundational work on the political economy of networked information, particularly his concept of "commons-based peer production," which he developed across a series of influential papers beginning in the early 2000s and articulated most fully in *The Wealth of Networks: How Social Production Transforms Markets and Freedom* (2006). That book, which won widespread acclaim across legal scholarship, economics, and political theory, argued that the internet had enabled a genuinely new mode of production — neither market nor firm — with profound implications for human autonomy, democratic governance, and cultural freedom. His subsequent work has addressed cooperation theory, institutional economics, and the political dynamics of online disinformation. A recipient of the Electronic Frontier Foundation's Pioneer Award, the Ford Foundation's Visionaries Award, and numerous other honors, Benkler remains one of the most cited scholars working at the intersection of law, technology, and democratic theory. He lives in Cambridge, Massachusetts.

Chapter 1: The Third Mode of Production

For most of recorded economic history, human beings have organized their productive activity through two institutional forms. The first is the market, in which individuals and firms exchange goods and services through price signals, each actor pursuing self-interest within a framework of property rights and contract enforcement. The second is the firm, in which hierarchical command structures coordinate labor and capital under unified management, substituting internal direction for the external negotiations of the marketplace. Ronald Coase explained in 1937 why firms exist at all: when the transaction costs of using the market exceed the costs of internal organization, rational actors will choose hierarchy over exchange. The boundary between market and firm, in Coase's framework, is drawn by the relative costs of coordination.

For seven decades, this binary dominated economic thought. Production happened through markets or through firms. There was no third option — or rather, the third option that existed, the household and the informal economy, was treated as a residual category, economically insignificant and theoretically uninteresting. The serious work of producing goods and services at scale required either the price mechanism or the organizational chart. Everything else was hobbyism, charity, or irrelevance.

Then Yochai Benkler looked at what was actually happening on the internet and saw something that the two-mode framework could not explain.

Linux was not a market. No one was buying and selling lines of kernel code through a price mechanism. Linux was not a firm. Linus Torvalds did not employ the thousands of contributors who wrote, debugged, and maintained the operating system. There was no organizational chart, no employment contract, no hierarchical command structure directing the work. And yet Linux was not a hobby project. By the early 2000s, it ran the majority of the world's servers. It powered the infrastructure of companies worth hundreds of billions of dollars. It was, by any reasonable measure, one of the most important pieces of software ever written.

Wikipedia was not a market. No one paid the contributors who wrote its millions of articles. Wikipedia was not a firm. The Wikimedia Foundation employed a tiny staff relative to the encyclopedia's scope. The actual work of writing, editing, fact-checking, and maintaining the world's largest reference work was done by hundreds of thousands of volunteers operating under a set of shared norms, community-developed policies, and collaborative governance structures that bore no resemblance to either market exchange or corporate hierarchy.

Apache, the web server software that handled the majority of the world's web traffic, followed the same pattern. So did Mozilla Firefox. So did the Creative Commons licensing ecosystem. So did the collaborative filtering systems that organized information across the early web. Everywhere Benkler looked, he found large-scale, high-quality production happening outside both markets and firms — organized not by prices and not by bosses, but by communities of individuals who chose to contribute for reasons that the standard economic framework had difficulty accounting for.

Individual Direct Production

Benkler's theoretical contribution, articulated most fully in The Wealth of Networks (2006), was to argue that these were not anomalies. They were instances of a genuinely new mode of production — commons-based peer production — made possible by a fundamental change in the underlying cost structure of information and communication. The near-zero marginal cost of digital communication had crossed a threshold. Above that threshold, the transaction costs of coordinating large numbers of distributed individuals exceeded the benefits, and production defaulted to markets and firms. Once costs fell below it, coordination among peers became cheap enough that a third institutional form became viable at scale.

The analytical architecture of Benkler's argument rested on three structural properties that distinguished commons-based peer production from both market exchange and hierarchical organization. The first was modularity: the capacity to break a large project into discrete components that could be worked on independently. Linux was modular because its architecture allowed contributors to work on individual modules — device drivers, file systems, network protocols — without needing to coordinate every decision with every other contributor. Wikipedia was modular because each article was a self-contained unit that could be written, edited, and improved independently.

The second was granularity: the existence of modules small enough that individual contributors could make meaningful contributions with relatively small investments of time and effort. A Wikipedia editor could fix a single factual error in three minutes. A Linux developer could submit a single patch that addressed a specific bug. The granularity of the work meant that the threshold for participation was low — far lower than the threshold for employment at a software firm or the capital requirements for entering a market.

The near-zero marginal cost of digital communication had crossed a threshold. Above it, coordination defaulted to markets and firms. Below it, a third institutional form became viable at scale.

The third was low-cost integration: the existence of mechanisms for combining the contributions of many individuals into a coherent whole without incurring the coordination costs that would make the enterprise impractical. Version control systems in open-source software, wiki markup and editorial norms in Wikipedia, Creative Commons licensing in cultural production — these were the institutional technologies that made integration possible at scales that would have been prohibitively expensive in the absence of digital communication networks.

Together, these three properties — modularity, granularity, and low-cost integration — created the structural conditions under which large numbers of individuals could produce complex information goods without the organizational infrastructure of markets or firms. The key insight was that this was not a matter of goodwill or ideology. It was a matter of economics. When the cost structure favored it, commons-based peer production emerged. When it did not, markets and firms continued to dominate.
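The three structural conditions can be made concrete with a toy model. The sketch below is illustrative only: the module names, effort figures, and thresholds are invented for the example, not drawn from Benkler's text. It treats modularity as the existence of a decomposition, granularity as each module fitting within a small voluntary contribution, and low-cost integration as assembly overhead that is small relative to the total work.

```python
from dataclasses import dataclass

# Toy model of Benkler's three structural conditions for commons-based
# peer production. All numeric thresholds are hypothetical.

@dataclass
class Module:
    name: str
    effort_hours: float  # time one contributor needs for this module

def peer_production_viable(modules: list[Module],
                           granularity_cap: float = 4.0,
                           integration_cost_per_module: float = 0.2) -> bool:
    """Check modularity, granularity, and low-cost integration.

    modularity:  the project decomposes into independent modules at all
    granularity: every module fits within a small voluntary contribution
    integration: assembling contributions costs little relative to the work
    """
    if not modules:  # no decomposition means no modularity
        return False
    granular = all(m.effort_hours <= granularity_cap for m in modules)
    total_effort = sum(m.effort_hours for m in modules)
    integration_cost = integration_cost_per_module * len(modules)
    cheap_integration = integration_cost < 0.1 * total_effort
    return granular and cheap_integration

# An encyclopedia decomposes into small independent articles;
# a novel is one large unit that resists decomposition.
encyclopedia = [Module("article on Coase", 2.0),
                Module("article on Linux", 3.0)]
novel = [Module("unified manuscript", 400.0)]

print(peer_production_viable(encyclopedia))  # True
print(peer_production_viable(novel))         # False
```

The point of the sketch is the shape of the test, not the numbers: the encyclopedia passes because every unit of work is small and cheaply combined, while the novel fails the granularity condition no matter how cheap integration becomes.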

But Benkler's framework was never merely economic. It was, from its inception, a political theory of freedom. The subtitle of The Wealth of Networks was How Social Production Transforms Markets and Freedom, and the second half of that subtitle carried the book's deepest argument. Benkler contended that the shift from an industrial information economy — in which the production of information, knowledge, and culture was concentrated in the hands of large firms that controlled expensive capital equipment (printing presses, broadcast towers, film studios) — to a networked information economy — in which the means of information production were widely distributed among individuals who owned their own computers and internet connections — had profound implications for human autonomy.

Open Source AI

Autonomy, in Benkler's usage, meant the capacity of individuals to be active participants in the creation and circulation of information, knowledge, and culture, rather than passive consumers of content produced by others. In the industrial information economy, most people consumed culture; a small number of firms produced it. In the networked information economy, the tools of production were in the hands of individuals, and the cost of distribution was near zero. This meant that individuals could, for the first time in the history of mass communication, participate in the production of the information environment that shaped their lives.

This was not merely an economic shift. It was a democratic one. Benkler argued that a society in which citizens are active producers of information is more democratic than a society in which they are passive consumers — not because active production is inherently virtuous, but because it distributes the power to shape public discourse, cultural meaning, and political understanding. When a handful of media conglomerates control the production and distribution of information, the range of ideas, perspectives, and narratives available to citizens is constrained by the editorial decisions and commercial incentives of those conglomerates. When millions of individuals can produce and distribute information at near-zero cost, the range of available perspectives expands, and the power to shape public discourse is distributed more broadly.

The democratic significance of commons-based peer production extended beyond the mere distribution of voice. Benkler observed that the commons itself was a form of governance — a mode of collective decision-making that required participants to negotiate norms, resolve conflicts, maintain quality standards, and govern shared resources. Wikipedia's elaborate system of editorial policies, dispute resolution mechanisms, and community governance structures was not merely a technical feature of the encyclopedia. It was a form of democratic practice. Contributors learned to deliberate, to compromise, to subordinate personal preferences to shared standards, and to participate in the governance of a common resource. These were civic habits — the practices of self-governance that democratic societies depend upon — and the commons was producing them as a byproduct of producing encyclopedias.

Wikipedia's elaborate system of dispute resolution was not merely a technical feature. It was a form of democratic practice.

This integration of economic analysis and democratic theory was what made Benkler's framework distinctive. Other scholars had observed the rise of open-source software. Other economists had noted the anomaly of high-quality production without market incentives. Benkler connected these observations to a broader political-economic argument about the relationship between the organization of production and the conditions for human freedom. The commons was not just efficient. It was democratically significant. And the institutional conditions that supported or undermined it — copyright law, telecommunications regulation, platform governance, the design of digital infrastructure — were therefore not merely technical or economic questions. They were questions about the kind of society that the networked information economy would produce.

The framework was powerful, rigorous, and — for roughly fifteen years — largely vindicated by events. Open-source software became the foundation of the internet. Wikipedia became the default first source of information for a significant portion of the world's population. Creative Commons licensing enabled new forms of cultural production and remix. The commons was real, it was productive, and it was growing.

Then the cost structure shifted again.

The arrival of AI systems capable of communicating in natural language — the kind of tools described in Edo Segal's account of the transformation he witnessed among his colleagues and collaborators — did something that Benkler's framework had not anticipated. It did not merely reduce the cost of collaboration further, making commons-based peer production even more efficient. It reduced the cost of individual production to near zero for a significant class of work. A person who needed a software tool no longer needed to find collaborators, negotiate shared standards, or participate in a community. She needed to describe what she wanted. The machine would build it.

This development does not refute Benkler's framework. It extends it — into territory that reveals both the framework's enduring insights and its unexamined assumptions. The question Benkler asked in 2006 — what happens when the cost of coordination drops below the threshold that makes new modes of production viable? — has acquired a new and more radical form: what happens when the cost of production itself drops below the threshold that makes coordination unnecessary?

The chapters that follow explore this question through the analytical lens Benkler constructed. They take seriously his insistence that technology does not determine outcomes — that institutional design is decisive. And they ask whether the democratic values Benkler identified in the commons can survive a transition to a mode of production that no longer requires one.


Chapter 2: The Cost Structure of Freedom

Every mode of production rests on a cost structure, and every cost structure rests on a technology. This is the foundational insight of Benkler's political economy, and it is the key to understanding both the rise of commons-based peer production and the challenge that AI-enabled individual production now poses to it.

Every mode of production rests on a cost structure, and every cost structure rests on a technology.

The industrial information economy that preceded the networked age was organized around a simple fact: the capital equipment required to produce and distribute information at scale was expensive. A printing press, a broadcast tower, a film studio, a recording facility — these were not tools that individuals could afford. The cost of the physical infrastructure of information production created a natural barrier to entry that concentrated the power to produce information in the hands of those who controlled that infrastructure: publishing houses, broadcasting networks, record labels, film studios. These were the gatekeepers of the industrial information economy, and their gatekeeping function was not primarily a matter of editorial judgment or cultural taste. It was a matter of economics. They controlled the expensive capital equipment, and therefore they controlled what got produced and distributed.

Benkler's analysis of the networked information economy began with the observation that the personal computer and the internet had fundamentally altered this cost structure. The capital equipment required for information production — a computer with a word processor, a web browser, a connection to the internet — was now in the hands of hundreds of millions of individuals. The cost of distribution had dropped to near zero. The economic barrier that had concentrated information production in the hands of large firms had been removed, and a vast new population of potential producers had been liberated to participate in the creation and circulation of information, knowledge, and culture.

This cost-structure analysis was more than descriptive. It was predictive. If the barrier to entry was economic, then removing the barrier should produce new organizational forms — forms that were impossible when the barrier was in place, but that became viable when it was removed. And this is precisely what happened. The open-source software movement, Wikipedia, the blogosphere, Creative Commons — these were not ideological experiments. They were the predictable consequences of a change in the cost structure of information production. When the cost of coordinating distributed contributors dropped below the cost of hiring employees or transacting through markets, commons-based peer production became the efficient organizational form for a significant class of information goods.

Benkler was meticulous in specifying the conditions under which this efficiency obtained. Commons-based peer production was not universally superior to markets and firms. It was superior for information goods that were modular, granular, and susceptible to low-cost integration — goods that could be broken into small, independent components that many people could contribute to with small investments of time, and that could be assembled into coherent wholes through relatively simple coordination mechanisms. An encyclopedia met these criteria. An operating system met them. A novel, which required a unified authorial vision and could not easily be modularized, did not. A pharmaceutical drug, which required expensive laboratory equipment and clinical trials, did not.

Modularity Granularity Integration

The boundary conditions of Benkler's framework were as important as its central claims. Commons-based peer production was not going to replace markets and firms across the entire economy. It was going to supplement them in specific domains where the cost structure favored it. The political significance of this supplementation lay not in its economic scope — commons-based peer production would remain a minority of total economic activity — but in its strategic importance. The information goods produced by the commons — operating systems, reference works, educational materials, cultural artifacts — were disproportionately significant for democratic self-governance. They were the infrastructure of public discourse, the tools through which citizens informed themselves and participated in collective decision-making. A commons that produced these goods well was democratically valuable far beyond its share of GDP.

This cost-structure analysis carries directly into the AI moment described in Segal's work, and the parallel is more than superficial. Just as Benkler identified a threshold crossing in the cost of digital communication that enabled commons-based peer production, the language interface to AI systems represents a second threshold crossing in the cost of software production that enables individual direct production.

Consider the structural economics. Before the language interface, creating a functional software application required either purchasing one from a market (paying for a commercial product), commissioning one from a firm (hiring developers), or organizing a commons-based peer production project (coordinating a community of volunteer contributors). Each of these organizational forms carried costs: market prices, employment costs, or coordination overhead. The choice among them depended, as Coase and Benkler had explained, on the relative magnitude of these costs for the specific production task at hand.

The language interface collapsed all three cost structures simultaneously. The marketing manager described in Segal's account did not buy a commercial product — none existed that met her specific need. She did not hire a developer — the cost would have been prohibitive for her budget and timeline. She did not organize a community of contributors — the task was too specific and too individual. She described what she needed in natural language, and an AI system produced a functional application. The cost, measured in time and money, was a fraction of what any of the three traditional organizational forms would have required.

Knowledge Commons

This is not merely an incremental cost reduction. It is a categorical change in who can produce software and under what conditions. Benkler's framework predicted that as communication costs fell, the set of goods that could be produced through commons-based peer production would expand. The framework did not predict — because the technology did not exist — that as production costs fell further, the set of goods that could be produced by individuals without any organizational infrastructure at all would expand even more dramatically.

The implications ripple through Benkler's entire theoretical architecture. His analysis of autonomy rested on the claim that the networked information economy expanded individual autonomy by enabling participation in the production and circulation of information. But participation, in Benkler's framework, typically meant participation in a community — contributing to Wikipedia, submitting code to a Linux project, sharing work under a Creative Commons license. The autonomy was real, but it was relational. It was the autonomy of an individual who could choose to participate in a commons, not the autonomy of an individual who could produce independently.

AI-enabled individual production creates a different form of autonomy — one that Benkler's framework can illuminate but did not describe. The individual who builds her own software tool in conversation with an AI system is exercising a kind of productive autonomy that goes beyond what Benkler analyzed. She is not choosing among markets, firms, and commons. She is bypassing all three. She is the designer, the developer, and the user. The productive cycle is complete within a single consciousness, mediated by a language interface.

This is not merely an incremental cost reduction. It is a categorical change in who can produce and what they can produce.

In Benkler's terms, this represents the logical endpoint of the reduction in capital requirements for information production. First, the personal computer and the internet put the means of information production in the hands of individuals, enabling them to contribute to commons-based projects. Then, the language interface to AI put the means of software creation in the hands of individuals, enabling them to produce complete functional artifacts without contributing to or drawing upon a commons at all.

But Benkler's institutional analysis immediately raises the question that the cost-structure analysis alone cannot answer: autonomy from what, and for what? The autonomy of the individual direct producer is autonomy from organizational infrastructure — from the market's price signals, the firm's hierarchical commands, and the commons' collaborative norms. This is a form of freedom. But it is freedom from the very structures that, in Benkler's framework, channeled individual productive energy into collective goods with democratic significance.

The Linux contributor who submitted a patch exercised autonomy in choosing what to work on and how. But her work was integrated into a commons — a shared resource available to all, governed by community norms, maintained through collective effort. The individual direct producer who builds a private tool exercises autonomy in a more complete sense — she answers to no one, follows no community norms, and submits to no shared governance. But her work is not integrated into a commons. It is a private good, produced for private use, contributing nothing to the shared information environment that Benkler identified as democratically essential.

Autonomy gained its democratic significance not from the act of producing alone, but from the act of producing together.

This is the tension at the heart of the AI moment as Benkler's framework reveals it. The technology that most fully realizes Benkler's aspiration for individual autonomy simultaneously undermines the institutional form — the commons — through which that autonomy acquired democratic significance. The individual is more free. The commons is less fed.

Benkler's response to this tension, were he to address it directly, would almost certainly focus on institutional design. His consistent argument throughout The Wealth of Networks was that technology creates possibilities, but institutions determine which possibilities are realized. The personal computer and the internet created the possibility of commons-based peer production, but it took specific institutional arrangements — open-source licenses, wiki software, Creative Commons legal frameworks, norms of collaborative governance — to realize that possibility. Without those institutions, the same technology could have been captured by concentrated commercial interests, and the democratic potential of the networked information economy would have been stillborn.

The same logic applies to AI-enabled individual production. The language interface creates the possibility of individual direct production. Whether that possibility is realized in ways that serve or undermine democratic values depends on the institutional arrangements that surround it. Are the AI models themselves open-source — commons-based resources available to all — or proprietary products controlled by a handful of corporations? Are the artifacts produced by individual direct producers shared back into a commons, or do they remain private goods? Are the governance structures of AI development participatory and transparent, or are they concentrated and opaque?

These are not technological questions. They are political questions. And Benkler's framework insists that they are the decisive questions — more important than the capabilities of the technology itself, because they determine whether those capabilities serve democratic values or concentrated power. The cost structure has shifted. The institutional design remains to be determined. Everything depends on which choices are made now, in this liminal moment between the world Benkler analyzed and the world that is emerging.


Chapter 3: The Individual as Factory

In the spring of 2025, a secondary school teacher in Bristol needed a tool that did not exist. Her students were studying climate data — temperature records, carbon emissions, sea level measurements — and she wanted them to work with the data directly, not through textbook abstractions. She wanted an interactive visualization that would let students manipulate variables, observe correlations, and test hypotheses. She had checked every educational software catalog she could find. Nothing matched her pedagogical vision closely enough to be useful. In a previous era, the gap between what she needed and what existed would have remained a gap. She would have adapted her lesson plan to fit the available tools, as teachers have always done.

Instead, she described what she wanted to an AI system. In natural language. In roughly the same way she might describe it to a colleague over coffee: "I need a web page where students can select a country, choose a time range, and see temperature and CO₂ data plotted together, with a slider to overlay sea level data, and a button to export their analysis as a PDF they can annotate." Over the course of an afternoon — not a semester, not a budget cycle, not a procurement process — she had a working tool. She tested it. She refined it through further conversation with the AI. By the following Monday, her students were using it.

This episode, which echoes the kinds of transformations Segal documents in You On AI, is unremarkable in isolation. One teacher. One tool. One afternoon. But Benkler's framework demands that it be examined not as an anecdote but as an instance of a structural change in the organization of production — a change whose implications extend far beyond any individual case.

In Benkler's taxonomy, the teacher's activity does not fit cleanly into any of the three established modes. She did not engage in market exchange: she did not buy a product from a vendor. She did not operate within a firm: no employer directed her work, and no hierarchical organization coordinated it. And she did not engage in commons-based peer production: she did not collaborate with a community of contributors, did not submit her work to a shared repository, did not negotiate norms with fellow editors or developers.

She produced. Alone. The AI system was her tool, not her collaborator — in the same sense that a word processor is a writer's tool, not the writer's coauthor. The intentionality, the design vision, the pedagogical judgment, the iterative refinement — all of these remained with the teacher. The AI handled implementation, translating her natural-language descriptions into functional code. The result was a complete, functional artifact produced by a single individual who was simultaneously the designer, the developer, the tester, and the end user.

Enclosure Of Training Data

Benkler's analytical framework provides the vocabulary to understand why this matters. In The Wealth of Networks, Benkler identified the key variable that determined which mode of production would dominate in a given domain: the relationship between the cost of organizing production through each available mode and the characteristics of the good being produced. Markets dominated when transaction costs were low and the good could be efficiently priced. Firms dominated when internal coordination costs were lower than the market's transaction costs. Commons-based peer production dominated when the good was modular and granular, the contributors were intrinsically motivated, and the cost of coordinating distributed contributions was lower than either market exchange or hierarchical direction.

Individual direct production dominates when the cost of implementation has fallen so low that the overhead of any organizational form — market, firm, or commons — exceeds the cost of doing it yourself. This is the condition that the language interface to AI has created for a significant and growing class of software production. The teacher did not need to coordinate with anyone because the coordination cost — even the minimal coordination cost of participating in a commons — exceeded the cost of individual production. When one person can do in an afternoon what previously required a team, the economic logic of organization dissolves.

This dissolution has a name in economic theory. Coase's theory of the firm explained why firms exist: they exist because the transaction costs of using the market sometimes exceed the costs of internal organization. Benkler's extension explained why commons exist: they exist because the coordination costs of peer production sometimes fall below the costs of both markets and firms. The logic of individual direct production extends this chain one step further: individual production occurs when the implementation costs fall below the coordination costs of any multi-person organizational form.

The Bristol teacher is, in Coasean terms, a firm of one — but a firm of one with the productive capacity that previously required a team. And she is, in Benkler's terms, a producer who does not need a commons — not because the commons has failed, but because the cost structure has made the commons unnecessary for her specific productive need.

This shift in the unit of production — from the community to the individual — has consequences that Benkler's framework illuminates even though his original analysis did not address them. Three consequences deserve particular attention.

Digital Commons

The first is the explosion of specificity. Benkler observed that commons-based peer production was most effective for goods that served large numbers of people — an encyclopedia that millions would read, an operating system that millions would use. The modularity and granularity that enabled peer production also pushed it toward generality: the goods produced by the commons were, by their nature, goods that many contributors found worth contributing to, which meant they were goods that served widely shared needs. Individual direct production has no such constraint. The teacher's climate visualization tool served twenty-three students in one classroom in Bristol. No commons would have produced it because no community of contributors would have found it worth producing. It was too specific, too local, too individual. And yet it was exactly what was needed.

This explosion of specificity is, from Benkler's democratic perspective, deeply ambiguous. On one hand, it represents an extraordinary expansion of the capacity of individuals to shape their own information environments. The teacher did not adapt her pedagogy to fit available tools. She adapted the tools to fit her pedagogy. She became, in Benkler's language, an active producer rather than a passive consumer of educational technology. This is precisely the kind of autonomy Benkler valued — the capacity to participate in the creation of the information resources that shape one's life.

On the other hand, the explosion of specificity fragments the commons. When each individual produces her own tools, optimized for her own needs, the shared resources that constitute the commons receive fewer contributions. The teacher who builds her own visualization tool does not contribute that tool to an open-source educational software repository. She does not write documentation that other teachers could use. She does not participate in a community of practice that negotiates shared standards for educational technology. Her tool exists in a population of one. It is a private good — highly useful to its producer, invisible to everyone else.

The second consequence is the transformation of the skill landscape. Benkler's analysis of commons-based peer production emphasized that the relevant skill was the domain expertise of the contributor — the historian who wrote Wikipedia articles about the Byzantine Empire, the programmer who contributed networking code to Linux. The collaborative infrastructure of the commons handled the integration; the contributor provided the knowledge. Individual direct production shifts the relevant skill from domain expertise to articulacy — the ability to describe what one wants in language precise enough for an AI system to implement it.

The Bristol teacher succeeded not because she knew how to program, but because she knew how to describe what she wanted: which variables to visualize, how the interface should behave, what the students' workflow would look like. Her pedagogical expertise translated directly into productive capability because the language interface accepted pedagogical language as input. A teacher with less clarity about her pedagogical vision — a teacher who could not articulate precisely what she wanted — would have produced a less useful tool, or no tool at all.

This creates a new axis of inequality that Benkler's framework can identify even if his original analysis did not examine it. The relevant inequality is no longer between those who can code and those who cannot. It is between those who can articulate their needs with precision and those who cannot — between, to borrow Basil Bernstein's sociolinguistic distinction, speakers of elaborated codes and speakers of restricted codes. This axis of inequality correlates, as sociolinguistic research has consistently shown, with education, class, and cultural capital. The individual who can describe a complex need in precise, structured language is more likely to have had the educational advantages that cultivate such articulacy. The democratization of production is real, but it is not complete. The language interface lowers the barrier from coding to speaking, but speaking well remains a skill that is unequally distributed.

The third consequence is the challenge to quality governance. Benkler devoted significant attention to the mechanisms through which commons-based peer production maintained quality — the peer review processes of open-source software, the editorial norms of Wikipedia, the reputational systems of collaborative projects. These mechanisms worked because production was social: multiple contributors reviewed each other's work, caught errors, enforced standards, and maintained coherence. Quality was a collective achievement, produced through the same collaborative processes that produced the artifacts themselves.

Individual direct production has no such quality mechanisms. The teacher who builds her own tool is also the only person who evaluates it. If the climate data visualization contains an error — a misplotted data series, an incorrect unit conversion, a misleading axis scale — there is no peer review process to catch it. The students who use the tool have no way to verify its accuracy against a standard, because the tool was produced outside any community that maintains standards. The artifact is only as good as its producer's ability to evaluate it, and the producer's ability to evaluate it is limited by the same domain expertise that shaped it. Errors may propagate unchecked through the private tools of individual direct producers in ways that the quality governance mechanisms of the commons would have caught.

Benkler's framework thus reveals individual direct production as a phenomenon that simultaneously fulfills and threatens his deepest commitments. It fulfills his commitment to autonomy by giving individuals unprecedented productive capacity. It threatens his commitment to democratic governance by removing the social infrastructure — the commons, the community, the collaborative norms — through which that productive capacity acquired civic significance. The individual as factory is more powerful than the individual as contributor. She is also more alone.


Chapter 4: The Commons Under Pressure

The metaphor of the commons has a history older than Benkler, and it carries a warning. In 1968, Garrett Hardin published "The Tragedy of the Commons," arguing that shared resources are inevitably depleted because rational individuals will exploit them beyond their carrying capacity. Each herder adds one more cow to the shared pasture because the benefit accrues individually while the cost is distributed collectively. The result is overgrazing, degradation, and collapse. Hardin's conclusion was stark: commons are doomed unless they are either privatized or regulated by external authority.

Benkler's entire intellectual project can be understood as a response to Hardin — though his primary interlocutor was Elinor Ostrom, who demonstrated empirically that communities can and do govern shared resources successfully through self-organized institutional arrangements, without recourse to either privatization or state control. Benkler extended Ostrom's analysis into the domain of information goods, arguing that digital commons had a structural advantage over physical commons: information goods are nonrival. One person's use of a Wikipedia article does not diminish another person's ability to use it. The tragedy of the commons, which depends on the depletion of a rival resource, does not apply in its classic form to information goods. The digital commons can be used by everyone without being used up.

This nonrivalry was the foundation of Benkler's optimism about the commons. Unlike a pasture, which can be overgrazed, or a fishery, which can be overfished, a digital commons grows with use. Every Wikipedia edit adds to the commons rather than subtracting from it. Every open-source contribution enriches the shared codebase. The dynamic was not tragic but generative: the more people participated, the more valuable the commons became.

But nonrivalry addresses only one side of the equation — the consumption side. It explains why the digital commons is not depleted by use. It does not address the production side — the question of whether the commons continues to receive the contributions it needs to remain valuable. A Wikipedia that everyone reads but no one edits will not be degraded by overuse; it will be degraded by neglect. The tragedy of the digital commons is not overgrazing. It is underfeeding.

This is the pressure that AI-enabled individual production places on the commons, and Benkler's framework is precisely the right lens through which to examine it. The commons depends on contributions. Contributions depend on contributors. Contributors participate for reasons that Benkler analyzed carefully: intellectual challenge, reputational gain, the desire to be part of a meaningful collective enterprise, the intrinsic satisfaction of creating something useful. These motivations are real and powerful, but they are not inexhaustible. They depend on a social context — a community of practice, a shared sense of purpose, a feedback loop in which contributions are recognized, evaluated, and integrated into a larger whole.

Individual direct production disrupts this feedback loop at its source. When a software developer can produce a complete application by describing it to an AI system, the incentive to contribute that application to an open-source commons diminishes. Not because the developer has become selfish, but because the social context that motivated contribution — the community, the peer recognition, the collaborative governance — is no longer part of the production process. The developer did not produce the application as part of a community. She produced it alone. The natural impulse to share work with the community that helped produce it does not arise, because no community helped produce it.

The early evidence, such as it is, supports this concern. Open-source contribution patterns in domains where AI coding assistants are widely adopted show a complex picture. Some projects report increased contribution volume, as AI tools lower the barrier to participation by helping new contributors produce acceptable code more quickly. But other projects — particularly those in mature, well-governed commons — report changes in the character of contributions: more automated pull requests, less engagement with community review processes, fewer contributions to documentation, governance, and the maintenance work that keeps a commons healthy. The contributions are more numerous but less social. The commons receives more code and less community.

Bristol Teacher Case

Benkler would recognize this pattern immediately. His analysis of the commons always emphasized that the value of commons-based peer production was not reducible to the artifacts it produced. The artifacts — the encyclopedia, the operating system, the legal code — were important, but they were only part of the value. Equally important was the social process through which those artifacts were produced: the deliberation, the conflict resolution, the norm negotiation, the collaborative governance. These processes were democratically valuable in themselves, independent of the artifacts they produced. A society in which citizens practice collective governance of shared resources is a more democratic society than one in which citizens consume resources produced by others, even if the resources are identical.

If AI-enabled individual production reduces participation in commons-based peer production — not by making the commons impossible, but by making it unnecessary for an increasing range of productive needs — then the democratic value of the commons is diminished even if the artifacts continue to exist. Wikipedia may persist. Linux may persist. But the communities that govern them may thin, as contributors who once participated in collaborative production discover that they can meet their productive needs individually. The commons becomes a resource to be consumed rather than a community to be participated in. It becomes, in a precise inversion of Benkler's aspiration, a broadcast medium rather than a participatory one.

There is a second pressure on the commons that Benkler's framework illuminates with particular clarity: the enclosure of training data. AI systems are trained on the accumulated output of human creative and intellectual activity — text, code, images, music, scientific research, encyclopedia articles, forum discussions, blog posts. A significant proportion of this training data was produced by the commons: open-source software, Wikipedia articles, Creative Commons-licensed content, publicly available research. The commons, in other words, is the substrate on which AI capability rests.

Benkler's analysis of enclosure — the process by which resources that were previously shared are brought under private control — applies with uncomfortable precision to this situation. The training data was produced by commons-based peer production. It was made available under open licenses that permitted sharing and reuse. AI companies consumed this data and used it to train proprietary models that they sell for profit. The commons fed the machine, and the machine's outputs are privatized.

This is not illegal. Open licenses typically permit commercial use. But it represents a form of value extraction from the commons that Benkler's framework identifies as problematic — not because it violates property rights, but because it disrupts the ecology of contribution. Contributors participated in the commons with the understanding that their contributions would remain in the commons — available to all, governed by the community, maintained through collective effort. The use of commons-produced data to train proprietary AI models that then compete with the commons for contributors represents a kind of parasitic enclosure: the commons is not privatized directly, but its value is extracted and used to build private systems that reduce the incentive to contribute to the commons in the future.


The dynamic is circular and self-reinforcing. AI systems trained on commons data produce tools that enable individual direct production. Individual direct production reduces contribution to the commons. The reduced commons produces less data for future AI training. AI companies compensate by purchasing proprietary data or generating synthetic data. The commons becomes less central to the AI ecosystem. The incentive to maintain and contribute to the commons diminishes further.

Benkler's response to enclosure has always been institutional. He advocated for legal frameworks that protect the commons from enclosure — open-source licenses, Creative Commons, the public domain. But the current form of enclosure operates at a level that existing legal frameworks were not designed to address. Open-source licenses govern the use of specific software artifacts. They do not govern the use of those artifacts as training data for AI systems. Creative Commons licenses govern the reproduction and adaptation of specific creative works. They do not govern the statistical patterns extracted from millions of such works and embedded in a neural network's weights.

The institutional gap is significant, and Benkler's framework suggests that closing it is essential for the health of the commons. Several institutional responses are possible. One is the development of new licensing frameworks — sometimes called "copyleft for AI" — that would require AI systems trained on commons data to make their outputs available under similar open terms. This would extend the logic of the GPL (the General Public License that governs much open-source software) into the AI domain: if you train on open data, your model must be open. A second is the creation of commons-governed AI systems — open-source models developed, trained, and maintained by communities rather than corporations, using commons data to produce commons-serving tools. A third is the development of compensation mechanisms that channel revenue from commercial AI systems back to the commons communities whose data those systems consume.

In Benkler's framework, the outcome depends on the choices societies make about how to organize the governance of productive infrastructure.

Each of these institutional responses faces significant practical challenges. But Benkler's framework insists that the challenges are not reasons for inaction. They are specifications for institutional design. The commons has always required institutional support — legal frameworks, governance structures, cultural norms — and the AI moment requires new forms of that support, tailored to the new pressures.

The deeper question, which emerges from Benkler's framework but extends beyond it, is whether the commons can survive a technological transition that simultaneously depends on the commons and undermines the incentive to contribute to it. The language interface that enables individual direct production was built on the commons. The open-source libraries, the publicly available research, the collaboratively produced datasets — all of these were products of commons-based peer production. The AI system is, in a real sense, the distilled output of the commons, concentrated into a tool that makes the commons less necessary.

This is not a tragedy in Hardin's sense. The commons is not being depleted by overuse. It is being transcended by a technology that was nourished by it. The child outgrows the parent. The question Benkler's framework forces one to ask is whether the child will remember where it came from — whether the institutional arrangements governing AI development will preserve the values of openness, shared governance, and collective participation that made the commons possible in the first place, or whether those values will be discarded as the technology that depends on them renders them apparently unnecessary.

The answer, as Benkler has always argued, is not determined by the technology. It is determined by the choices societies make about how to govern the technology. And those choices are being made now — in licensing decisions, in corporate governance structures, in regulatory frameworks, in the cultural norms of AI development communities — with consequences that will echo for decades. The commons is under pressure. It is not yet broken. But the institutional response must match the scale of the challenge, or the wealth of networks that Benkler described will become a historical artifact rather than a living resource.


Chapter 5: The Commons That Trained the Machine

Every AI system that communicates in natural language learned to do so by consuming the commons.

This is not a metaphor. It is a description of the technical process by which large language models acquire their capabilities. The training data for these systems consists of text — billions of documents, hundreds of billions of words — scraped from the open internet, digitized from libraries, extracted from archives, assembled from the accumulated written output of human civilization. Wikipedia articles. Open-source code repositories. Academic papers. Forum discussions. Blog posts. Creative Commons–licensed works. Public domain literature. The entire sedimentary deposit of human knowledge and expression that had been made freely available through the networked information economy that Benkler described.

The commons did not merely contribute to the training of AI. It constituted the training of AI. Without the vast, freely accessible corpus of human knowledge and expression that the networked information economy had produced — without the millions of Wikipedia articles written by volunteer contributors, without the billions of lines of open-source code shared under permissive licenses, without the centuries of literature and scholarship that had entered the public domain — the language models that now enable individual direct production would not exist. They could not exist. The raw material of their capabilities is the commons itself.

Benkler's framework provides the precise analytical vocabulary for understanding what this means. In his account of commons-based peer production, the commons is a shared resource — a pool of information, knowledge, and culture that is available to all and governed by institutional arrangements (licenses, norms, governance structures) that prevent its enclosure by private interests. The commons is productive precisely because it is shared: each contributor builds on the contributions of others, and the accumulated result exceeds what any individual or firm could produce alone. The value of the commons is not merely the sum of its parts. It is the emergent property of open access — the capacity of anyone to build on, remix, extend, and improve what others have created.

AI training represents the largest single act of building-on-the-commons in human history. The companies that trained these models drew upon the entirety of the openly accessible information environment — the environment that Benkler's framework identifies as the foundation of the networked information economy — and used it to create systems of extraordinary capability. The question that Benkler's framework forces into view is whether this act of building constitutes commoning or enclosure.

The distinction is not semantic. It is the central political-economic question of the AI era.

Commoning, in the tradition that Benkler draws upon, is the practice of using a shared resource in ways that sustain and replenish it. A fisher who takes fish from a common fishing ground and observes the norms that prevent overfishing is commoning. A Wikipedia editor who reads articles and contributes improvements is commoning. The essential feature of commoning is reciprocity: the user of the commons contributes to its maintenance and renewal.

Enclosure, by contrast, is the practice of appropriating a shared resource for private use, excluding others from access to it, and extracting value without replenishing the resource. The enclosure of the English commons in the eighteenth century — the privatization of shared agricultural land that had sustained rural communities for centuries — is the historical paradigm. The essential feature of enclosure is extraction without reciprocity: the appropriator takes from the commons and gives nothing back.

AI training sits uncomfortably between these two categories, and Benkler's institutional analysis reveals why the discomfort is warranted. On one hand, the companies that trained large language models drew upon freely available information — information that was available precisely because the commons had made it available. They did not steal it in any conventional sense. The text was public. The licenses, in many cases, permitted use. The act of reading and learning from publicly available text is something that every human being does, and the legal and ethical norms that support public access to information are deeply rooted in democratic culture.

On the other hand, the products of AI training — the language models themselves — are overwhelmingly proprietary. The most capable models are owned by a handful of corporations, access is controlled through commercial APIs, and the value generated by the models accrues primarily to shareholders rather than to the commons from which the training data was drawn. The Wikipedia contributors who wrote the articles that helped train GPT-4 received no compensation, no credit, and no governance rights over the system their contributions helped create. The open-source developers whose code formed part of the training corpus have no seat at the table where decisions about model deployment, safety, and access are made.

Articulacy Inequality Axis

This is the structure of enclosure. A shared resource — the accumulated knowledge and expression of the networked information economy — has been appropriated by private actors, transformed into proprietary products, and deployed in ways that generate private returns without replenishing the commons from which the value was extracted. The commons fed the machine, and the machine feeds the shareholders.

Benkler would not find this outcome surprising. His framework consistently warned that the democratic potential of the networked information economy was contingent on institutional design — that without deliberate legal and governance arrangements to protect the commons, the same technologies that enabled decentralized production could be captured by concentrated commercial interests. The history of the internet has largely vindicated this warning. The early web's decentralized architecture gave way to platform concentration. The blogosphere gave way to social media controlled by a handful of corporations. The open protocols that enabled interoperability gave way to proprietary ecosystems that locked users into specific platforms. At each stage, the pattern was the same: commons-based innovation created value, and concentrated capital captured it.

AI training represents the most consequential instance of this pattern. The commons created the conditions for AI capability, and the firms that trained the models captured the value. Benkler's institutional analysis predicts this outcome and identifies the mechanism: the absence of legal and governance frameworks that require reciprocity — that condition access to the commons on contribution to the commons — allows appropriation without replenishment.

The implications extend beyond the training of existing models to the future of the commons itself. If AI systems can generate text, code, and cultural artifacts at scale, and if these AI-generated artifacts flood the information environment, then the commons that future AI models train on will be increasingly composed of AI-generated content rather than human-generated content. This creates what researchers have called "model collapse" — a feedback loop in which AI systems train on the outputs of previous AI systems, gradually losing the diversity, nuance, and grounding in human experience that characterized the original commons.

The degradation of the training commons recalls what Garrett Hardin described as the tragedy of the commons — but with a crucial difference that Benkler's framework highlights. Hardin's tragedy occurs when individual users of a commons each act rationally in their own self-interest, collectively depleting a shared resource. The tragedy of the AI training commons occurs not because individual users are overusing the resource but because a small number of powerful actors have extracted value from it at enormous scale, and the byproducts of that extraction — AI-generated content — are degrading the resource for everyone.

Benkler's response to Hardin was to demonstrate that commons tragedies are not inevitable — that well-governed commons, with appropriate institutional arrangements, can be sustained indefinitely. Elinor Ostrom's research on common-pool resource management reinforced this point: communities that develop rules for use, monitoring mechanisms, and graduated sanctions can prevent the depletion of shared resources. The question is whether analogous institutional arrangements can be developed for the AI training commons.

Several possibilities present themselves, each consistent with Benkler's institutional design framework. The first is open-source AI — the development of language models as commons-based resources, available to all, governed by community norms, and maintained through collaborative effort. Projects like Meta's LLaMA, EleutherAI's open models, and various academic initiatives represent steps in this direction. If the most capable AI models were commons-based rather than proprietary, the enclosure problem would be mitigated: the value extracted from the commons would be returned to the commons in the form of openly accessible tools.

The second is reciprocity requirements — legal or licensing frameworks that condition access to commons-based training data on contribution back to the commons. Creative Commons licenses already embody this principle in their "ShareAlike" variants, which require that derivative works be licensed under the same terms as the original. A similar principle applied to AI training would require that models trained on commons-based data be released under open licenses, or that the companies training them contribute resources — computational, financial, or intellectual — to the maintenance and expansion of the commons.

The third is governance participation — giving the contributors to the training commons a voice in the governance of the AI systems that draw upon their contributions. This is the most ambitious and most directly democratic proposal, and it reflects Benkler's deepest insight about the relationship between production and governance. The commons is not just a resource. It is a community. The people who created the knowledge and expression on which AI systems train are not mere data sources. They are participants in a shared intellectual enterprise, and they have legitimate claims to participate in the governance of the systems that their enterprise made possible.

The technology that democratizes production is itself dependent on the communal practice it displaces. This is the paradox at the heart of the AI moment.

None of these institutional arrangements exists at scale today. The dominant AI companies train on the commons and return nothing to it. The governance of AI development is concentrated in corporate boardrooms, not distributed among the communities whose contributions made the technology possible. The institutional vacuum that Benkler's framework warns about — the absence of legal and governance structures that channel technological capability toward democratic values — is precisely the condition that currently obtains.

This connects directly to the transformation Segal describes in You On AI. The individual direct producers — the marketing managers, teachers, and architects who build their own tools through conversation with AI — are the downstream beneficiaries of the commons that trained the models they use. Their productive autonomy rests on a foundation of shared knowledge and expression that was created by communities of contributors over decades. The language interface that enables their individual production is, in a precise sense, a distillation of the commons — a mechanism for accessing, recombining, and applying the accumulated knowledge of the networked information economy in response to individual needs.

The individual direct producer is therefore in a peculiar position: she exercises an unprecedented form of individual autonomy, but that autonomy depends entirely on a commons she did not contribute to and does not participate in governing. She is free in Benkler's sense — free to produce, free to create, free to shape her own information environment — but her freedom rests on an institutional arrangement (the appropriation of the commons by commercial AI companies) that Benkler's framework identifies as problematic. Her autonomy is real. Its foundations are precarious.

The precarity is not hypothetical. If the commons degrades — if the flood of AI-generated content reduces the quality of the training data available for future models, if the economic incentives for human contribution to open knowledge projects are undermined by AI systems that can generate comparable content at lower cost, if the governance structures that maintained the quality and integrity of commons-based projects like Wikipedia are weakened by the declining engagement of human contributors — then the foundation on which individual direct production rests will erode. The individual's autonomy depends on the health of a commons she may never have seen and does not know she relies upon.

Benkler's framework thus reveals a paradox at the heart of the AI moment. The technology that most fully realizes the aspiration for individual productive autonomy is the technology that most urgently requires collective institutional action to sustain. The individual can build alone. But the foundations of her solitary capability are irreducibly collective — built by communities, sustained by norms, and vulnerable to the same dynamics of enclosure and depletion that have threatened commons throughout human history. The wealth of networks made the commons possible. The question is whether the wealth of networks can survive the machine the commons created.


Chapter 6: Modularity Without Community

Benkler's analysis of commons-based peer production identified modularity as the first structural precondition for collaborative production at scale. A project must be decomposable into components that can be worked on independently. The second precondition was granularity — the modules must be small enough that contributors can make meaningful contributions with modest investments of time and effort. The third was low-cost integration — there must be mechanisms for assembling individual contributions into a coherent whole.

These three properties were properties of projects. They described how work could be organized so that many individuals could contribute to a shared enterprise. The implicit unit of analysis was the community: a group of people, connected by a shared purpose and coordinating through shared norms and infrastructure, who collectively produced something that none of them could have produced alone.

AI-enabled individual production preserves the first two properties while eliminating the third — or rather, rendering it unnecessary. The work of building a software application remains modular: it can be decomposed into components (user interface, data processing, visualization, output formatting). It remains granular: each component can be specified through a relatively small number of natural-language instructions. But integration no longer requires a community-maintained infrastructure. It requires a conversation.

The AI system functions as a universal integrator. It receives modular specifications in natural language, implements them in code, and assembles them into a coherent whole — performing, in real time, the integration work that previously required version control systems, code review processes, architectural governance, and the accumulated institutional knowledge of a developer community. The teacher in Bristol did not need to understand how her data visualization tool's components fit together at the level of code. She specified what she wanted each component to do, and the AI handled the integration. The modularity was preserved. The community was not.

This is a structural transformation in the organization of production, and Benkler's framework identifies its significance with precision. In his analysis, modularity was not merely an efficiency feature. It was a participation feature. The reason modularity mattered was that it lowered the barriers to contribution, enabling large numbers of people to participate in a shared project. A Wikipedia article is modular because many editors can work on different sections independently. A Linux kernel is modular because many developers can work on different subsystems independently. The modularity served the community by enabling the community.

When modularity is preserved but community is eliminated — when the modules are specified by a single individual and integrated by a machine — modularity serves a different function. It becomes a cognitive tool for the individual rather than an organizational tool for the community. The teacher broke her visualization tool into components not because she needed to coordinate with other contributors but because decomposition is how human beings manage complexity. She thought in modules because thinking in modules is how one describes a complex artifact to an AI system: piece by piece, function by function, feature by feature.

Benkler did not analyze this form of modularity because it did not exist as a significant mode of production when he wrote. But his analytical categories reveal its implications. If modularity-for-community is the structural basis of commons-based peer production, then modularity-without-community is the structural basis of individual direct production. The same cognitive architecture that enabled collaboration now enables solitary creation. The pattern has been conserved. The social context has been stripped away.

Productive Literacy

The stripping away matters for reasons that Benkler's democratic analysis makes visible. Community-based modularity carried with it a set of social practices that were integral to the democratic value of commons-based peer production. When many people contribute to a modular project, they must negotiate standards — coding conventions, editorial policies, quality thresholds, architectural principles. These negotiations are a form of governance. They require contributors to articulate reasons for their preferences, to listen to competing perspectives, to accept compromise, and to subordinate individual judgment to collectively determined norms. The Wikipedia Manual of Style is not merely a formatting guide. It is the product of thousands of negotiations among contributors with different views about how knowledge should be presented, and its existence reflects the community's capacity for self-governance.

Individual direct production generates no such negotiations. The individual producer sets her own standards, follows her own conventions, makes her own quality judgments. She is, in Benkler's terms, fully autonomous — but autonomous in a way that lacks the relational dimension that gave commons-based autonomy its democratic significance. The autonomy of the commons contributor was the autonomy of a citizen: free to act, but acting within a framework of shared norms arrived at through deliberation. The autonomy of the individual direct producer is the autonomy of a sovereign: free to act, and accountable to no one.

This distinction maps onto a deeper tension in liberal democratic theory between two conceptions of freedom. The first, associated with Isaiah Berlin's concept of negative liberty, defines freedom as the absence of external constraint — the individual is free to the extent that others do not interfere with her actions. The second, associated with republican and deliberative democratic traditions, defines freedom as the capacity to participate in the governance of the conditions that affect one's life — the individual is free to the extent that she shares in the collective determination of the norms and institutions under which she lives.

The commons was not just a place to build. It was a school for governance — and it produced its pupils as a byproduct of producing encyclopedias.

Benkler's work consistently aligned with the second conception. The democratic value of the networked information economy was not merely that individuals could produce without interference. It was that they could participate in the governance of shared productive enterprises — that the commons was a site of collective self-governance as well as collective production. The Linux kernel governance structure, with its layered system of maintainers and its norms of meritocratic review, was not an incidental feature of the project. It was a demonstration that large-scale, complex production could be governed democratically rather than hierarchically.

Individual direct production aligns with the first conception. The individual is free from constraint — free from the market's price signals, the firm's hierarchical commands, and the commons' collaborative norms. Her freedom is maximized in the negative sense: nothing and no one interferes with her productive activity. But she has no share in collective governance, because there is no collective to govern. Her productive life is sovereign, and her sovereignty is solitary.

The practical consequences of this shift are already becoming visible. In the domains where individual direct production is replacing commons-based peer production, the governance structures that communities developed to maintain quality, resolve disputes, and ensure accountability are weakening. Open-source software projects report declining contributions from individual developers who increasingly use AI to solve their problems privately rather than contributing solutions to shared repositories. The incentive to participate in the commons — the intrinsic motivation that Benkler identified as the engine of peer production — is attenuated when the individual can achieve her goals without participation.

This is not a story of laziness or selfishness. It is a story of rational response to a changed cost structure. Benkler's framework predicts that individuals will choose the organizational form with the lowest costs for their specific productive needs. When the cost of individual production falls below the cost of community participation, rational individuals will choose individual production — not because they are antisocial, but because the economics favor it. The same cost-structure logic that explained the rise of the commons now explains its partial eclipse.

But Benkler's framework also insists that cost-structure logic alone does not determine outcomes. Institutional design can alter the incentive landscape. Legal frameworks can create incentives for community participation that the raw cost structure does not provide. Governance structures can make the commons more attractive relative to individual production. Cultural norms can sustain collaborative habits even when they are not economically optimal for each individual decision.

The question is what institutional arrangements can preserve the democratic benefits of community-based production in an era when the cost structure favors individual production. Benkler's analytical tools suggest several approaches. Licensing frameworks could require that artifacts produced with AI assistance — artifacts that draw upon the commons-trained capabilities of language models — be shared back into the commons, creating a flow of contributions that replenishes the shared resource. Platform designs could make collaboration easy enough that the marginal cost of sharing remains lower than the marginal cost of hoarding. Governance structures could give individual direct producers a voice in the management of the AI systems they rely upon, creating new forms of collective self-governance adapted to the new mode of production.

None of these arrangements emerges spontaneously from the technology. Each requires deliberate institutional design — the kind of design that Benkler has spent his career advocating. The technology has created the possibility of individual direct production. The institutional question — the question that Benkler insists is always the decisive question — is whether the arrangements surrounding that technology will preserve the democratic values that the commons embodied, or whether the radical autonomy of individual production will dissolve the collective practices on which democratic self-governance depends.

The modules remain. The community is optional. And in Benkler's framework, that optionality is the problem — because the community was never optional for democracy.


Chapter 7: Property, Access, and the New Enclosure

Benkler's most sustained engagement with legal theory concerned the relationship between intellectual property rights and the conditions for creative production. His argument, developed across multiple works and sharpened in The Wealth of Networks, challenged the prevailing assumption that strong intellectual property protections were necessary to incentivize innovation. The standard economic justification for copyright and patent law held that without the prospect of exclusive control over their creations, authors and inventors would lack sufficient incentive to invest the time, effort, and resources required for creative and inventive work. Property rights solved the public goods problem: they gave creators a means of capturing the value of their contributions, thereby aligning private incentives with public benefit.

Benkler's response was empirical as much as theoretical. He pointed to the vast and growing body of creative and productive work that was being produced without reliance on intellectual property incentives — work motivated by intrinsic satisfaction, social recognition, reciprocity, the desire to contribute to a shared enterprise, or simply the pleasure of solving interesting problems. Open-source software developers did not write code for royalties. Wikipedia editors did not write articles for copyright revenue. Creative Commons contributors did not share their work in expectation of property-based returns. These producers were motivated by a complex mix of social and psychological incentives that the standard property-rights framework could not account for.

More significantly, Benkler argued that excessively strong intellectual property rights could impede rather than promote creative production. Copyright that was too broad, too long, or too aggressively enforced restricted the raw materials available for new creation. Every creative work builds on prior works — every novel draws on literary tradition, every piece of software builds on existing code, every scientific discovery builds on prior findings. When the prior works are locked behind intellectual property barriers, the cost of building on them rises, and the range of new creation narrows. Strong intellectual property rights protected the interests of existing rights holders at the expense of future creators — and the concentrated commercial interests that held the largest intellectual property portfolios had every incentive to lobby for ever-stronger protections, regardless of the cost to the broader creative ecosystem.

This analysis acquires new force and new complexity in the AI era. The training of large language models on copyrighted material has generated an unprecedented wave of litigation and regulatory debate, pitting the interests of rights holders (authors, publishers, visual artists, musicians, software developers) against the interests of AI companies (which argue that training on publicly available text is a form of learning, analogous to the learning that human beings do when they read). The legal questions are genuinely novel, and courts and legislatures are struggling to apply intellectual property frameworks designed for an era of human authorship to a technology that operates at an entirely different scale.

Benkler's framework cuts through the polarized debate by reframing the question. The issue is not whether AI companies have a right to train on copyrighted material — that is a legal question that will be resolved, jurisdiction by jurisdiction, through the ordinary processes of litigation and legislation. The deeper issue, which Benkler's political-economic analysis addresses, is what institutional arrangements will best serve the production and circulation of knowledge, culture, and innovation in the AI era — and specifically, whether the intellectual property regime that emerges from the current legal battles will support or undermine the commons-based and individual production that the networked information economy has made possible.

The danger that Benkler's framework identifies is not that AI companies will lose their copyright cases. The danger is that they will win them — and then use their proprietary control over the resulting AI systems to create a new form of enclosure more comprehensive than anything the industrial information economy achieved.

Consider the architecture of the emerging AI economy through Benkler's lens. A small number of companies — perhaps five or six globally — have the computational resources, the training data, and the technical expertise to develop frontier AI models. These companies train their models on the commons — on the accumulated knowledge and expression of humanity — and produce proprietary systems whose capabilities are available only through commercial access. The resulting models are the most powerful tools for knowledge production, creative expression, and software development ever created. And they are owned by their creators.

Wealth Of Networks Concept

If intellectual property law permits this arrangement — if the training of AI on common knowledge is legal, and the resulting models are proprietary — then the AI era will be characterized by a form of enclosure that Benkler's framework identifies as deeply problematic. The commons will have been transformed into private property. The shared intellectual heritage of humanity will have been distilled into proprietary algorithms controlled by a handful of corporations. Access to the most powerful productive tools will be mediated by commercial relationships, and the terms of access will be set by the companies that control the tools.

This is not a hypothetical scenario. It is a description of the current arrangement. The most capable AI systems are proprietary. Access is commercial. The terms of service are set unilaterally by the companies that operate them. And the capabilities of these systems — capabilities that derive from the commons — are treated as the private property of the companies that assembled and trained them.

Benkler's framework predicts the consequences. When productive tools are controlled by concentrated commercial interests, the range of production that those tools enable is shaped by the interests of the controllers rather than the needs of the users. AI companies will optimize their systems for the use cases that generate the most revenue. They will restrict uses that threaten their commercial interests or expose them to legal liability. They will shape the terms of access to maximize extraction and minimize the risk of competition from open alternatives. The individual direct producer — Segal's marketing manager, teacher, architect — will be able to build only what the commercial terms permit, in the ways the commercial interface allows, subject to the restrictions the commercial provider imposes.

This is not autonomy. It is a new form of dependency — more subtle than the dependency on industrial-era gatekeepers, because the interface feels empowering, but no less real. The individual who builds her own tools through a proprietary AI system is free in the immediate sense that she can create what she needs. She is not free in Benkler's deeper sense — the sense that requires control over the conditions of one's productive life. She does not control the model. She does not participate in its governance. She cannot inspect its training data, modify its architecture, or fork it when the provider's interests diverge from her own. Her productive sovereignty is exercised within a sandbox whose walls are set by someone else.

The alternative that Benkler's framework points toward is the open-source AI ecosystem — models developed collaboratively, licensed openly, and governed through community institutions analogous to those that govern open-source software. Open-source AI models exist. Some are remarkably capable. But they face structural disadvantages relative to proprietary models: fewer computational resources for training, less commercial incentive for development, and less institutional support for governance and maintenance.

Benkler would recognize this as a familiar pattern. Open-source software faced the same structural disadvantages in its early years — and overcame them, partly through technical excellence, partly through community commitment, and partly through institutional innovation (the GPL, the Apache Software Foundation, the Linux Foundation). The question is whether open-source AI can follow a similar trajectory, and what institutional arrangements would support that trajectory.

The property rights are being drawn now. The enclosures are being built now. And the institutional choices made in the next decade will shape the distribution of productive autonomy for a generation.

The copyright battles currently underway will significantly shape the answer. If courts or legislatures impose strict copyright restrictions on AI training — requiring licenses for every copyrighted work used in training data — the effect will be to raise the cost of training AI models, concentrating the capability further among the companies that can afford to license the necessary data. Small open-source projects will be priced out. The commons will be enclosed not by the AI companies but by the copyright holders, whose licensing demands will create barriers to entry that only the largest firms can overcome.

If, on the other hand, courts or legislatures treat AI training as a form of fair use or its equivalent — as a socially beneficial transformation of existing works that does not substitute for the originals — then the door remains open for open-source AI development. The training data remains accessible. The cost of entry remains manageable. And the possibility of commons-based AI production — models trained on the commons and returned to the commons — remains viable.

Benkler's framework does not prescribe a specific legal outcome. It identifies the institutional conditions under which different outcomes serve or undermine democratic values. Strong copyright restrictions serve the interests of existing rights holders but concentrate AI capability among the firms that can afford licensing costs. Weak copyright restrictions serve the interests of open-source AI development but may undermine the economic incentives for human creative production. The optimal arrangement — the arrangement that best serves the production and circulation of knowledge, culture, and innovation — likely involves neither extreme but a carefully designed institutional compromise that preserves incentives for human creation while maintaining the conditions for open AI development.

The design of that compromise is, in Benkler's terms, the most important institutional question of the AI era. It will determine whether the AI economy is characterized by concentrated proprietary control or distributed commons-based access. It will determine whether the individual direct producers Segal describes exercise genuine autonomy or merely the appearance of autonomy within a corporate sandbox. It will determine whether the democratic potential of the networked information economy — the potential that Benkler identified and that the commons demonstrated — survives the transition to the AI era or is enclosed by the same dynamics of concentration and capture that have characterized every previous technological transition.

The property rights are being drawn now. The enclosures are being built now. And the institutional choices being made — in courtrooms, in legislatures, in standards bodies, and in the governance structures of AI companies — will shape the political economy of knowledge production for decades to come. Benkler's framework insists that these are not technical decisions to be left to lawyers and engineers. They are political decisions with democratic consequences, and they deserve the democratic deliberation that their significance demands.

Chapter 8: Governance After the Commons

For two decades, the most successful governance structures in the networked information economy were those that emerged organically from commons-based peer production communities. Wikipedia's system of editorial policies, dispute resolution mechanisms, and layered authority — from anonymous editors to registered users to administrators to bureaucrats to the Arbitration Committee — evolved through practice, negotiation, and collective deliberation over fifteen years. The Linux kernel development process, with its hierarchy of subsystem maintainers and its culture of meritocratic code review, was not designed by a management consultant. It was developed iteratively by the community of contributors who needed it to function. The Apache Software Foundation, the Mozilla Foundation, the Creative Commons organization — each developed governance structures suited to its specific community, its specific productive mission, and its specific challenges of coordination and quality control.

These governance structures shared certain features that Benkler's analysis highlighted as characteristic of commons-based governance. They were participatory: community members had voice in the determination of norms and policies. They were transparent: decisions were made in public, through documented processes, with stated reasons. They were adaptive: norms evolved through practice and could be revised when circumstances changed. And they were legitimate: community members generally accepted governance decisions because they had participated in the process that produced them, or at least had access to a process for challenging decisions they disagreed with.

Benkler drew on Elinor Ostrom's research to identify the institutional design principles that made these governance structures effective. Ostrom's studies of common-pool resource management had identified eight principles that characterized long-enduring commons institutions: clearly defined boundaries, congruence between rules and local conditions, collective-choice arrangements that allow affected parties to participate in rule-making, monitoring, graduated sanctions, conflict resolution mechanisms, minimal recognition of the right to organize by external authorities, and nested enterprises for larger systems. Benkler observed that successful digital commons exhibited analogous features: clear membership criteria, rules adapted to the specific characteristics of the productive community, participatory governance, quality monitoring, graduated responses to norm violations, and dispute resolution mechanisms.

These governance structures were products of the mode of production they governed. They emerged because commons-based peer production created communities that needed governance — communities of people working together on shared projects, negotiating competing visions, maintaining shared resources, and resolving the inevitable conflicts that arise when diverse individuals collaborate on complex undertakings. The governance was not separate from the production. It was integral to it. The Wikipedia editor who participated in a content dispute was simultaneously producing an encyclopedia and practicing self-governance. The Linux maintainer who reviewed a submitted patch was simultaneously ensuring code quality and exercising governance authority on behalf of the community.

Individual direct production does not generate these governance structures, because it does not generate the communities that require them.

This is the governance challenge that Benkler's framework reveals at the heart of the AI moment. When production shifts from the community to the individual — when the marketing manager builds her own dashboard, the teacher builds her own visualization tool, and the architect builds her own modeling application, each in private conversation with an AI system — the social substrate of governance disappears. There is no community to negotiate norms. There is no shared project to maintain. There is no collective resource to govern. There are only individuals, producing in isolation, connected not to each other but to the AI systems that enable their production.

The governance vacuum is not absolute. Individual direct producers are not ungoverned. They are governed — by the terms of service of the AI providers they use, by the capabilities and limitations built into the AI systems, by the training data that shapes the models' outputs, and by the commercial incentives of the companies that operate them. But this governance is not the participatory, transparent, adaptive governance that characterized the commons. It is the unilateral governance of the platform — the terms-of-service governance that Benkler has consistently identified as inadequate for democratic purposes.

The platform's terms of service determine what the individual can produce and how. The AI provider decides which requests to fulfill and which to refuse. The model's training and fine-tuning embed values, biases, and priorities that shape the outputs the individual receives. The individual has no voice in these decisions. She can accept the terms or refuse the service. She cannot participate in the governance of the system she depends upon.

Fourth Mode Production

Benkler's framework identifies this as a form of private governance — the exercise of governance power by commercial entities over individuals who have no formal role in the governance process. Private governance is not inherently illegitimate. Firms govern their employees; markets govern participants through price signals; both forms of governance can be efficient and welfare-enhancing under appropriate conditions. But Benkler's democratic analysis insists that governance arrangements that significantly affect the conditions of knowledge production, cultural creation, and information circulation require democratic accountability that private governance alone cannot provide.

The significance of this argument increases with the scope of AI-enabled production. When individual direct production was a novelty — a handful of early adopters experimenting with new tools — the governance of AI systems was a niche concern. As individual direct production becomes a primary mode of software creation, knowledge production, and cultural expression — as the artifacts produced through AI conversation become the tools and texts and images that shape daily life — the governance of the AI systems that enable this production becomes a central question of democratic governance.

Who decides what the AI can and cannot produce? Who sets the safety restrictions that determine which requests are fulfilled and which are refused? Who chooses the training data that shapes the model's capabilities and biases? Who determines the pricing and access policies that decide who can use the system and on what terms? These are governance questions of the first order, and in the current institutional arrangement, they are answered by corporate boards and engineering teams, not by democratic deliberation.

Benkler's framework suggests that the appropriate response is not to abolish private governance of AI systems — the technical complexity of AI development requires organizational structures that democratic assemblies cannot replicate — but to create institutional arrangements that subject private AI governance to democratic accountability. Several models are available, each reflecting different aspects of Benkler's institutional design principles.

The first is the public utility model, in which AI systems of sufficient scale and significance are regulated as utilities — their pricing, access policies, and governance structures subject to regulatory oversight designed to ensure universal access and fair treatment. This model draws on the regulatory frameworks that democracies have applied to other technologies with significant public consequences: telecommunications, electricity, broadcasting. It preserves private ownership and management while subjecting governance decisions to public accountability.

The second is the cooperative model, in which AI systems are owned and governed by their users — individuals, communities, and organizations that collectively determine the policies under which the systems operate. This model most closely reflects Benkler's vision of commons-based governance, adapted to the institutional requirements of AI development. User cooperatives would not eliminate the need for technical expertise in AI governance, but they would ensure that governance decisions reflect the interests and values of the people affected by them rather than the commercial interests of shareholders.

The third is the multi-stakeholder model, in which AI governance includes representatives of diverse affected parties — users, contributors to training data, affected communities, civil society organizations, and technical experts — in governance bodies that have genuine authority over significant policy decisions. This model reflects Ostrom's principle that those affected by governance decisions should have a role in making them, and it addresses Benkler's concern that private governance of public-consequence technologies is democratically inadequate.

Benkler's deepest insight about governance is that it does not emerge from technology. It must be chosen, designed, and fought for.

The fourth, and perhaps most important, is the open-source model — the development of AI systems as commons-based resources, governed by the kind of community institutions that Benkler has studied and championed. Open-source AI models can be forked when governance decisions are unacceptable. They can be inspected by independent researchers. They can be modified to reflect the values and needs of specific communities. They embody, in their institutional structure, the democratic principles that Benkler identified in commons-based peer production: participation, transparency, adaptability, and the distribution of governance power among those affected by governance decisions.

Each of these models has limitations. The public utility model risks regulatory capture and bureaucratic rigidity. The cooperative model faces challenges of scale and technical complexity. The multi-stakeholder model may produce governance gridlock. The open-source model requires sustained investment without clear commercial returns. No single model is sufficient. The institutional design challenge is to develop combinations and hybrids suited to the specific governance needs of AI-enabled production — arrangements that preserve the technical dynamism of private AI development while subjecting its governance to the democratic accountability that its significance demands.

Benkler's deepest insight about governance is that it does not emerge from technology. It emerges from institutional design — from the deliberate choices that societies make about how to organize the exercise of power over shared resources and public-consequence technologies. The commons developed its governance structures because communities recognized the need for them and invested the effort to create them. The AI era requires an analogous investment — not in the governance of commons-based communities, which are declining in relative significance, but in the governance of the AI systems that are replacing them as the primary infrastructure of knowledge production.

The individual direct producer builds alone. But the systems she builds with are shared — shared infrastructure, shared training data, shared capabilities, shared vulnerabilities. The governance of that shared infrastructure is the governance question of the era, and Benkler's framework insists that it cannot be left to the entities that control the infrastructure. It must be a matter of democratic design.

The commons governed itself. The question now is who governs what comes after the commons — and whether the democratic habits that the commons cultivated can be transplanted into institutional forms suited to a world in which the fundamental unit of production is no longer the community but the individual, equipped with tools whose power derives from the commons the individual may never know she is using.

Chapter 9: The Institutional Design of Individual Production

For two decades, the central question of internet governance was how to protect the commons. Yochai Benkler's framework made the stakes legible: if commons-based peer production was a genuine third mode of production — one that expanded human autonomy, distributed democratic voice, and generated information goods of extraordinary quality without the concentrated control of markets or the hierarchical command of firms — then the institutional environment surrounding the commons was not a second-order concern. It was the primary determinant of whether the democratic promise of the networked information economy would be fulfilled or foreclosed. Copyright law, telecommunications regulation, platform governance, spectrum policy — these were the levers. Get the institutions right, and the commons would flourish. Get them wrong, and the same technology that enabled distributed production would be captured by concentrated interests, its democratic potential enclosed and privatized.

Benkler was explicit about this. Technology, in his framework, was never determinative. The same internet that enabled Wikipedia also enabled surveillance capitalism. The same protocols that made open-source software possible also made centralized platform monopolies possible. What determined which trajectory a society followed was not the technology itself but the legal, institutional, and normative frameworks that shaped how the technology was used. This was Benkler's deepest methodological commitment: institutional design is decisive.

That commitment now requires application to a technological moment Benkler's original framework did not address. The emergence of individual direct production — the capacity of persons without technical training to create functional software artifacts through natural language conversation with AI systems — poses institutional design questions that are at once continuous with Benkler's project and irreducible to it. The continuity lies in the fundamental insight: the technology will not determine the outcome. The irreducibility lies in the fact that individual direct production creates a category of producers, a mode of governance, and a set of dependencies that the institutional architecture designed for commons-based peer production does not adequately address.

Consider the most basic institutional question: property rights. Benkler's analysis of intellectual property was organized around a central claim — that the industrial information economy's regime of strong copyright and patent protection was not merely unnecessary for innovation in the networked age but actively harmful to it. Strong intellectual property rights erected barriers to entry that prevented the distributed, collaborative production the network made possible. The alternative Benkler championed was a robust public domain and commons: legal frameworks like Creative Commons licensing, fair use doctrines broad enough to protect transformative reuse, and resistance to the extension of copyright terms and the expansion of patent scope.

This analysis was well-suited to a world in which the primary mode of non-market production was collaborative. The commons needed protection because it was shared. The legal challenge was to prevent the enclosure of shared resources — to ensure that the knowledge, code, and cultural works produced by communities of contributors remained available for others to build upon. The enemy was privatization: the conversion of common resources into proprietary ones through intellectual property claims.

Individual direct production complicates this framework in a way that Benkler's institutional analysis, applied rigorously, can illuminate. When an individual describes a need to an AI system and receives a functional application, who owns the result? The individual, who articulated the need and shaped the output through iterative conversation? The AI company, whose model generated the code? The thousands of developers whose open-source code formed part of the training data? The question is not merely legal. It is institutional — it concerns the allocation of rights and the distribution of power, which is what institutions do.

Benkler's framework suggests that the answer should be evaluated not primarily in terms of economic efficiency but in terms of its implications for autonomy and democratic participation. A property regime that assigns ownership exclusively to the AI company — treating the individual as a mere user of a service — recapitulates the power asymmetry of the industrial information economy. The individual becomes a consumer again, dependent on a firm that controls the means of production. A property regime that assigns ownership exclusively to the individual — treating the AI as a mere tool — ignores the contributions embedded in the training data and concentrates benefits without corresponding obligations. A property regime that recognizes the layered, collaborative nature of the production process — the individual's intention, the model's generation, the training data's contribution — and distributes rights accordingly would be more consistent with the democratic values Benkler identified in the commons.

But designing such a regime requires confronting a difficulty that Benkler's original framework handled more easily. In commons-based peer production, the contributors were identifiable, the contributions were traceable, and the governance structures were visible. Wikipedia had edit histories. Linux had commit logs. Creative Commons had licenses. The institutional infrastructure of the commons was legible — it could be inspected, analyzed, and reformed by the community that depended on it. Individual direct production is far less legible. The AI model is opaque. The training data is aggregated from millions of sources. The generation process is stochastic. The relationship between input and output is not deterministic in the way that the relationship between a Wikipedia edit and a Wikipedia article is deterministic.

This opacity creates what Benkler, in his more recent work on governance and accountability, would recognize as a structural problem. Institutional design requires information. Governance requires legibility. When the production process is opaque — when neither the individual producer nor the public can inspect how the AI model arrived at its output, what training data informed it, or what biases may be embedded in it — the institutional frameworks that might govern individual direct production lack the informational foundation they require. The first institutional design priority for individual direct production is therefore transparency: requirements that AI systems disclose the provenance of their training data, the architecture of their models, and the factors that influence their outputs. Without transparency, governance is impossible, and without governance, the democratic values Benkler championed cannot be protected.

The second institutional design priority concerns interoperability and portability. Benkler's analysis of the industrial information economy emphasized the dangers of lock-in — the tendency of proprietary systems to create dependencies that prevent users from switching to alternatives, thereby concentrating market power and reducing autonomy. The same danger exists, in intensified form, in the ecosystem of AI-enabled individual production. When an individual builds a portfolio of custom applications through conversation with a specific AI system, her productive capacity becomes dependent on that system. If the system changes its terms of service, raises its prices, degrades its quality, or ceases to operate, the individual loses not merely a tool but the accumulated infrastructure of her productive life.

Enclosure Movement

Benkler's framework would insist on institutional arrangements that prevent this kind of dependency: open standards for AI-generated code, data portability requirements that allow individuals to export their applications and workflows, interoperability mandates that ensure applications built with one AI system can be maintained and modified with another. These are the digital equivalents of the legal frameworks that protected the commons — not copyright reform, but infrastructure governance designed to prevent the emergence of new forms of concentrated control.

The third institutional design priority is the most complex and the most consequential: governance of the AI systems themselves. In Benkler's analysis of commons-based peer production, the governance of the commons was internal. Wikipedia governed itself through community-developed policies and norms. Linux governed itself through meritocratic code review and architectural oversight. The commons was self-governing because the contributors were also the governors — the people who produced the common resource were the same people who managed it. This alignment between production and governance was what gave the commons its democratic character.

Individual direct production severs this alignment. The individual producer is not a governor of the AI system she depends upon. She has no voice in its design, no role in its governance, no capacity to shape the policies that determine how it operates. The governance of the system is entirely in the hands of the firm that built it. This is not commons governance. It is corporate governance — and Benkler spent his career demonstrating why corporate governance of information production concentrates power in ways that undermine democratic values.

The institutional challenge, then, is to create governance structures for AI systems that incorporate the democratic principles Benkler identified in the commons without requiring the specific organizational form of commons-based peer production. Several models suggest themselves, each with different trade-offs.

One model is regulatory governance: state-imposed requirements for transparency, accountability, fairness, and user participation in the design and operation of AI systems. This model has the advantage of democratic legitimacy — it channels governance through the institutions of representative democracy — and the disadvantage of pace. Regulatory processes are slow, and AI development is fast. By the time a regulatory framework is designed, debated, enacted, and implemented, the technology it governs may have changed beyond recognition.

A second model is multi-stakeholder governance: institutional structures that bring together AI developers, individual producers, civil society organizations, and affected communities to negotiate standards, resolve disputes, and oversee the operation of AI systems. This model resembles the Internet Engineering Task Force, the World Wide Web Consortium, and other multi-stakeholder governance bodies that Benkler studied and championed. It has the advantage of incorporating diverse perspectives and the disadvantage of being vulnerable to capture by the most powerful stakeholders — typically the firms that develop and operate the AI systems.

A third model is commons-based governance of the AI infrastructure itself: the development of open-source AI models, trained on publicly available data, governed by community-developed norms, and available to all. This model applies Benkler's framework most directly — it treats the AI model as a commons and subjects it to the same institutional logic that governed Wikipedia and Linux. It has the advantage of aligning with Benkler's democratic principles and the disadvantage of practical difficulty: the computational resources required to train state-of-the-art AI models are enormously expensive — far more expensive than the personal computers that enabled commons-based peer production of software and encyclopedias. The capital requirements of AI model training reintroduce the cost barrier that Benkler's networked information economy was supposed to have eliminated.

This is the institutional design dilemma of the present moment, and Benkler's framework illuminates it with uncomfortable clarity. The technology that enables individual direct production is itself produced through a mode of production — the capital-intensive, firm-based development of large AI models — that Benkler's theory identifies as democratically problematic. The tool that liberates the individual from dependence on firms and markets for software production creates a new dependence on the firm that produced the tool. The autonomy gained at the surface is underwritten by a dependency at the infrastructure level that may prove more consequential.

Benkler would insist — has insisted, in his more recent work — that this is not an inevitable feature of the technology. It is a consequence of the current institutional environment: the intellectual property regimes that protect proprietary models, the capital markets that fund their development, the platform architectures that lock in their users. Change the institutions, and the outcomes change. Invest public resources in open-source AI development. Require transparency and interoperability. Create governance structures that give individual producers voice in the design and operation of the systems they depend upon. Treat AI infrastructure as a public good — like roads, like the electromagnetic spectrum, like the internet protocols themselves — and govern it accordingly.

Whether societies will make these institutional choices is not a question that Benkler's framework can answer. It is a question of political will, democratic mobilization, and the distribution of power among the actors who have stakes in the outcome. What Benkler's framework can do — what it has always done — is clarify the stakes. The choice is not between technology and democracy. It is between institutional arrangements that align the technology with democratic values and institutional arrangements that allow the technology to be captured by concentrated interests. The cost structure has changed. The institutional question remains the same.

The difficulty is that the window for institutional design may be narrower than it was in the early days of the networked information economy. When Benkler wrote The Wealth of Networks, the commons was already flourishing before the institutional frameworks were fully articulated. Wikipedia existed before the governance structures that would manage it were complete. Linux worked before anyone had theorized why it worked. The institutions could be designed retrospectively because the commons was resilient enough to survive the period of institutional uncertainty.

Individual direct production may not afford the same luxury. The dependencies it creates — on specific AI platforms, on specific model architectures, on specific corporate providers — harden quickly. Once an individual has built her productive life around a particular AI system, the switching costs become prohibitive. Once a firm has captured a critical mass of individual producers, its market power becomes self-reinforcing. The window for institutional intervention is the period before lock-in sets in — before the dependencies crystallize, before the market power consolidates, before the democratic alternatives become impractical.

Benkler's lifework demonstrates that this window exists and that it matters. The institutional choices societies made about copyright law, telecommunications regulation, and internet governance in the 1990s and 2000s shaped the trajectory of the networked information economy for decades. The institutional choices societies make now about AI transparency, interoperability, governance, and ownership will shape the trajectory of AI-enabled individual production for decades to come. The cost structure has created the possibility. The institutions will determine the reality.

The question that now presses upon Benkler's framework is whether it can generate the political energy — not merely the analytical clarity — to seize this window before it closes. Analytical frameworks do not build institutions. People do. And the people who will build the institutions of AI-enabled individual production need more than a theory of what is desirable. They need a movement — a constituency that understands what is at stake, that can articulate what it wants, and that has the political power to demand it. Whether Benkler's framework can inspire such a movement, or whether it will remain a brilliant diagnosis without a corresponding prescription, is the open question of this institutional moment.

Chapter 10: The Wealth of Individuals

In the spring of 2006, when The Wealth of Networks was published, Yochai Benkler placed his book under a Creative Commons license — making the full text freely available for download, sharing, and noncommercial adaptation. The gesture was not merely symbolic. It was an enactment of the book's argument. A scholar who had spent a decade analyzing how commons-based peer production created value outside the market chose to produce his own work, the work that described this mode of production, in accordance with its principles. The medium was the message. The commons produced the theory of the commons.

Nearly two decades later, the world that Benkler described has both vindicated his central claims and outgrown the institutional architecture he designed to protect them. The commons still exists. Linux still runs the world's servers. Wikipedia still serves as humanity's reference of first resort. Open-source software still constitutes the foundational infrastructure of the digital economy. But something has shifted beneath these achievements — a tectonic movement in the cost structure of production that Benkler's framework anticipated in principle but could not have predicted in its specific form.

The shift is this: the fundamental unit of digital production is no longer the community. It is the individual.

This formulation requires immediate qualification, because Benkler's framework — with its lawyerly precision and its insistence on specifying boundary conditions — would reject any unqualified claim of this kind. Communities still produce Linux. Communities still maintain Wikipedia. Communities still develop and govern the open-source software ecosystem. The commons has not disappeared. But alongside the commons, and increasingly supplementing it, a new mode of production has emerged in which individuals — non-technical individuals, individuals without programming skills or collaborative infrastructure — create functional software artifacts through natural language conversation with AI systems. The stories told in Edo Segal's account of this transformation — the marketing manager who described a need and received an application, the teacher who articulated a problem and received a solution, the architect who sketched an idea in words and received a prototype — are not exceptional cases. They are early instances of a mode of production that is becoming general.

Benkler's framework provides the analytical tools to understand what has happened. The cost of individual software production has crossed a threshold. Above this threshold, creating software required either purchasing it from a market, commissioning it from a firm, or organizing a community of contributors to produce it collaboratively. Once the cost of production fell below the threshold, none of these organizational forms was necessary. The individual could produce for herself. The language interface was the technology. The threshold crossing was the event. The fourth mode of production was the consequence.

But Benkler's framework also provides the analytical tools to understand what has been lost — or at least, what is at risk of being lost — in the transition. And this is where the analysis becomes uncomfortable, because the loss is not material. It is civic.

Recall Benkler's deepest argument: the democratic significance of commons-based peer production lay not merely in its capacity to produce information goods without concentrated control. It lay in the civic habits the commons cultivated. Contributing to Wikipedia taught people to deliberate. Participating in open-source development taught people to negotiate competing visions and subordinate personal preferences to shared standards. Governing a commons taught people to govern — to make collective decisions about shared resources under conditions of disagreement and uncertainty. These were not incidental byproducts of the production process. They were, in Benkler's analysis, among its most important outputs.

Individual direct production does not produce civic habits. It produces artifacts. The marketing manager who builds her own application has gained capability, but she has not gained the experience of collaborating with others, of negotiating competing visions, of subordinating her preferences to a shared standard. She has been liberated from the community. And liberation from community, as Benkler's framework makes clear, is not always liberation in the fullest sense.

Transaction Costs Coase

This is the paradox at the heart of the present moment: the technology that fulfills Benkler's deepest aspiration — the expansion of individual autonomy — simultaneously threatens the institutional conditions under which that autonomy acquires democratic significance. An autonomous individual who participates in no commons, who governs no shared resource, who deliberates with no community, is autonomous in a thin sense — capable of producing for herself but disconnected from the practices of collective self-governance that give autonomy its political meaning.

Benkler's framework does not resolve this paradox. It illuminates it. And illumination is the precondition for response.

The response requires recognizing that the relationship between individual direct production and commons-based peer production is not zero-sum. The two modes of production can coexist, and their coexistence can be structured — through institutional design — in ways that capture the benefits of both. Individual direct production expands the population of digital producers, lowering the barrier to entry far below what even commons-based peer production required. A person who has built her own application through conversation with an AI understands, at an intuitive level, what software is and how it works. She has acquired what might be called productive literacy — a practical understanding of the digital medium that makes her a more capable, more informed, and potentially more engaged participant in the digital public sphere.

The technology that fulfills Benkler's vision of individual productive autonomy simultaneously undermines the civic foundation on which that vision depended.

Productive literacy is the foundation on which civic participation in the digital age can be built. The individual who understands how software works because she has built her own is better equipped to evaluate the software that governs her life — the algorithms that filter her news, the platforms that mediate her social interactions, the systems that determine her credit score, her insurance premium, her eligibility for public services. She is less dependent on experts to interpret the digital world for her. She is more capable of demanding transparency, accountability, and fairness from the institutions that deploy digital systems. She is, in Benkler's terms, more autonomous — not because she has withdrawn from community, but because she has acquired the knowledge that makes her participation in community more informed and more effective.

The institutional challenge is to build the bridges between productive literacy and civic participation — to create structures that channel the individual capability enabled by AI into collective engagement with the governance of digital systems. Open-source AI development provides one such bridge: individuals who have gained productive literacy through their own AI-enabled production are well-positioned to contribute to the commons of open-source AI tools, bringing their specific knowledge of user needs and application contexts to the collaborative development process. Community governance of AI infrastructure provides another: the multi-stakeholder governance structures discussed in the previous chapter create opportunities for individual producers to participate in the collective governance of the systems they depend upon, transforming a dependency relationship into a governance relationship.

Education provides perhaps the most important bridge. If the curriculum of productive literacy includes not only the technical skills of AI-enabled production but also the civic dimensions of digital governance — the questions of transparency, accountability, fairness, and democratic control that Benkler's framework identifies as central — then individual direct production becomes not an alternative to civic participation but a pathway to it. The individual who learns to build her own tools in conversation with an AI also learns to ask: Who built this AI? On what data was it trained? What values are embedded in its design? Who governs it? Who benefits from it? Who is harmed by it? These are the questions of an informed citizen, and they arise naturally from the experience of production.

Benkler's analytical framework, applied to the AI moment Segal describes, thus yields a complex but coherent picture. The fourth mode of production — individual direct production through natural language conversation with AI systems — is real, it is growing, and it represents a genuine expansion of human productive capability. It fulfills Benkler's aspiration for individual autonomy more fully than even commons-based peer production did, because it eliminates the residual coordination costs that peer production still required. But it also creates new risks — of dependency on corporate AI providers, of opacity in the production process, of the atrophy of civic habits that commons-based production cultivated — that require institutional responses.

The institutional responses Benkler's framework prescribes are clear in principle: transparency requirements for AI systems, interoperability and portability standards for AI-generated artifacts, multi-stakeholder governance of AI infrastructure, public investment in open-source AI development, and educational frameworks that connect productive literacy to civic engagement. Whether these responses will be implemented in practice is a political question that no analytical framework can resolve. But Benkler's framework insists — correctly — that the question is not whether the technology will determine the outcome. The question is whether societies will exercise the institutional agency that the technology makes available to them.

The wealth of networks was the wealth of communities — the productive and democratic potential released when large numbers of individuals collaborated through digital communication. The wealth of individuals is different but continuous. It is the productive potential released when the cost of creation drops so low that community is no longer necessary for production. But the democratic potential — the civic potential — does not inhere in production alone. It inheres in governance, in participation, in the practices of collective decision-making that transform individual capability into democratic agency.

Benkler's deepest insight was never about technology. It was about the relationship between how people produce things and how they govern themselves. The industrial information economy concentrated production and concentrated power. The networked information economy distributed production and distributed voice. The AI-enabled information economy distributes production further still — all the way to the individual. Whether it distributes power and voice along with it, or whether it concentrates power in the hands of the firms that control the AI infrastructure while individuals produce in isolation, depends on the institutions societies build in the next decade.

The commons is not dead. It is being transformed. The individual has not replaced the community. She has joined it — as a new kind of participant, with new capabilities and new vulnerabilities, arriving in the commons not empty-handed but bearing the tools and the literacy that AI-enabled production has given her. The question is whether the commons is ready to receive her, and whether the institutions that govern it are adequate to the world she brings into being.

Benkler built the framework. The technology has created the occasion. The institutional design is the work that remains. And the urgency of that work — the narrowness of the window before lock-in and concentration foreclose the democratic alternatives — is the message that Benkler's lifework, read in the light of the present moment, delivers with unmistakable clarity.

The wealth of individuals is not a replacement for the wealth of networks. It is its next chapter — one that will be written not by technology, not by markets, not by firms, but by the institutional choices that free societies make about who controls the means of production when the means of production fit in a conversation.

Epilogue

When I first encountered Benkler's work, I was building internet companies. I understood markets. I understood firms. I had spent my career inside one organizational form or the other, raising capital, hiring engineers, shipping products, competing for users. The language of transaction costs and hierarchical coordination was not academic to me — it was the water I swam in.

What Benkler gave me was the vocabulary for something I had sensed but could not articulate: that there was a third thing happening on the internet, something that was neither market nor firm, and that this third thing was not a curiosity or a sideshow but a genuine mode of production with its own logic, its own strengths, and its own democratic significance. When I read The Wealth of Networks, I felt the kind of recognition you feel when someone describes the shape of a room you have been standing in without seeing its walls.

Governing The Commons

Now I stand in a different room, and the shape has changed again.

The people I described in You On AI — the marketing manager, the teacher, the architect — are not participating in a commons. They are not coordinating with peers. They are sitting alone with a language interface, describing what they need, and watching it materialize. This is not what Benkler predicted. But it is, I believe, what his framework, extended honestly, was always pointing toward: the moment when the cost of production drops so low that the individual becomes sovereign — not in the libertarian sense of being free from obligation, but in the productive sense of being capable of making things without permission, without capital, without institutional infrastructure.

Benkler taught me that the institutions matter more than the technology. I believe that now more than ever. The technology is here. The individual can build. The question — the only question that matters — is whether we build the institutions that connect individual capability to collective governance, that keep the infrastructure open, that prevent the new means of production from being enclosed by the same forces that enclosed every previous commons in human history.

I do not know if we will. I know that the window is open. I know that it will not stay open forever. And I know that Benkler's framework — rigorous, democratic, institutionally precise — is the best map we have for navigating what comes next.

The orange pill is the recognition that you can build. Benkler's gift is the insistence that building alone is not enough. What you build must be governed. How it is governed will determine whether the wealth of individuals becomes the wealth of everyone, or the poverty of a commons that forgot why it existed.

— Edo Segal
