Collectively governed AI is the institutional model in which AI models are owned, governed, and directed by the communities whose knowledge constitutes their training data, rather than by private corporations that extracted the training data as a business input. The model treats the intelligence commons — the accumulated cognitive output of a profession, a community, or humanity at large — as a genuinely shared resource whose monetization should return value to its producers. Such structures are technically feasible and politically difficult, which is precisely why Noble's framework identifies them as a suppressed alternative worth naming.
The institutional template draws on multiple precedents. Cooperative ownership structures, well-established in agriculture and credit unions, provide one model. Data trusts and related data-intermediary proposals, such as those advanced by Jaron Lanier and E. Glen Weyl, provide another. Open-source software foundations like the Apache Software Foundation or the Linux Foundation demonstrate that significant technological development can occur under collective governance structures. The question is not whether such structures can work but whether they can be established for AI at scale.
The specific institutional design questions are tractable. Who counts as a contributor — every author of training-data text, or some defined subset? How is governance structured — one-person-one-vote, weighted by contribution, delegated to elected representatives? How are commercial revenues distributed — equally, proportionally to contribution, reserved for collective investment in the model's development? Each question has multiple workable answers, and the choice among them is a political rather than a technical question.
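To make the tractability claim concrete, the revenue-distribution options above can be written down directly. The sketch below is a hypothetical illustration, not a proposal from the text: the contributor names, the toy ledger, and the `distribute` function are all assumptions introduced for the example, and "contribution" is deliberately left as an abstract number, since deciding what it measures is exactly the political question the passage identifies.

```python
# Hypothetical sketch of the three distribution rules named above:
# equal split, proportional to contribution, and a collective reserve.
# All names and figures are illustrative, not a real governance design.

def distribute(revenue: float, contributions: dict[str, float],
               rule: str = "proportional", reserve_rate: float = 0.0) -> dict[str, float]:
    """Split `revenue` among contributors under one of three rules.

    - "equal":        every contributor receives the same payout
    - "proportional": payouts scale with each contributor's recorded share
    - "reserved":     a fraction is held back for collective investment in
                      the model, the remainder split proportionally
    """
    if rule == "reserved":
        revenue *= (1.0 - reserve_rate)   # held-back share funds the commons itself
        rule = "proportional"
    if rule == "equal":
        share = revenue / len(contributions)
        return {name: share for name in contributions}
    total = sum(contributions.values())
    return {name: revenue * c / total for name, c in contributions.items()}

# Toy ledger: abstract contribution units (what they measure is a political choice)
ledger = {"alice": 600.0, "bob": 300.0, "carol": 100.0}

print(distribute(1000.0, ledger, rule="proportional"))
# → {'alice': 600.0, 'bob': 300.0, 'carol': 100.0}
print(distribute(1000.0, ledger, rule="reserved", reserve_rate=0.2))
# → {'alice': 480.0, 'bob': 240.0, 'carol': 80.0}
```

The point of the sketch is the passage's own: each rule is a few lines of arithmetic, so the choice among them is distributional politics, not engineering.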
The structural obstacles are institutional rather than technical. The AI companies that currently own the training corpora and the trained models have no incentive to transfer ownership to the communities whose knowledge was extracted. The communities themselves typically lack the institutional infrastructure to receive and exercise collective ownership. The regulatory frameworks that might require ownership transfer are weak, captured, or absent. The legal doctrines governing what counts as fair use of training data favor extraction over compensation.
But each of these obstacles is a political arrangement, and political arrangements can be changed. The precedent closest to hand is the labor movement's twentieth-century achievement in establishing collective bargaining, workplace safety regulation, and minimum wage law against the resistance of employers who had previously controlled these matters unilaterally. The institutional innovations required for collectively governed AI are not more difficult than those the labor movement achieved, and they would respond to a structural dispossession at least as significant as the one that motivated the earlier institutional work.
The concept has been developed by multiple contemporary thinkers, including Lanier, Weyl, Elinor Ostrom's intellectual heirs, and organizations like the RadicalxChange Foundation. Its relationship to Noble's framework is analytical: Noble's concept of suppressed alternatives provides the vocabulary for naming why collectively governed AI remains marginal despite being technically feasible, politically defensible, and morally compelling.
Ownership follows production. The knowledge that trains the model was produced by identifiable communities; those communities have a reasonable claim on the model's ownership and governance.
Institutional precedents exist. Cooperatives, data trusts, and open-source foundations provide workable templates for collective governance of complex technical artifacts.
Obstacles are political, not technical. The barriers to collective governance are the institutional arrangements that currently concentrate ownership, not engineering limitations of the underlying systems.
Requires organized power. Like the labor movement's twentieth-century achievements, collectively governed AI would require sustained organized pressure against the resistance of those who currently capture the value.
Critics argue that collective governance would slow AI development and reduce innovation. The Noble framework responds that the question is not speed of development but direction of development: the faster development that private ownership currently produces directs the technology toward expertise-replacement for shareholder returns, while slower collective development might direct it toward expert-amplification for broader benefit. The speed preference smuggles in a distributional preference.