The third design principle holds that those affected by governance rules should participate in making and modifying those rules. Ostrom's framing is striking: the principle is not advanced as a democratic aspiration grafted onto a technocratic framework but as an empirical finding. Governance arrangements in which the governed participate in rule-making outperform arrangements in which they do not, across a wide range of institutional contexts and resource types. The reasons are structural rather than moral.
Participants have informational advantages — they know things about the resource and the community that external rule-makers cannot know. They have implementation advantages — rules they helped design are rules they understand, reducing the gap between rules-in-form and rules-in-use. And they have motivational advantages — rules they participated in making are rules they have a stake in maintaining, increasing voluntary compliance and reducing enforcement costs.
The current exclusion of practitioners from AI governance decisions is therefore not merely unjust but inefficient. The practitioners who work with AI tools daily know things about the tools' effects — on quality, on skill development, on collaborative dynamics, on the texture of professional judgment — that no corporate executive, government regulator, or ethics-board member can access from a distance. When a major AI company updates its model in ways that change the behavior practitioners depend on, the practitioners discover the changes through their work, often before the company's own documentation catches up.
Collective choice in the intelligence commons must meet three conditions. First, practitioners must have formal standing in governance bodies — not just consultation but decision-making authority. Second, governance rules must be revisable through processes accessible to the community. Third, the cost of participation must be low enough that practitioners with genuine grievances are not deterred — accommodating those who have day jobs, who cannot afford conferences, who may face retaliation from employers, and whose expertise is practical rather than theoretical.
The canonical example is the water tribunal of the Valencia huerta, which has adjudicated irrigation disputes every Thursday since at least the tenth century. When the Spanish government attempted to impose a centralized water-management system in the nineteenth century, the tribunal outperformed it on speed, fairness, and compliance — not because it had better-trained administrators but because its rules carried legitimacy derived from collective authorship.
The principle emerged from Ostrom's observation that, across her empirical database, governance arrangements in which the governed participated in rule-making consistently outperformed those in which they did not. The finding challenged technocratic assumptions that expert rule-makers would produce better outcomes than communities of users — assumptions that remain dominant in contemporary AI governance discourse.
Empirical, not aspirational. The principle rests on the observed superiority of participant-authored rules, not on democratic ideology.
Three advantages. Participants bring informational, implementation, and motivational advantages that external rule-makers lack.
Three conditions. Formal standing, revisability, and low cost of participation are required for genuine collective choice.
Practitioners currently excluded. The AI governance landscape concentrates rule-making in corporations and governments that lack ground-level knowledge.