This page lists every Orange Pill Wiki entry hyperlinked from Elinor Ostrom — On AI: 16 entries in total. Each is a deeper dive into a person, concept, work, event, or technology that the book treats as a stepping stone for thinking through the AI revolution. Click any card to open its entry; within each entry, words colored orange link to other Orange Pill Wiki entries, while orange-underlined words marked with the Wikipedia icon link to Wikipedia.
The regulatory, institutional, and normative arrangements governing AI development and deployment — reframed through Ostrom's framework as a polycentric governance challenge requiring coordination across multiple scales rather than the mark…
Ostrom's third design principle — the people affected by governance rules should participate in making and modifying them — grounded not in democratic aspiration but in the empirical finding that such arrangements consistently outperform ru…
Ostrom's sixth design principle — disputes about rule interpretation and application must be resolvable quickly, cheaply, and locally — without which disagreements fester, resentments accumulate, and cooperation erodes.
The application of Ostrom's design principles to the shared pool of training data underlying AI systems — a framework developed through the Mozilla Foundation's collaboration with the Ostrom Workshop to identify institutional arrangements t…
Ostrom's fifth design principle — responses to rule violations should be proportional to severity and frequency — which preserves information, maintains relationships, distinguishes error from exploitation, and sustains the voluntary compli…
Vincent and Elinor Ostrom's framework for governance distributed across multiple centers of authority — none exercising comprehensive control, all interacting through institutional linkages — which their empirical research demonstrated prod…
The structural concentration of decision-making authority that distorts governance processes — in the intelligence commons, the domination of a small number of corporations whose control of AI models, training data, and platform access over…
Ostrom's distillation from comparative fieldwork — clear boundaries, congruent rules, collective choice, monitoring, graduated sanctions, conflict resolution, recognition of rights to organize, nested enterprises — that characterizes enduri…
The shared body of governance arrangements, organizational practices, professional norms, and collaborative protocols through which a community manages its relationship to the other resource flows — itself a common-pool resource, produced b…
The shared resource system comprising knowledge, skills, attention, trust, and institutional arrangements on which AI-augmented work depends — a common-pool resource whose five interlocking flows constitute the ecology within which the buil…
Ostrom's fourth design principle — effective governance requires mechanisms for tracking both the resource's condition and community members' behavior — which in the intelligence commons confronts the unprecedented challenge of invisible de…
The distributed pool of professional expertise, craft knowledge, and tacit understanding that sustains high-quality knowledge work — thinned in the AI era as entry-level developmental work is displaced and the pipeline producing deep practi…
The governance regime change in which the accumulated textual, visual, and computational output of millions of individuals was appropriated for AI training under terms their original contribution did not contemplate — the paradigmatic case …
American political economist (1933–2012), first woman to receive the Nobel Memorial Prize in Economic Sciences, whose forty years of fieldwork documenting successful commons governance across six continents established the empirical foundat…
American ecologist (1915–2003) whose 1968 essay "The Tragedy of the Commons" became one of the most influential — and most empirically contested — pieces of policy writing of the twentieth century, inspiring decades of privatization and sta…