Counter-institutions for AI name the organizational forms that might resist the metabolization of AI-era critique by building arrangements that operate according to grammars of worth different from those of the projective city. They include worker cooperatives that own AI tools collectively, data trusts that govern training data on behalf of contributors, algorithmic audits conducted by independent civic bodies, publicly owned compute infrastructure, and nascent forms of union organization adapted to AI-mediated work. What distinguishes them from the absorbed AI ethics industry is that they do not ask how existing firms should use AI better; they ask who should own and govern AI in the first place.
The counter-institutional project emerges from the recognition that individual adaptation and organizational reform within existing firms cannot address the structural problems of the AI transition. If the distribution of AI-driven surplus depends on the ownership structure of AI infrastructure, then redistribution requires changing that structure. If the evaluation of AI-mediated work depends on tests administered by the firms that deploy AI, then fair evaluation requires independent evaluators. If the governance of AI depends on the interests of AI companies, then adequate governance requires independent governing bodies.
Several models exist in embryonic form. Platform cooperatives have demonstrated that worker-owned delivery services, transportation networks, and creative labor markets can operate on economic logics different from those of their venture-backed competitors. Data trusts, legal structures that hold and govern data on behalf of communities, have been proposed and piloted across multiple jurisdictions. Municipal AI, in which cities develop and govern AI tools in the public interest, has begun to emerge in places like Barcelona and Amsterdam.
The challenge is not imagining counter-institutions; scholars have imagined many. The challenge is building them at sufficient scale to provide genuine alternatives to the dominant AI ecosystem. This requires capital, political support, and sustained organization over timescales far longer than the typical project cycle of the projective city. It requires, in other words, the institutional patience that the projective city most systematically destroys.
The historical parallel is not encouraging but not hopeless. Nineteenth-century cooperatives, trade unions, and mutual aid societies faced comparable challenges and, through decades of sustained effort, built institutions that genuinely altered the trajectory of industrial capitalism. They did not overturn capitalism, but they redistributed enough of its surplus and enough of its governance to produce the twentieth-century social compact. A comparable achievement in the AI transition would require comparable work, organized around a recovered social critique adequate to the new conditions.
The concept draws on work by Trebor Scholz, Nathan Schneider, and the platform cooperativism movement; on the data trusts literature developed by the Open Data Institute; on municipal AI experiments in European cities; and on Boltanski's implicit framework of institutional alternatives to metabolized critique.
Ownership as the question. Counter-institutions center the question of who owns and governs AI, not only how AI is used.
Multiple embryonic models. Platform cooperatives, data trusts, municipal AI, worker-owned AI firms — each exists in experimental form.
Scale as the challenge. Imagining alternatives is easier than building them at levels that matter.
Institutional patience required. The projective city destroys the durations that institution-building requires.
Nineteenth-century parallel. The nineteenth-century cooperative and union movements provide partial precedent for the scale of work required.