The paper argues that 'actually existing AI'—the dominant paradigm of centralized, proprietary, human-replacing systems—misconstrues intelligence as autonomous rather than social and relational, and tends to concentrate power, resources, and decision-making in an engineering elite. The critique is structural rather than moral: the concentration is not the product of bad intentions but the predictable outcome of a development paradigm whose assumptions about intelligence, whose investment requirements, and whose optimization targets all point toward centralization. The authors propose an alternative paradigm based on plurality—technologies that participate in and augment human creativity and cooperation rather than replacing them.
The paper was written at the Carr Center for Human Rights Policy at Harvard and published as part of a broader project on technology and democracy. Its four co-authors represent distinct but complementary traditions. Daron Acemoglu, the economist, brings institutional analysis of how technology choices shape distributional outcomes. Kate Crawford, the AI researcher, brings empirical documentation of how AI systems encode the biases of their training contexts. E. Glen Weyl, the political economist, brings the framework of mechanism design and plural governance. Danielle Allen brings the theory of democratic equality.
The paper's diagnostic move is to distinguish between AI's technical capabilities and the paradigm through which those capabilities are developed and deployed. The technical capabilities are neutral—they can be used to concentrate power or distribute it. The paradigm is not neutral. The dominant approach assumes that intelligence is an autonomous property to be extracted and centralized, that the goal of AI is to replace human cognitive labor, and that the path to better AI runs through ever-larger models controlled by ever-fewer organizations. Each of these assumptions, the authors argue, is both empirically questionable and democratically dangerous.
The positive alternative the paper proposes draws on a tradition of technology design that 'underlies many celebrated digital technologies such as personal computers and the internet.' This tradition treats intelligence as social and relational—as something that emerges from interaction among diverse participants rather than something that can be extracted and concentrated. It treats the goal of technology as augmenting human cooperation rather than replacing it. And it treats the path to better technology as running through broader participation in development rather than tighter control by specialists.
The paper has become a foundational document for what has come to be called the plurality paradigm, institutionalized through Allen's GETTING-Plurality network at Harvard, Weyl's RadicalxChange foundation, and a growing community of researchers working on alternatives to centralized AI development.
'How AI Fails Us' was published in 2021 by Harvard's Carr Center for Human Rights Policy as part of its ongoing work on technology and democratic theory. The paper has been widely cited and has influenced subsequent policy discussions, including Allen's 2025 Roadmap for Governing AI.
Paradigm critique. The problem is not AI's capabilities but the development paradigm that concentrates those capabilities in the hands of an engineering elite.
Intelligence as relational. The dominant paradigm's treatment of intelligence as autonomous is empirically questionable and democratically dangerous.
Centralization is structural. Power concentration is the predictable outcome of the paradigm's assumptions, not the product of bad intentions.
Plurality alternative. An alternative paradigm would treat AI as augmenting human cooperation rather than replacing it.
Historical precedent. The plurality tradition underlies many successful digital technologies, including personal computers and the internet.