GETTING-Plurality (Generative Emergent Technologies Towards Intelligent, Networked, Grounded Plurality) is the institutional home of Allen's applied work on AI governance. The network's foundational premise is that a new era of technological innovation has brought society to a constitutional moment, and that scholarly work on technology must be matched to the scale of the governance challenge. The network coordinates research, policy development, and public engagement across economics, political theory, technology studies, and computer science, producing outputs that include Allen's 2025 Roadmap for Governing AI and empirical studies of AI systems' moral reasoning.
There is a parallel reading of GETTING-Plurality that begins from the political economy of elite academic networks. What appears as cross-disciplinary collaboration may function as academic credentialism serving to legitimize specific governance frameworks while foreclosing more radical alternatives. The network's location at Harvard—with its proximity to federal policymakers, technology executives, and philanthropic capital—positions it not as an independent arbiter of AI governance but as a node where existing power structures translate their interests into the language of democratic theory.
The network's emphasis on 'empirical grounding' and 'policy translation' reveals a theory of change premised on expert guidance rather than democratic participation. The constitutional moment requires popular mobilization and institutional experimentation, not roadmaps produced by scholars for policymakers to implement. By framing AI governance as a problem requiring specialized knowledge across economics, political theory, and computer science, GETTING-Plurality may inadvertently reinforce technocratic governance—the very mode it claims to challenge. The international network of researchers Allen coordinates operates primarily within elite academic and policy institutions, creating a governance discourse largely insulated from the communities most affected by AI deployment: gig workers facing algorithmic management, content moderators traumatized by their labor, communities targeted by predictive policing systems. The network's outputs circulate among those already empowered to shape AI development, not those currently subjected to it.
GETTING-Plurality operates within the Harvard Edmond & Lily Safra Center for Ethics and in collaboration with the Harvard Kennedy School. The network's structure reflects Allen's commitment to cross-disciplinary collaboration on technology governance—a commitment rooted in her recognition that neither political theory nor computer science alone possesses the knowledge required to adequately address the AI moment. The network's researchers include economists, philosophers, technologists, and policy scholars.
The network's work spans several registers. At the theoretical level, it develops the conceptual framework of the plurality paradigm—the alternative to centralized AI development that treats intelligence as social and relational. At the empirical level, it conducts studies of AI system behavior, including the 'Crocodile Tears' working paper on the moral reasoning of large language models. At the policy level, it produces specific institutional proposals through outputs like the 2025 Roadmap. At the public engagement level, it connects researchers with policymakers, technologists, and civil society organizations working on AI governance.
The network's name reflects its intellectual commitments. 'Plurality' signals the alternative to singularity-oriented AI development. 'Grounded' signals the commitment to empirical investigation of how AI systems actually behave. 'Intelligent, Networked' signals the recognition that AI governance requires both technical understanding and institutional imagination. 'Getting' signals the active, ongoing character of the work—the refusal to treat technology governance as a settled domain with established methods.
GETTING-Plurality is one node in a broader international network of researchers and institutions working on democratic governance of AI, including Weyl's RadicalxChange foundation, Tang's work in Taiwan, the Centre for the Governance of AI at Oxford, and various national research initiatives in Europe and elsewhere. Allen's leadership of the network has established Harvard as a central institution in this international conversation.
GETTING-Plurality was established under Allen's leadership at Harvard in the early 2020s, building on her prior work at the Edmond & Lily Safra Center for Ethics and her collaborative relationships with Weyl, Crawford, and other researchers on technology and democracy.
Constitutional moment. The network's foundational premise is that AI development has brought society to a moment requiring foundational institutional design.
Cross-disciplinary collaboration. AI governance requires the integration of political theory, economics, technology studies, and computer science.
Plurality paradigm. The network's theoretical framework rejects centralized AI development in favor of distributed, augmentation-focused approaches.
Empirical grounding. The network conducts studies of AI system behavior to inform governance proposals.
Policy translation. Theoretical work is translated into specific institutional proposals like the 2025 Roadmap for Governing AI.
The value of GETTING-Plurality depends entirely on which question you're asking. On the question of whether academic research networks can contribute meaningfully to AI governance, Allen's work scores highly (85%). The network's cross-disciplinary structure, empirical studies, and policy translation represent exactly the kind of institutional capacity required to address governance challenges that span technical, political, and economic domains. No grassroots movement alone can produce the constitutional design work or detailed regulatory proposals required at this scale.
On the question of whether such networks can be democratically accountable or avoid capture by existing power, the contrarian view weighs more heavily (70%). Elite academic networks embedded in institutions like Harvard inevitably reflect the interests and worldviews of those who fund, staff, and collaborate with them. The governance discourse they produce, however sophisticated, operates primarily among policymakers and technologists rather than the publics most affected by AI systems. This isn't a flaw in Allen's execution but a structural constraint of the institutional form.
The question itself benefits from reframing: academic networks like GETTING-Plurality are necessary but insufficient for democratic AI governance. They provide constitutional imagination, empirical investigation, and policy expertise that movements and publics require but cannot generate alone. They become dangerous only when treated as substitutes for democratic participation rather than supports for it. The test of GETTING-Plurality isn't whether it avoids elite positioning—it cannot—but whether its outputs strengthen popular capacity to contest AI deployment or merely provide sophisticated justification for expert-led governance.