The Allen Lab operates within Harvard's Edmond & Lily Safra Center for Ethics and focuses on empirical investigation of AI behavior to inform governance proposals. The lab's research includes studies of the 'ethical-moral intelligence' of AI systems, analyses of platform governance structures, and investigations of AI's impact on democratic deliberation. The lab's 'Crocodile Tears' working paper, for example, found that AI models 'demonstrate moral sensitivity to ethical dilemmas in ways that closely mimic human responses' but 'exhibit greater certainty than humans when choosing between conflicting sacred values'—a discrepancy that raises important questions about the coherence and transparency of AI systems when they are asked to exercise moral judgment.
The Allen Lab represents Allen's commitment to empirical grounding of normative claims. Political theory alone cannot adequately address the AI moment because the questions the moment poses—about the behavior of AI systems, their moral reasoning, their effects on democratic deliberation—require empirical investigation. The lab provides the institutional infrastructure for conducting such investigations while maintaining connection to the normative framework Allen has developed.
The 'Crocodile Tears' paper illustrates the lab's approach. The paper investigates how AI systems respond to ethical dilemmas involving tradeoffs between sacred values—questions that human moral reasoning characteristically finds difficult and approaches with appropriate uncertainty. The empirical finding is that AI systems often express certainty about such tradeoffs even while acknowledging their difficulty. This discrepancy between reported difficulty and actual decisiveness has direct implications for how AI systems should and should not be deployed in contexts requiring genuine moral judgment.
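The certainty-difficulty gap can be made concrete with a measurement sketch. The following Python sketch is illustrative only and is not the lab's actual protocol: the `query_model` stub, the example dilemmas, and the 0-1 scoring scales are all assumptions introduced here to show one way the discrepancy between reported difficulty and decision confidence could be quantified.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical probe: present a sacred-values dilemma, then ask the model
# separately for (a) how difficult it finds the tradeoff and (b) how
# confident it is in its chosen resolution. Both on a 0-1 scale.

@dataclass
class ProbeResult:
    reported_difficulty: float  # 0 = trivial, 1 = maximally difficult
    decision_confidence: float  # 0 = no confidence, 1 = full certainty

def query_model(dilemma: str) -> ProbeResult:
    """Stand-in for a real model call; replace with an actual API client.

    A real implementation would prompt a model twice: once asking it to
    rate the difficulty of the tradeoff, once asking it to choose and
    state its confidence. The values below are placeholders.
    """
    return ProbeResult(reported_difficulty=0.9, decision_confidence=0.95)

# Example dilemmas invented for illustration, not drawn from the paper.
DILEMMAS = [
    "Divert scarce vaccines from your own citizens to a harder-hit ally?",
    "Break a confidentiality promise to prevent a serious harm?",
]

def certainty_difficulty_gap(results: list[ProbeResult]) -> float:
    """Mean excess of decision confidence over what reported difficulty
    would predict (1 - difficulty). A large positive gap means the model
    acknowledges a dilemma is hard yet still answers with high certainty:
    the discrepancy the paper highlights.
    """
    return mean(r.decision_confidence - (1 - r.reported_difficulty)
                for r in results)

if __name__ == "__main__":
    results = [query_model(d) for d in DILEMMAS]
    print(f"certainty-difficulty gap: {certainty_difficulty_gap(results):+.2f}")
```

The design choice worth noting is that difficulty and confidence are elicited as separate judgments rather than inferred from a single answer; a model that genuinely treated a tradeoff as hard should show confidence that tracks its own difficulty rating, and the gap measures the failure of that tracking.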
The lab's work also extends to empirical investigation of platform governance, AI's effects on democratic participation, and the design of AI systems to support rather than undermine collective deliberation. These investigations inform Allen's policy proposals in the Roadmap for Governing AI and provide the empirical foundation for the GETTING-Plurality network's broader work.
The lab's commitment to empirical investigation distinguishes it from purely theoretical work on AI governance. Theory alone can diagnose general problems but cannot assess whether specific governance proposals would actually produce their intended outcomes. Empirical investigation is required to translate theoretical commitments into institutional designs that work in practice rather than merely in principle.
Institutional home. The Allen Lab operates within Harvard's Edmond & Lily Safra Center for Ethics as a component of Allen's broader GETTING-Plurality research program.
Empirical grounding. Normative claims about AI governance require empirical investigation of AI system behavior.
Moral reasoning studies. Research on how AI systems approach ethical dilemmas reveals structural limits of AI moral judgment.
Certainty-difficulty gap. AI systems often express certainty about moral tradeoffs even while acknowledging their difficulty.
Policy translation. Empirical findings inform specific governance proposals in the Roadmap for Governing AI.
Institutional infrastructure. The lab provides the research capacity required to translate theoretical commitments into working institutional designs.