The Moral Economy of Science — Orange Pill Wiki
CONCEPT

The Moral Economy of Science

Lorraine Daston's foundational concept for the system of affect, trust, and obligation that governs knowledge-producing communities — the social infrastructure without which scientific claims cannot be reliably evaluated.

In her 1995 essay 'The Moral Economy of Science,' Daston argued that scientific knowledge is sustained not only by evidentiary standards and logical argument but by a specific moral economy — a system of norms, affects, and obligations that govern how scientists relate to one another, to their objects of study, and to the communities that rely on their work. Trust in scientific claims is never based solely on the content of those claims; it rests on a web of social, institutional, and material signals that together constitute the evidentiary infrastructure of a knowledge-producing community. The author's affiliation, the journal's reputation, the rigor of peer review, the reproducibility of results, the coherence with established knowledge — all contribute to credibility, and none is reducible to the claim's content alone.

In the AI Story


The concept extends historian E.P. Thompson's earlier use of 'moral economy' to describe the normative expectations of eighteenth-century crowds. Daston's innovation was to apply the framework to scientific communities and to demonstrate that what had been treated as the natural operation of rational inquiry was in fact sustained by specific moral commitments — to transparency, to honesty about uncertainty, to the proper attribution of credit, to the obligations owed to those who depend on scientific claims.

The moral economy performs functions that no formal protocol can replicate. Peer review is not merely procedural; it depends on reviewers' willingness to invest uncompensated time in careful evaluation because they understand themselves to owe this service to the community. The replication crisis in psychology and biomedical research was diagnosed not only as a methodological failure but as a failure of the moral economy — the erosion of norms of honesty about preliminary results, of willingness to share data, of the obligation to acknowledge limitations.

The relevance to AI is direct and underappreciated. AI-generated text is produced entirely outside any moral economy. The system has no stake in the accuracy of its outputs. It faces no consequences for error. It is not embedded in a community whose norms would constrain its tendency toward confident assertion regardless of epistemic warrant. The absence of a moral economy is not a technical limitation that better alignment procedures could correct. It is a structural feature of a technology that operates outside the social, institutional, and moral frameworks within which human knowledge production has always been conducted.

The institutional challenge is therefore not merely to build evaluative mechanisms for AI's outputs. It is to construct something analogous to a moral economy around AI's use — a system of norms, accountabilities, and expectations that provides the infrastructure of trust that the technology itself cannot supply. The norms would govern not the AI, which cannot be governed in the relevant sense, but the humans who deploy, interpret, and rely on AI-generated knowledge. The accountability would fall on the practitioners who use outputs without adequate verification, the institutions that deploy systems without adequate oversight, and the policymakers who permit deployment without adequate regulatory frameworks.

Origin

Daston's 1995 essay in Osiris was a landmark intervention that transformed how historians and philosophers of science thought about the social foundations of scientific knowledge. It drew on her earlier work on probability, on Thompson's moral economy of the English crowd, and on Robert Merton's sociology of science, synthesizing these into a new analytical framework that proved productive across multiple domains.

Subsequent work by Daston and others extended the framework to specific cases: the moral economy of early modern natural philosophy, of nineteenth-century field sciences, of twentieth-century Big Science. Each application revealed that what had been treated as the natural operation of scientific inquiry was in fact a historically specific achievement of communities that had worked out, often tacitly, the norms and obligations required to make reliable collective knowledge production possible.

Key Ideas

Knowledge is sustained by norms, not only by evidence. What counts as reliable testimony depends on the social, institutional, and moral infrastructure of the community that produces it.

Trust is relational, not propositional. Scientific claims are evaluated through webs of accountability — author, institution, journal, peers — that the claims alone cannot reproduce.

The replication crisis as moral crisis. Methodological failures in contemporary science reflect erosion of the underlying moral economy — of norms governing honesty, transparency, and attribution.

AI operates outside the moral economy. Generated text carries no author's reputation, no peer vetting, no institutional accountability — and no infrastructure of trust that the text alone could replace.

The response must be institutional. Building a moral economy around AI use means constructing norms for the humans who deploy it, not attempting to moralize the technology itself.

Debates & Critiques

The moral economy framework has been criticized for potentially conflating descriptive claims about how scientific communities operate with normative claims about how they should operate. Defenders respond that the conflation is productive: revealing the moral foundations of knowledge production makes both their current erosion and the institutional work required to rebuild them visible. A further debate concerns whether 'moral economy' can be meaningfully extended to AI contexts, or whether the term's specific historical resonance requires different vocabulary for the different problem AI poses. The position this volume takes is that the extension is legitimate precisely because the problem is analogous in structure, even as the specific solutions must be different.


Further reading

  1. Lorraine Daston, 'The Moral Economy of Science,' Osiris 10 (1995): 2–24
  2. E.P. Thompson, 'The Moral Economy of the English Crowd in the Eighteenth Century,' Past & Present 50 (1971)
  3. Robert K. Merton, The Sociology of Science (University of Chicago Press, 1973)
  4. Steven Shapin, A Social History of Truth (University of Chicago Press, 1994)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.