Regulatory science is Jasanoff's term for the distinctive form of knowledge produced when scientific methods are applied within regulatory contexts. It is neither pure science (conducted for the sake of understanding) nor applied science (using established knowledge to solve practical problems) but a third category: science conducted under the constraints of institutional decision-making, where the purpose is not discovery but actionable conclusions about safety, efficacy, or risk. Regulatory science operates under different epistemic standards than academic science: certainty must be produced on institutional timelines, evidence must be legible to non-specialists, and conclusions must be defensible in adversarial contexts. The knowledge it produces is real and valuable but is shaped by the regulatory framework as much as by the phenomena being studied. Applied to AI, regulatory science explains why governance frameworks struggle: the consequences that matter most (identity erosion, cognitive atrophy, meaning displacement) resist the quantitative certainty that regulatory institutions require.
Jasanoff's analysis of regulatory science emerged from her study of how the U.S. regulatory system produced knowledge about chemical risks, pharmaceutical safety, and environmental hazards. In The Fifth Branch, she documented that the science produced by regulatory agencies differs from the science produced by universities, not because regulatory scientists are less rigorous but because they operate under different constraints. They must produce conclusions on political timelines. They must use evidence that is available rather than evidence that would be ideal. They must defend their conclusions in adversarial proceedings where opposing experts challenge their methods, data, and interpretations. These constraints shape the knowledge itself: what questions get asked, what methods get used, what uncertainties get acknowledged or suppressed.
The key insight is that regulatory science is co-produced by scientific methods and institutional contexts. Change the institutional context and you change the science. American pharmaceutical regulation requires randomized controlled trials demonstrating safety and efficacy. European regulation allows greater weight to observational studies and post-market surveillance. The difference is not that one standard is scientifically correct and the other wrong; the difference is that the two regulatory frameworks define what counts as sufficient evidence differently, and the definition shapes what knowledge gets produced.
Applied to AI, regulatory science explains the evidentiary crisis at the heart of governance. The consequences that governance institutions are designed to detect — measurable harms attributable to specific AI applications — are the consequences that existing regulatory science can produce knowledge about. But the most important consequences of AI are not of this kind. They are slow, distributed, emergent, and uncertain. They cannot be captured in the randomized controlled trials, the benchmark tests, or the quantitative risk assessments that regulatory institutions treat as the gold standard of evidence. The Berkeley study came closest by using ethnographic methods over eight months, but eight months is too short to detect consequences that unfold over years, and ethnography is too qualitative to satisfy the evidentiary standards most regulatory institutions impose.
Jasanoff's framework does not argue that regulatory science is bad science. It argues that regulatory science is science under constraint, and the constraints shape the knowledge in ways that governance must acknowledge. When a regulatory framework admits only quantitative evidence, it has already determined that qualitative consequences — however real, however consequential — will not influence decisions. The silent middle that Segal identifies possesses the most accurate knowledge of the AI transition: the experiential knowledge of what it feels like to live inside a transformation no one fully understands. That knowledge is epistemically inadmissible in governance frameworks calibrated for regulatory science, and the inadmissibility is not an oversight but a structural feature of how those frameworks were designed.
The concept emerged from Jasanoff's first book, The Fifth Branch (1990), which examined the role of scientific advisers in American regulatory agencies. The book's central finding was that advisers do not merely inform policy; they make it, through the framing choices they impose and the evidentiary standards they apply. The finding challenged the conventional boundary between science (objective, value-free) and policy (political, value-laden) by showing that the science produced for regulatory purposes is already shaped by the values, priorities, and institutional constraints of the regulatory context.
Regulatory knowledge is hybrid. It is shaped simultaneously by scientific methods and by the legal, political, and institutional contexts that demand actionable conclusions, making it different from academic science in ways that matter for governance.
Constraints determine what can be known. Regulatory timelines, evidentiary standards, and adversarial review processes shape which questions get asked, which methods get used, and which uncertainties get acknowledged or suppressed.
Quantitative bias is structural. Regulatory institutions privilege quantitative evidence not because it is superior but because it is actionable — translatable into rules, standards, and enforceable requirements in ways that qualitative evidence is not.
The most important consequences resist quantification. Identity erosion, cognitive atrophy, the transformation of professional meaning — these are real, consequential, and epistemically inadmissible in frameworks designed for regulatory science.