Risk and uncertainty are not synonyms but categorically different epistemic conditions requiring different governance approaches. Risk refers to outcomes that can be specified in advance and assigned probabilities — bridge collapses, pharmaceutical side effects, algorithmic discrimination. Risk is the domain of prediction and expert assessment. Uncertainty refers to outcomes that cannot be specified because they arise from interactions between systems whose combined behavior is emergent. The most important consequences of AI — what happens to professional identity when expertise is commoditized, what happens to children's cognitive development when intellectual effort can be bypassed, what happens to democratic culture when persuasive content costs nothing to produce — are uncertain in this precise sense. They depend on interactions between technology and social order that have never existed before, producing outcomes no participant can predict. Governance institutions designed for risk management fail under uncertainty because they rely on prediction, and the consequences that matter most cannot be predicted.
Jasanoff's distinction builds on Frank Knight's 1921 Risk, Uncertainty and Profit, extending it from economics into governance. Knight distinguished measurable uncertainty (risk) from unmeasurable uncertainty (true uncertainty). Jasanoff goes further, showing that uncertainty is not merely wider error bars on a risk estimate but a different kind of phenomenon requiring different institutional responses. Technologies of hubris — quantitative risk assessments, cost-benefit analyses, safety benchmarks — are designed for risk. They fail under uncertainty because they attempt to convert unmeasurable consequences into measurable ones through models that necessarily exclude what they cannot capture.
The AI safety discourse operates almost entirely in the language of risk. Benchmarks measure the probability of toxic outputs. Alignment protocols quantify the distance between model behavior and designer intentions. The EU AI Act classifies systems by risk level. These governance instruments address the portion of AI's consequences that can be specified in advance. But the twelve-year-old's question — 'What am I for?' — exists in the uncertainty domain. It is not a risk that can be mitigated because the question concerns the meaning of a human life in the presence of capable machines, and meaning is not a variable that risk assessment can quantify.
The distinction has immediate institutional implications. Risk can be managed through technical instruments: standards, benchmarks, audits. Uncertainty requires different institutional capacities: continuous monitoring (not point-in-time assessment), adaptive governance (not fixed regulation), democratic deliberation (not expert determination), and humility (not confidence). The governance gap is not primarily a speed problem — institutions moving too slowly to keep pace with technology. It is an epistemic problem: institutions designed for risk trying to govern a phenomenon characterized by uncertainty.
Segal's honest account of his own experience captures uncertainty with unusual clarity. The three a.m. sessions with Claude, the inability to distinguish flow from compulsion, the Deleuze failure where plausible-sounding output concealed philosophical error — these are not risks that could have been predicted from the properties of the technology. They are emergent consequences of the interaction between a specific human being, with specific values and vulnerabilities, and a specific tool, in a specific institutional and cultural context. Multiply this interaction across a hundred million users and you have genuine uncertainty: an outcome space too large and too complex for any model to predict, requiring governance institutions designed not for prediction but for detection, learning, and adaptive response.
Jasanoff developed the risk-uncertainty distinction across multiple works, most explicitly in The Ethics of Invention (2016) and in her 2003 'Technologies of Humility' essay. The distinction synthesizes Knight's economic framework, Ulrich Beck's risk society thesis, and the science studies tradition's attention to how uncertainty is managed, suppressed, or denied in institutional contexts.
Risk is calculable; uncertainty is not. Risk assessment assigns probabilities to specified outcomes. Uncertainty confronts outcomes that have not been imagined because they emerge from interactions no model captures.
AI's important consequences are uncertain. Not the measurable risks (toxic outputs, privacy violations) but the emergent transformations (identity, cognition, meaning, democratic culture) that unfold slowly and resist quantification.
Different governance for different conditions. Risk requires technical management — standards, benchmarks, enforcement. Uncertainty requires democratic navigation — monitoring, learning, participation, humility.
Certainty is a governance failure. Institutions that present confident predictions about AI's consequences have either limited their analysis to the predictable portion, ignoring what matters most, or have mistaken their models for reality.