The cultural theory of risk, developed by political scientist Aaron Wildavsky and anthropologist Mary Douglas in Risk and Culture (1982), holds that the identification of dangers is a social process rather than a technical calculation. Different cultural formations (egalitarian, hierarchist, individualist, and fatalist) systematically perceive different risks as salient and propose different responses to them. The theory explains why equally informed people, looking at identical technical data about nuclear power, genetically modified crops, or artificial intelligence, reach opposite conclusions about safety. It relocates risk analysis from the domain of expert calculation to the domain of cultural politics, with profound implications for how societies govern powerful new technologies.
The theory emerged from Douglas's anthropological fieldwork on pollution beliefs and Wildavsky's policy analysis of American environmental regulation. They observed that societies do not simply respond to objective hazards in proportion to statistical danger. Instead, they select which dangers to emphasize based on the social form they are trying to preserve or attack. A society that fears cancer from industrial chemicals is making a statement about what it values and whom it distrusts, not simply reporting epidemiological findings. The same society may be relatively indifferent to statistically larger risks — automobile accidents, household falls — because those risks do not carry the same cultural meaning.
Applied to the AI transition, the framework yields immediate diagnostic power. The egalitarian reading sees AI as a concentration of power in the hands of those who control the algorithms; its risks are distributional and its remedies are redistributive. The hierarchist reading sees AI as a threat to credentialing systems and institutional quality control; its remedies are professional standards and licensing. The individualist reading sees AI as liberation from gatekeepers; its remedies are market competition and minimal interference. The fatalist reading sees the outcome as already determined and disengages. Each is responding to the same technology. No two see the same dangers.
The theory's deepest implication is that the AI alignment conversation cannot be resolved by producing better evidence. More data does not draw opposing cultures toward consensus; it supplies each culture with more material to interpret selectively. The path forward is not to win the cultural argument but to build institutions capable of incorporating all four perspectives without privileging any single one. This is the governance challenge the AI moment poses, and it is harder than the technical alignment problem because it operates at the level of shared meaning rather than shared computation.
Wildavsky's framework is often misread as relativist, as if all risk perceptions were equally valid and no adjudication were possible. He rejected this reading. Cultures make predictions that can be tested. The hierarchist who claims that regulation will prevent harm, the egalitarian who claims that equal distribution will produce flourishing, the individualist who claims that markets will self-correct: each makes empirical claims that the historical record can evaluate. The framework is descriptive about where risk perceptions come from and prescriptive about the necessity of institutional pluralism that keeps all four correctives in play.
Wildavsky developed the framework through the 1970s in response to the American environmental movement, which he viewed as predominantly egalitarian and therefore systematically attuned to certain risks (corporate pollution, nuclear contamination) while relatively blind to others (regulatory capture, bureaucratic sclerosis). The collaboration with Mary Douglas brought anthropological rigor to what had been a political-science intuition. The book Risk and Culture (1982) crystallized the theory and remains the foundational text.
The theory was controversial on publication and remains so — partly because it unsettles all four cultural positions simultaneously by describing each as partial. Its contemporary relevance has been heightened by the AI discourse, which has fragmented along exactly the lines Wildavsky's grid-group typology predicted.
Risk perception is cultural. What a society fears reveals its organizational form, not the objective distribution of dangers.
Four cultural positions. Egalitarian, hierarchist, individualist, and fatalist — each produces a distinctive risk portfolio and policy response.
No neutral vantage. Every framing of AI risk — including ostensibly technical framings — carries cultural assumptions about how society ought to be organized.
Institutional pluralism as remedy. Governance arrangements that incorporate all four perspectives are more resilient than those dominated by any single one.
The discourse is the diagnostic. The fragmentation of AI debate into mutually incomprehensible camps is itself the clearest evidence that cultural theory is operative.
Critics argue that the four-fold typology is too rigid, collapsing complex moral positions into caricatures. Defenders respond that the typology is a diagnostic tool rather than a complete ontology — it identifies the structural positions available, not the full texture of any individual's views. A more substantive debate concerns whether cultural theory undermines the possibility of genuine scientific consensus about risk. Wildavsky's answer was that consensus is achievable when the institutions that produce evidence are themselves pluralistic; the problem arises when a single cultural position captures the evidence-producing apparatus.