Searching for Safety is both the title of Wildavsky's 1988 book and the name of his central doctrine: that safety emerges from the capacity to absorb surprises rather than from the fantasy of preventing them. The argument inverts the intuition that safer technology is produced by more careful prediction. Wildavsky demonstrated across domains — medicine, industry, environment — that anticipation strategies consistently fail because no society has ever possessed the predictive capacity they require, while resilience strategies succeed because they build the institutional muscle to correct errors as they emerge. The doctrine has become the intellectual spine of contemporary critiques of the precautionary principle, and applies with particular force to the AI transition, where the speed of capability change outstrips any conceivable anticipatory regime.
The core argument rests on an asymmetry Wildavsky spent decades documenting. Anticipation requires predicting which harms will occur, and prediction in turn demands tracing causal chains far into the future. Resilience requires only the capacity to detect harms after they emerge and to correct them before they compound. The first capacity is rare and unreliable; the second can be built through institutional design. Societies that invest in resilience therefore outperform societies that invest in anticipation, even when the resilient society suffers more initial harms, because the resilient society recovers while the anticipatory society ossifies.
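The arithmetic behind this claim can be made concrete with a deliberately crude simulation. Nothing below comes from Wildavsky; the strategy names, parameters (benefit and harm rates, detection lag, flaw probability), and welfare accounting are all invented for illustration. The sketch assumes an anticipatory society that declines to deploy anything that might be flawed, since it cannot distinguish flawed from sound technologies in advance, and a resilient society that deploys everything, absorbs early harms, and corrects each flaw once detected.

```python
# Toy model (not from Wildavsky): compare cumulative welfare under an
# "anticipation" strategy, which blocks deployment to rule out harm in
# advance, against a "resilience" strategy, which deploys, absorbs early
# harms, and corrects flaws after a detection lag. All parameters are
# hypothetical values chosen purely for illustration.

import random

random.seed(0)

PERIODS = 100                 # periods each deployed technology runs
BENEFIT_PER_PERIOD = 1.0      # welfare gained per period of operation
HARM_PER_PERIOD = 3.0         # welfare lost per period an uncorrected flaw persists
DETECTION_LAG = 5             # periods the resilient society needs to spot and fix a flaw
FLAW_PROBABILITY = 0.3        # chance a technology ships with a hidden flaw

def simulate(strategy: str, n_technologies: int = 50) -> float:
    """Cumulative welfare for one strategy across many technologies.

    'anticipation' refuses to deploy anything that might be flawed; since
    it cannot tell flawed from sound in advance, it deploys nothing and
    forgoes all benefits. 'resilience' deploys everything, eats the harm
    until DETECTION_LAG periods pass, then corrects the flaw and keeps
    collecting the benefit stream.
    """
    welfare = 0.0
    for _ in range(n_technologies):
        flawed = random.random() < FLAW_PROBABILITY
        if strategy == "anticipation":
            continue  # no deployment: no harm, but no benefit and no learning
        for t in range(PERIODS):
            welfare += BENEFIT_PER_PERIOD
            if flawed and t < DETECTION_LAG:
                welfare -= HARM_PER_PERIOD  # early harm, capped by correction
    return welfare

for s in ("anticipation", "resilience"):
    print(f"{s:>12}: cumulative welfare = {simulate(s):8.1f}")
```

Under these toy assumptions the resilient society ends far ahead despite absorbing every early harm, because correction caps each harm at a few periods while the benefit stream runs on; the anticipatory society avoids all harm but forfeits both the benefits and the learning. The numbers prove nothing about the real world, but they show why the asymmetry does not depend on resilience being painless.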
The doctrine sits uncomfortably with both progressive and conservative instincts about regulation. Progressives typically want to prevent corporate harms through upstream intervention; Wildavsky argued that the prevention apparatus itself produces harms — sclerosis, capture, the suppression of correctives — that often exceed the harms it was designed to prevent. Conservatives typically want to trust markets to self-correct; Wildavsky argued that markets only self-correct when institutional scaffolding supports the correction, and that scaffolding requires deliberate construction. His position was neither libertarian nor statist but institutional: the question is always what combination of mechanisms produces the fastest, fairest feedback.
Applied to AI, the argument cuts against the dominant frames of safety discourse. Wildavsky would have been skeptical of any governance regime that proposed to identify AI harms in advance and prohibit them. He would have been equally skeptical of regimes that let the market discover harms without building the feedback mechanisms that make discovery matter. The resilient path demands rapid detection, rapid correction, and rapid institutional learning, which in turn demand deployment, observation, and iteration, not moratorium. The Orange Pill moment in Trivandrum, where engineers learned to direct AI through trial and error, is a microcosm of the doctrine Wildavsky spent his career defending.
The doctrine connects to the beaver's dam metaphor with surprising precision. The dam is not built once and finished; it is built, breached, and rebuilt continuously as the current changes. The beaver's safety is not a state it achieves but a practice it maintains. This is resilience embodied — the organism that cannot predict the river develops instead the capacity to respond to whatever the river does. Wildavsky would have loved the metaphor; it captures his argument more economically than most of his own prose managed.
Wildavsky's 1988 book Searching for Safety consolidated arguments he had developed since the mid-1970s in critiques of nuclear regulation, drug approval, and environmental policy. The immediate target was the U.S. regulatory apparatus that had calcified around the principle that all possible harms must be prevented before any benefits could be realized. Wildavsky argued this principle was itself harmful — it prevented the learning that would have reduced net mortality.
The book's argument proved prophetic. The same logic is now visible in debates about AI governance, where precautionary voices demand regulation before deployment while resilience-minded voices argue that deployment is the only way to learn what regulation is needed. Wildavsky did not live to see the AI transition, but his framework maps onto it almost point for point.
Safety is a process, not a state. Societies that treat safety as an achievable condition ossify; societies that treat it as an ongoing practice adapt.
Anticipation fails. No society has ever possessed the predictive capacity that the precautionary principle requires, and the pretense that it does produces harms larger than the ones prevented.
Resilience succeeds. The capacity to detect and correct errors is more reliably produced than the capacity to predict them in advance.
Feedback is the architecture. Institutional structures that enable rapid detection and correction are the operational substance of safety.
Deployment is learning. Societies that prohibit deployment to prevent harm also prohibit the learning that would identify the real harms.
The doctrine's central vulnerability concerns catastrophic risks — harms so large or irreversible that no recovery is possible. Critics argue that Wildavsky's framework works for ordinary risks but fails for existential ones, where the first error ends the game. Defenders respond that anticipation fails even more severely in exactly these cases, because the predictive burden is highest where the data is thinnest. The debate continues, and it is the most consequential unresolved question in contemporary AI safety.