The resilience strategy is the operational complement to Wildavsky's critique of the precautionary principle. Where anticipation tries to identify and prevent harms before they occur, resilience builds the institutional capacity to absorb, detect, and correct harms as they emerge. The strategy rests on four operational principles: rapid deployment to generate information, transparent observation to detect problems, distributed feedback mechanisms to surface diverse perspectives on what counts as harm, and institutional arrangements that correct quickly when errors are identified. Applied to AI, the strategy inverts most current governance proposals, which assume anticipation is the primary tool and resilience is a backstop.
Resilience is not a passive property — it is an active institutional achievement. A resilient system has fast feedback loops, distributed decision rights, redundant capacity, and the cultural tolerance for error that allows learning to occur. Each of these must be deliberately constructed and continuously maintained. Wildavsky emphasized this construction work because the instinctive regulatory response is to build the opposite: slow processes, centralized authority, single-point-of-failure designs, and zero-tolerance cultures. The resilient institution is counterintuitive, which is why most institutions are not resilient.
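The feedback-speed claim can be made concrete with a toy model. The sketch below is illustrative only, not anything from Wildavsky: it assumes errors arrive at a steady rate and that each one does a unit of harm per period until the feedback loop catches it, so total damage scales with detection lag rather than with the number of errors.

```python
# Illustrative toy model: cumulative harm as a function of feedback-loop speed.
# Assumptions (not from the source): errors arrive at a fixed rate, each error
# causes 1 unit of harm per period until detected, and detection takes `lag` periods.

def cumulative_harm(periods: int, error_rate: float, lag: int) -> float:
    """Total harm over a horizon when every error persists for `lag` periods."""
    errors = periods * error_rate   # expected number of errors over the horizon
    return errors * lag             # each error does `lag` units of harm

for lag in (1, 5, 25):
    print(f"detection lag {lag:>2} periods -> total harm {cumulative_harm(100, 0.3, lag):.0f}")

# Harm scales linearly with lag: halving detection time halves total damage,
# which is why fast feedback loops matter more than a low initial error count.
```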
The Trivandrum training described in the Orange Pill is a compressed case study in resilience. Twenty engineers deployed the tool rapidly, observed what worked and what broke, adjusted continuously, and produced in a week what a precautionary regime would have spent months merely evaluating. The engineers did not anticipate the transformation; they built the capacity to respond to it. This is the resilience pattern at the individual and organizational scale, and it generalizes.
At the societal scale, resilience requires institutional structures that most democratic polities have not built. Fast feedback at national scale demands information infrastructure that existing regulatory agencies do not possess. Distributed correction requires that errors discovered by one party be usable by all parties, which presupposes transparency regimes that the frontier AI firms currently resist. Redundant capacity means multiple independent governance bodies working on overlapping problems, which is inefficient and therefore resisted by efficiency-minded designers. These are design problems, not insoluble ones, but they call for the deliberate construction work for which, Wildavsky insisted, precautionary rhetoric is no substitute.
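The transparency requirement is, concretely, a data-sharing problem: a harm found by one deployer is useful to others only if it travels in a common machine-readable form. The sketch below is a hypothetical incident-record schema; every field name is an illustrative assumption, not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical schema for a shared AI incident record, so that a harm
# discovered by one deployer is actionable by every other deployer.
# All field names here are illustrative assumptions, not an existing standard.

@dataclass
class IncidentReport:
    system: str                 # model or product identifier
    observed_at: datetime       # when the harm was detected
    description: str            # what went wrong, in plain language
    severity: int               # e.g. 1 (minor) .. 5 (irreversible)
    reproduction: str           # conditions under which the harm recurs
    mitigations: list[str] = field(default_factory=list)  # fixes that worked

report = IncidentReport(
    system="example-model-v2",
    observed_at=datetime(2024, 5, 1),
    description="Tool emits confident but fabricated citations in legal queries.",
    severity=3,
    reproduction="Prompts requesting case law without retrieval enabled.",
    mitigations=["require retrieval grounding", "flag uncited claims"],
)
```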
The critical question for AI governance is whether the timelines of capability change are compatible with the timelines of resilient institutional construction. If capabilities change faster than institutions can learn, resilience fails, and some version of precaution becomes necessary even at its own enormous cost. Wildavsky's optimism rested on the empirical claim that this timeline mismatch is rarer than feared — that institutions can adapt faster than they usually do, given the right incentives. The AI transition is an unusually hard test of this claim, and the verdict is not yet in.
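The timeline question can be put as a crude rate comparison. As a back-of-envelope sketch, using my framing rather than Wildavsky's: if a new class of error appears with each significant capability shift, and institutions retire one class per correction cycle, resilience holds only while the correction cycle is the shorter interval.

```python
# Toy backlog model: new error classes appear once per capability shift and are
# retired once per institutional correction cycle. A growing backlog means the
# timeline mismatch is real and resilience is losing ground.
# The rates are assumptions for illustration, not estimates from the source.

def error_backlog(months: int, capability_shift_months: float,
                  correction_cycle_months: float) -> float:
    new = months / capability_shift_months     # error classes introduced
    fixed = months / correction_cycle_months   # error classes corrected
    return max(0.0, new - fixed)

print(error_backlog(60, capability_shift_months=12, correction_cycle_months=6))   # 0.0: institutions keep up
print(error_backlog(60, capability_shift_months=6, correction_cycle_months=24))   # 7.5: resilience failing
```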
Resilience, in the sense Wildavsky used it, draws on ecological theory, particularly C.S. Holling's work on ecosystem dynamics in the 1970s. Wildavsky adapted the ecological framework to political and regulatory contexts in Searching for Safety (1988).
The strategy has since been extended into engineering (resilience engineering), public health (pandemic preparedness), and cybersecurity, where the recognition that perfect prevention is impossible has driven attention to recovery capacity. The AI governance discourse is now catching up to what these other domains learned decades ago.
Safety is capacity, not prediction. Resilient systems succeed because they can respond to what happens, not because they predicted it.
Rapid deployment generates information. Use is the only reliable source of harm-relevant data.
Transparency enables detection. Harms discovered by one party must be available to all for correction to occur at scale.
Distributed correction. Multiple independent response paths are more reliable than a single authoritative one; the arithmetic is sketched after these principles.
Error tolerance as cultural precondition. Organizations that punish errors cannot learn from them, and therefore cannot achieve resilience.
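The distributed-correction principle rests on simple reliability arithmetic, sketched below under the strong assumption that response paths fail independently: with n paths each failing with probability p, all of them fail together with probability p^n.

```python
# Reliability arithmetic behind distributed correction, assuming (strongly)
# that response paths fail independently of one another.

def prob_all_paths_fail(p_single_failure: float, n_paths: int) -> float:
    """Probability that every independent correction path misses the harm."""
    return p_single_failure ** n_paths

for n in (1, 2, 4):
    print(f"{n} path(s), each 30% unreliable -> system failure {prob_all_paths_fail(0.3, n):.4f}")

# 1 path: 0.3000; 2 paths: 0.0900; 4 paths: 0.0081. Redundancy buys reliability
# fast, but only while the independence assumption holds; correlated failures
# (shared blind spots, shared funding) erode the benefit.
```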
The sharpest challenge to resilience strategies concerns irreversible harms, where there is no recovery from the first error. Critics argue that AI has the potential to produce such harms, making resilience inadequate. Defenders argue that the space of irreversible AI harms is narrower than often claimed, and that the precautionary alternative is itself likely to produce catastrophic failures of a different kind — regulatory capture, innovation suppression, concentration of power in incumbent actors.