The precautionary principle holds that when an action or policy has a suspected risk of causing harm to the public or the environment, and scientific consensus on the risk is absent, the burden of proof falls on those taking the action rather than those opposing it. The principle is most developed in European environmental law and forms a cornerstone of EU regulatory practice, including the framework underlying the EU AI Act. Its philosophical parentage includes Hans Jonas's "heuristics of fear," though regulatory formulations tend to be less rigorous than Jonas's original philosophical framing. Where Jonas specified two conditions, plausibility and irreversibility, regulatory applications sometimes drift toward a broader precaution that critics argue is either incoherent or weaponizable.
The principle entered international law through the 1992 Rio Declaration and has been incorporated into numerous treaties and regulatory frameworks. Its application to AI remains contested: advocates argue that the scale and potential irreversibility of AI's societal effects warrant precautionary governance, while critics argue that AI does not clearly satisfy Jonas's irreversibility condition and that precautionary regulation risks entrenching incumbent platforms by raising barriers to entry.
The principle's strongest defenders, including philosophers working in the Jonasian tradition, insist that its proper application requires the two-condition test Jonas specified: the feared outcome must be grounded in identifiable mechanisms (not speculative fantasy) and must be irreversible (not merely harmful). Applied with this rigor, the principle targets a narrow class of decisions where standard cost-benefit analysis fails because the worse outcome cannot be corrected by subsequent action.
The principle's intellectual roots lie in the German environmental thought of the 1970s (the Vorsorgeprinzip), and its development was strongly influenced by Jonas's work on technological responsibility.
Jonas himself did not use the phrase but articulated the underlying logic more rigorously than most subsequent regulatory applications.
Burden of proof inversion. When consequences are potentially irreversible, those taking action bear the burden of demonstrating safety, not those expressing concern.
Two-condition test. Proper application requires both plausibility of the feared outcome and irreversibility of its consequences.
Regulatory operationalization. The EU's regulatory framework, including the AI Act, relies implicitly on precautionary logic, though enforcement and specification remain contested.