The principle entered international law through the 1992 Rio Declaration and has since been incorporated into numerous treaties and regulatory frameworks. Its application to AI remains contested: advocates contend that the scale and potential irreversibility of AI's societal effects warrant precautionary governance, while critics counter that AI does not clearly satisfy Jonas's irreversibility condition and that precautionary regulation risks entrenching incumbent platforms by raising barriers to entry.
The principle's strongest defenders, including philosophers working in the Jonasian tradition, insist that its proper application requires the two-condition test Jonas specified: the feared outcome must be grounded in identifiable mechanisms (not speculative fantasy) and must be irreversible (not merely harmful). Applied with this rigor, the principle targets a narrow class of decisions where standard cost-benefit analysis fails because the worse outcome cannot be corrected by subsequent action.
The principle's intellectual roots lie in German environmental thought of the 1970s, where it was known as the Vorsorgeprinzip, and its development was strongly influenced by Jonas's work on technological responsibility.
Jonas himself did not use the phrase but articulated the underlying logic more rigorously than most subsequent regulatory applications.
Burden of proof inversion. When consequences are potentially irreversible, those taking action bear the burden of demonstrating safety, not those expressing concern.
Two-condition test. Proper application requires both plausibility of the feared outcome and irreversibility of its consequences.
Regulatory operationalization. The EU's regulatory framework, including the AI Act, relies implicitly on precautionary logic, though enforcement and specification remain contested.