In the final decades of his life, Prigogine made an argument so radical that many physicists refused to accept it. The argument was not about chemistry or dissipative structures. It was about physical law itself. In The End of Certainty (1997), Prigogine proposed that the fundamental equations of physics must be reformulated to incorporate irreversibility and probability at their most basic level. On this view, Laplacean determinism fails not merely in practice but in principle: the information that would determine the future of complex systems does not exist, because the future of far-from-equilibrium systems at bifurcation points is genuinely undetermined at the level of physical law.
If Prigogine is right, the future is not hidden. It is not yet formed. The events that will determine the trajectory of far-from-equilibrium systems at their bifurcation points have not yet occurred, and no amount of data about the present state can substitute for the events themselves. The fluctuation that will tip the system at the next bifurcation is not a piece of information waiting to be discovered. It is a piece of reality waiting to be created.
The relevance to AI is immediate. The entire infrastructure of artificial intelligence is built on the premise that prediction is the highest cognitive achievement. Language models predict the next token. Recommendation systems predict preferences. Financial models predict markets. The implicit worldview is Laplacean: given sufficient data and computational power, the future can be known. The question is only whether the models are sophisticated enough.
Prigogine's framework challenges this at its foundation. In far-from-equilibrium systems near bifurcation — and the sociotechnical system of civilization in the AI age is such a system — prediction fails not because models are insufficient but because the system's future is not a function of its present state. The fluctuation that determines the outcome is not a piece of present state the model missed. It is a future event that has not happened and cannot be anticipated.
The confident predictions dominating AI discourse — ninety percent AI-written code within months, AGI within years, the obsolescence of entire professions within a decade — are extrapolations. They take the current trajectory and extend it forward as if the system were near equilibrium. But the system is not near equilibrium; it is near bifurcation, where extrapolation fails. A forecast may still happen to match the outcome, but the match would be luck rather than foresight, because the path depends on bifurcations not yet reached and fluctuations that have not yet occurred.
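The failure mode can be made concrete with a toy model. The sketch below (an illustration, not anything from Prigogine's own mathematics) integrates a noisy supercritical pitchfork, dx = (rx − x³)dt + σ dW, starting exactly on the unstable fixed point at x = 0. Every run has the identical present state and identical dynamics; which stable branch it lands on is decided entirely by a noise sequence that has not yet happened at t = 0. All numerical parameters are arbitrary choices for the demonstration.

```python
import math
import random

def run(seed, r=1.0, dt=0.01, steps=5000, noise=0.05):
    """Euler-Maruyama integration of dx = (r*x - x**3) dt + noise dW.

    The deterministic part has an unstable fixed point at x = 0 and two
    stable branches at x = +sqrt(r) and x = -sqrt(r): a supercritical
    pitchfork bifurcation.
    """
    rng = random.Random(seed)
    x = 0.0  # the "present state" -- identical for every run
    for _ in range(steps):
        x += (r * x - x**3) * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# Ten runs with the same initial condition and the same equation; only
# the fluctuation sequence differs. Each settles near +1 or -1, and no
# inspection of the state at t = 0 could have said which.
outcomes = [run(seed) for seed in range(10)]
print([round(x, 2) for x in outcomes])
```

No measurement of the initial condition, however precise, distinguishes a run headed for +1 from one headed for −1; the distinguishing event lies in the future, which is the point of the analogy.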
The practical consequence is not paralysis. It is the transformation of planning from prediction to preparation. The appropriate response to genuine indeterminacy is the construction of structures robust across multiple possible futures — dams that hold across many currents rather than channels optimized for a single predicted flow. This is stewardship translated into thermodynamic terms: the steward does not predict; she prepares. She builds structures that are robust rather than optimal, maintained through continuous attention, responsive to fluctuations no model can predict.
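The distinction between optimizing for a predicted future and preparing for several can be stated in decision-theoretic miniature. The payoff numbers below are entirely hypothetical; the point is the change of criterion: prediction-based planning maximizes the outcome in the forecast scenario, while preparation (here, a simple maximin rule) maximizes the worst case across futures to which no reliable probabilities can be assigned.

```python
# Hypothetical payoff table: rows are plans, columns are possible
# futures. Near a bifurcation, the columns' probabilities are unknown.
payoffs = {
    "optimized": {"predicted": 100, "future_b": -40, "future_c": -60},
    "robust":    {"predicted": 55,  "future_b": 45,  "future_c": 40},
}

def worst_case(plan):
    """The plan's payoff in its least favorable future."""
    return min(payoffs[plan].values())

# Prediction-based planning: pick the plan that does best in the one
# forecast future. Preparation: pick the plan whose worst case is best.
by_prediction = max(payoffs, key=lambda p: payoffs[p]["predicted"])
by_maximin = max(payoffs, key=worst_case)
print(by_prediction, by_maximin)  # prints: optimized robust
```

The two criteria select different plans, which is the steward's argument in one line: when the forecast cannot be trusted, the plan that merely survives every current beats the plan that excels in one.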
Prigogine's argument against certainty developed across his career but crystallized in The End of Certainty (1997), which advanced his most radical proposal: a reformulation of fundamental physics incorporating irreversibility and probability at the microscopic level, through Poincaré resonances in unstable dynamical systems. The philosophical position drew on Bergson's philosophy of duration and Whitehead's process philosophy, but grounded them in mathematical physics.
The book was controversial in physics and remains so. The mainstream has not accepted Prigogine's reformulation of fundamental dynamics, though his contributions to non-equilibrium thermodynamics are universally acknowledged. The philosophical argument — that the future is genuinely open — has traveled more successfully than the technical proposal.
Certainty is structurally impossible. In far-from-equilibrium systems near bifurcation, the information that would determine outcomes does not yet exist.
Prediction is regime-dependent. Near equilibrium, extrapolation works; near bifurcation, it fails not due to ignorance but due to ontological openness.
AI predictions are extrapolations. Confident forecasts about the technology's trajectory apply near-equilibrium reasoning to a system near bifurcation.
Preparation replaces prediction. The appropriate response is robustness across possible futures, not optimization for a specific predicted one.
The end of certainty is hope, not despair. If the future were determined, choice would be illusory; indeterminacy makes stewardship meaningful.
The technical claim — that fundamental physics itself must be reformulated — remains contested within physics. The philosophical claim — that the future of complex systems is genuinely open — has broader acceptance, though critics argue that irreducible indeterminacy in fundamental law is not required to support planning for multiple scenarios. One can acknowledge practical unpredictability without committing to Prigogine's stronger metaphysical position.