Keynesian uncertainty is the radical kind — distinct from risk, where probabilities can be calculated. 'By uncertain knowledge,' Keynes wrote in 1937, 'I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty... The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.' The distinction demolishes any model that claims to predict genuinely novel outcomes — and exposes the specific blindness of large language models trained on historical frequencies.
Keynes's A Treatise on Probability (1921) developed the philosophical foundation: probability is not a frequency ratio but a logical relation between evidence and conclusion, a measure of the rational degree of belief that evidence warrants in a proposition. This framework rejected the dominant frequency interpretation that treated probability as the limit of repeated trials.
The distinction matters structurally for AI. Large language models are, at foundation, frequency machines. They learn conditional probabilities across billions of parameters and produce predictions whose quality depends on the similarity between current situations and the training distribution. When the situation falls within the distribution, predictions are reliable. When the situation is genuinely novel — when it falls outside the distribution, when the relevant variables have no historical precedent — the model's confidence does not decline proportionally. It continues to produce fluent, confident output, because fluency is a property of the generative mechanism, not of the epistemic warrant.
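The point that confidence is a property of the generative mechanism, not of epistemic warrant, can be made concrete with a toy model. The sketch below is an assumption-laden illustration, not a description of any real LLM: a fixed two-class linear model produces a softmax "confidence" that actually *rises* on an input far outside anything resembling its training scale, because larger logits sharpen the softmax regardless of whether the input was ever seen.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# A toy "trained" linear model: 2 input features -> 2 classes.
# (Hypothetical weights chosen for illustration only.)
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])

in_dist = np.array([0.5, 0.3])     # input at the scale the model "saw"
out_dist = np.array([50.0, -50.0]) # wildly out-of-distribution input

p_in = softmax(W @ in_dist)
p_out = softmax(W @ out_dist)

print(p_in.max())   # ≈ 0.55: modest confidence on the familiar input
print(p_out.max())  # ≈ 1.0: near-certainty on the novel input
```

The mechanism has no term for "I have never seen anything like this"; the arithmetic that produces fluent confidence on familiar inputs produces even more confident output on unfamiliar ones.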
This is confident wrongness at its deepest: not occasional hallucination but structural blindness to the boundary between calculable probability and radical uncertainty. The machine cannot tell us it does not know, because the architecture that produces its facility with the known provides no mechanism for registering the unknown.
The AI transition itself is an instance of radical Keynesian uncertainty. Every forecaster projecting 'forty-seven percent of jobs are at risk' or 'AGI will arrive by 2030' is performing the pseudo-scientific exercise Keynes warned against — dressing extrapolation in the language of probability, lending the numbers authority they have not earned.
Keynes developed the framework in A Treatise on Probability (1921) and elaborated its economic implications in his 1937 Quarterly Journal of Economics article clarifying the General Theory.
Risk versus uncertainty. Risk admits calculable probabilities; uncertainty does not.
Probability as logical relation. The warrant evidence provides for belief, not the frequency of repeated events.
The boundary problem. Economic decisions fall on both sides of the risk-uncertainty boundary, often without warning.
AI's structural blindness. Frequency machines cannot register the shift from calculable to uncalculable.
Judgment under uncertainty. When calculation fails, human judgment is the only available instrument.
The open question: whether radical uncertainty is a permanent feature of economic life (Keynes, Knight, Shackle) or a temporary state that better data and better models can eventually eliminate (the rational expectations tradition).