The argument is simple to state and devastating in its implications. When Aristotle codified virtue ethics in fourth-century BCE Athens, the horizon of human action was bounded by the reach of the human hand, the range of the human voice, the lifespan of the human body. Consequences were local, immediate, and reversible. This bounded condition was not a historical accident; it was the hidden premise of every ethical system the Western tradition produced. Kantian universalization assumed recognizable social outcomes within recognizable timeframes. Utilitarianism assumed calculable horizons. Social contract theories assumed contemporaneous parties. Modern technology shattered these constraints. Nuclear weapons, industrial chemistry, ecological disruption, genetic engineering, and now artificial intelligence have granted human beings the capacity for consequences that are global in scope, indefinite in duration, and potentially irreversible in effect. The inadequacy of inherited ethics is not a matter of needing new rules for old situations. The situations themselves have changed.
The argument requires precision, because the temptation is to treat every new technology as unprecedented. Most are not. The printing press expanded the reach of speech but did not change the nature of speech. The automobile expanded the range of movement but did not change the nature of movement. Each was a quantitative amplification of an existing human capacity. What Jonas identified is the rarer phenomenon: a technology that changes the kind of action human beings can perform, not merely its speed or scale.
The AI transition documented in The Orange Pill represents a further leap of this kind. The twenty-fold productivity multiplier Segal observed is not merely a gain in speed: it is a restructuring of what individual action can accomplish. A single person, equipped with a natural-language interface to an AI, can produce consequences — working systems, deployed code, products that reach users and reshape markets — that previously required coordinated institutional effort. The imagination-to-artifact ratio has collapsed to the duration of a conversation.
Jonas's framework illuminates the ethical meaning of this collapse. When action was slow, the slowness functioned as ethical governance. The time between conception and execution gave the actor and the actor's community space to evaluate, reconsider, consult, and imagine consequences. Medieval cathedrals required decades of construction, allowing continuous reassessment. Legislative bodies deliberated slowly, allowing affected parties to be heard. The friction was not merely an obstacle. It was a structural feature of a world in which the scope of human power was roughly proportional to the scope of human foresight.
When the imagination-to-artifact ratio approaches zero, the buffer disappears. The structural conditions that made ethical reflection practically possible are eliminated by the same mechanism that makes the action possible. This is not a problem that better ethics education or more thoughtful builders can solve within existing frameworks. The problem is not that builders are thoughtless. The problem is that the temporal architecture of responsible action has been compressed beyond the point at which responsibility, as traditionally conceived, can function.
The thesis is the opening move of The Imperative of Responsibility (1979) and the foundation on which every subsequent element of Jonas's ethics builds. Without the claim that action itself has changed, the demand for new ethical frameworks would be unjustified.
Jonas developed the thesis over two decades of reflection on nuclear weapons, ecological destruction, and genetic engineering. Its application to AI was not anticipated by Jonas himself — he died in 1993 — but the structural pattern he identified fits the AI transition with a precision that unsettles the contemporary reader.
Bounded vs. unbounded action. Pre-modern action was bounded by biology and geography; modern action breaks both boundaries. This is a categorical change, not a quantitative one.
Hidden premises of inherited ethics. Every inherited framework — virtue ethics, deontology, utilitarianism, contract theory — implicitly assumed bounded consequences. The frameworks are not wrong for what they addressed; they are insufficient for what now exists.
Slowness as governance. The historical interval between conception and execution functioned as informal ethical infrastructure. AI's compression of that interval removes the infrastructure without replacing it.
The foresight gap. The power to act has expanded exponentially; the ability to foresee has not. The gap is itself the ethical emergency, widening rather than closing as capability increases.
Some critics argue Jonas overstates the discontinuity — that technological change has always outpaced ethical reflection and human beings have always adapted. Jonas's reply: adaptation presupposes that the consequences of maladaptation are recoverable. When consequences become irreversible at civilizational scale, the historical pattern of belated adjustment cannot be relied upon.