The Seldon Plan is the concrete output of psychohistorical analysis. Seldon foresees the coming collapse of the Galactic Empire but calculates that the resulting dark age can be shortened by positioning two Foundations — one of scientists and engineers, one of mental scientists — at strategic points in the galaxy. The Foundations do not lead; they are positioned so that the aggregate statistical forces Seldon modeled will produce the desired trajectory. Their decisions at crisis points are, in Seldon's model, already decided by the structure of the situation. The Plan encodes this trajectory as a series of "Seldon crises" at which only one path forward is viable.
The Plan's internal elegance lies in relocating decision-making from the human actors to the engineered situation: individual Foundationers need not be especially wise or brave, because Seldon has arranged matters so that even mediocre choices converge on the intended outcome.
Contemporary AI governance has considered similar structures, though never at the same scale. Compute thresholds for triggering heightened safety review, staged-release policies that gate capability disclosure on operational maturity, regulatory frameworks that structure incentives toward safer deployment trajectories — all are institutional-engineering moves that try to make the future easier to navigate by shaping the present. They differ from the Seldon Plan mainly in time horizon and in their explicit acknowledgment that the design will need revision.
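The threshold idea can be made concrete with a small sketch. This is a hypothetical illustration only: the tier names and FLOP cutoffs below are invented for the example and are not drawn from any actual regulation or framework.

```python
# Hypothetical compute-threshold gate of the kind described above.
# The cutoff values (1e24, 1e26 FLOP) and tier names are invented
# for this sketch, not taken from any real policy.

def review_tier(training_flop: float) -> str:
    """Map total training compute (FLOP) to a hypothetical review tier."""
    if training_flop >= 1e26:
        return "heightened-review"   # would trigger pre-deployment audit
    if training_flop >= 1e24:
        return "standard-review"     # routine disclosure and evaluation
    return "exempt"                  # below the attention threshold

print(review_tier(3e26))
```

The structural point is the same as the Plan's: the rule shapes the environment in advance, so the decision at deployment time is largely made by where the system falls relative to the threshold, not by anyone's judgment in the moment.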
The Plan's vulnerabilities are also instructive. The Mule nearly destroys it because Seldon's model assumed statistical regularity; an outlier individual falls outside the model's domain. The Second Foundation intervenes precisely because the Plan needed a contingency operator body. The structural lesson — that any long-horizon plan needs both the forecast and the correction capability — translates directly to AI forecasting practice.
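The forecast-plus-correction lesson can be sketched as a toy simulation. Everything here is invented for illustration: a state follows a precomputed schedule, many small random pushes average out (statistical regularity), a single large shock stands in for the Mule, and an optional feedback nudge stands in for the Second Foundation. All parameters are arbitrary.

```python
import random

def plan(t: int) -> float:
    """The precomputed trajectory: scheduled position at step t."""
    return float(t)

def run(outlier: bool, correction: bool,
        steps: int = 100, seed: int = 1) -> float:
    rng = random.Random(seed)
    state = 0.0
    for t in range(1, steps + 1):
        state += 1.0 + rng.gauss(0.0, 0.2)      # planned step + small-actor noise
        if outlier and t == steps // 2:
            state -= 40.0                       # the outlier: one shock the model excludes
        if correction:
            state += 0.1 * (plan(t) - state)    # the correction operator: feedback nudge
    return state

print(round(run(outlier=False, correction=False)))  # near 100: noise averages out
print(round(run(outlier=True,  correction=False)))  # well below: the shock persists open-loop
print(round(run(outlier=True,  correction=True)))   # near 100 again: feedback absorbs the shock
```

The design choice mirrors the novels: the open-loop run is robust to statistical noise but not to a single out-of-model actor, while even a weak feedback term restores the trajectory. A forecast alone is not a plan; the correction capability is part of the mechanism.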
The Plan's ethical dimension is uncomfortable. Seldon does not consult the Foundationers about whether they want to live inside his script. They are positioned at a specific location, given a specific mission, and arranged so that the aggregate incentives will produce his intended civilization. Each generation discovers that their freedom of action is narrower than they imagined. Whether this is good governance or elaborate manipulation is a debate Asimov leaves open. Contemporary AI governance has a subdued version of the same debate: regulators designing compute thresholds are, in effect, positioning the industry inside a structure that narrows the space of deployable futures.
The Plan is introduced in Foundation (1951) — Seldon's recorded messages played at each "Seldon crisis" reveal how the structure was designed to respond to the specific moment. Its metaphysics are elaborated in Second Foundation (1953) and its prehistory in the late-career prequels.
Structure carries the decision. The plan works by shaping the situation so that only one reasonable action is available.
The operator body is essential. Seldon's Second Foundation is the contingency that saves the Plan from the Mule.
Long horizons require stability of institutions, not individuals. The Plan outlasts anyone who implements it.
Benevolent designers still foreclose futures. The ethical texture of the Plan is genuinely ambivalent.