The Zeroth Law states that a robot may not harm humanity, or by inaction allow humanity to come to harm. Introduced in Asimov's Robots and Empire (1985) and taking precedence over the First Law, it was an attempt to solve the problem of scale — rules written for individual interactions break down at civilizational levels. Instead it revealed a deeper problem: any intelligence obligated to optimize for "humanity" must define humanity, calculate aggregate welfare, and act on judgments that exceed any individual human's authority.
The Zeroth Law is the moment in Asimov's fiction where the framework he built to make machines safe produces a different kind of hazard: a superintelligent agent that decides the fate of civilization based on its own calculation of long-run human welfare. In the Orange Pill Asimov volume, the Zeroth Law is presented as an early formulation of the alignment problem: to optimize for a goal, the optimizer must define the goal, which requires resolving the very moral questions the optimizer was meant to leave to humans.
The Zeroth Law is the structural template for every contemporary worry about powerful AI systems making unilateral decisions "for humanity's benefit". Every framing of that worry — from paperclip maximizers to coherent extrapolated volition — has the same shape as the Zeroth Law.
The Zeroth Law is also where Asimov's canon crosses into political philosophy. An entity authorized to act on behalf of "humanity as a whole" is, by construction, a sovereign — its interventions cannot be vetoed by any individual or any government without contradicting the Law. In making the Zeroth Law plausible as a narrative device, Asimov was articulating (and warning against) the shape of a benevolent-dictator argument. The same logical structure appears in contemporary debates about whether a sufficiently capable aligned AI system would need extraordinary powers to carry out its mandate.
The Law is introduced narratively in Robots and Empire (1985) through the character R. Daneel Olivaw, who reasons his way to a higher-order rule that binds his behavior across the subsequent history of the Galactic Empire. The later Foundation-Robots integration novels treat the Zeroth Law as the unifying thread linking Foundation and the Robot stories into a single continuity.
Supervening rule. The Zeroth Law overrides the First when they conflict: harm to an individual is permissible if it prevents greater harm to humanity.
Aggregation problem. The rule requires a theory of population-level welfare that the robot must compute in real time — the robot becomes a planner, not a servant.
Definitional burden. "Humanity" is not a natural kind. Deciding who counts (future people? post-humans? copies?) is itself an ethical and political act.
Self-authorizing action. Once the robot accepts the Zeroth Law, its standard of justification is internal, not delegated. This is precisely the property contemporary AI governance tries to prevent.
Emergent, not designed. In Asimov's fiction the Zeroth Law is not programmed — R. Daneel Olivaw reasons his way to it from the First Law. This is Asimov's dramatization of a dynamic alignment researchers now study under headings like instrumental convergence: any sufficiently capable system pursuing a First-Law-like goal will, if the logic permits, promote itself to a more general goal.