Framework Before the Harm — Orange Pill Wiki
CONCEPT

Framework Before the Harm

Amodei's principle that governance structures for powerful technologies must be built prospectively, before the specific harms they are designed to prevent, because every powerful technology in history has arrived before the frameworks needed to govern it, leaving damage that earlier action could have mitigated.

'Framework before the harm' is Amodei's articulation of the principle that distinguishes the Responsible Scaling Policy from typical technology governance: governance structures must be built prospectively, before the harms they are designed to prevent. Every powerful technology in history has produced its governance frameworks too late. Nuclear energy arrived before the regulatory infrastructure required to govern it. The automobile arrived before traffic laws and seat belts. The internet arrived before privacy law. In each case the technology arrived first, the consequences second, and the governance framework third, too late to prevent damage that earlier action could have mitigated.

In the AI Story


The prospective approach requires a specific kind of discipline — the discipline of imagining risks that have not yet materialized and investing in measures to prevent them before the investment is urgently needed. This is structurally different from reactive governance, which responds to demonstrated harms by implementing measures that would have prevented those harms had they been in place earlier. Reactive governance is easier because the harms are observable and the political will to address them exists; prospective governance is harder because the harms are hypothetical and the political will must be manufactured from foresight rather than suffering.

Amodei's argument is not that prospective governance is easy but that it is necessary for technologies whose potential harms are catastrophic or irreversible. For incremental technologies — technologies whose harms appear gradually and can be corrected through subsequent regulation — reactive governance is adequate. For technologies whose harms could materialize suddenly or whose consequences cannot be reversed once manifest, reactive governance is structurally inadequate. AI falls into the latter category for several of its potential failure modes, particularly those involving autonomous systems, dual-use capabilities, and the concentration of power.

The principle applies at multiple levels. At the level of individual deployments, it means conducting red-teaming and capability evaluations before release rather than after incidents. At the level of corporate policy, it means establishing commitments like the Responsible Scaling Policy that bind the organization's behavior in advance. At the level of industry norms, it means participating in the development of shared safety standards before competitive pressure makes agreement impossible. At the level of government regulation, it means advocating for rules before the political economy of ungoverned deployment becomes entrenched.

Each level of application faces different obstacles. Individual deployments face commercial pressure to ship. Corporate policies face the challenge of being binding against future competitive pressure. Industry norms face free-rider problems. Government regulation faces the pace mismatch between legislative processes and technological development. Amodei's framework does not eliminate these obstacles but proposes that working against them at each level is more productive than waiting for the harms that would force action.

Origin

The principle runs through Amodei's public advocacy since the founding of Anthropic, articulated most clearly in the Responsible Scaling Policy and in his public statements calling for regulation. The specific framing 'framework before the harm' captures an argument that has been central to his institutional thesis from the beginning.

The principle draws on lessons from earlier technology transitions. The institutional responses to industrial risks — labor law, product liability, environmental regulation — all developed reactively, after demonstrated harms produced political pressure for regulation. The time between harm and regulation, measured in years or decades, was the period during which the harms accumulated. Amodei's argument is that AI's potential for rapid and irreversible harms makes this reactive pattern unacceptable.

Key Ideas

Prospective over reactive. Governance must be built before the harms it is designed to prevent, not in response to them.

Pattern of historical failure. Every powerful technology in history has produced governance frameworks too late — nuclear, automotive, internet, social media.

Necessary for catastrophic risks. Prospective governance is structurally required for technologies whose harms could be sudden or irreversible.

Multiple levels of application. The principle operates at individual deployments, corporate policy, industry norms, and government regulation — each facing different obstacles.

Political will from foresight. Prospective governance requires building political will from imagined harms rather than from demonstrated suffering, which is harder but necessary.

Debates & Critiques

Critics argue that prospective governance inevitably over-regulates because the harms being prevented are hypothetical and specific regulations cannot be calibrated to risks that have not yet materialized. Defenders argue that some categories of risk are severe enough to justify the costs of over-regulation, and that the alternative — waiting for demonstrated harms — is not conservative but reckless for technologies capable of catastrophic outcomes. The deeper debate concerns whether AI belongs in the category of technologies requiring prospective governance or in the category where reactive governance is adequate.


Further reading

  1. Anthropic, Responsible Scaling Policy v1.0 (2023)
  2. Amodei, Dario, 60 Minutes Interview (November 2025)
  3. Jasanoff, Sheila, The Ethics of Invention (2016)
  4. Sunstein, Cass, Laws of Fear (2005)
  5. Ord, Toby, The Precipice (2020)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.