Dual-Use Technology — Orange Pill Wiki
CONCEPT

Dual-Use Technology

The structural property shared by every powerful general-purpose technology — including satellites, nuclear physics, GPS, and modern AI — that the same capability serves civilian and military ends, and the two uses cannot be separated by the technology itself.

Dual-use denotes technologies whose design does not pick out a civilian or a military application; the same artifact, built the same way, serves either. The classic cases are atomic energy (reactors and weapons), satellite reconnaissance (weather and surveillance), and GPS (navigation and munitions guidance). AI is the dual-use technology of the present moment. The same language-model capability that summarizes legal filings generates phishing emails; the same image-generation capability that makes marketing assets makes non-consensual deepfakes; the same autonomous-planning capability that runs a logistics pipeline runs a reconnaissance mission. Dual-use is not a property to be eliminated; it is a property to be governed.

In the AI Story

Dual-use technology
One artifact, two uses.

Clarke's 1945 Wireless World paper, Extra-Terrestrial Relays, is the cleanest illustration of what dual-use does to forecasting. The paper proposed geostationary relays for civilian communications. Within twenty-five years the Clarke orbit hosted military early-warning satellites (the DSP constellation) and signals-intelligence platforms, while navigation constellations in lower orbits came to enable both civil aviation and precision strike. The infrastructure that delivered long-distance television also delivered real-time battlefield intelligence. Clarke himself observed in later essays that he had not foreseen military uses dominating the technology's early years; the civilian case he was proposing required an economic justification, and military budgets supplied it. The pattern has been consistent across dual-use technologies since: the military case pays for the early infrastructure, the civilian case monetizes the later diffusion.

For AI, the dual-use character is visible at every capability frontier. Large language models trained for helpfulness can be steered to produce operational guidance on cyberattacks, biological-weapon synthesis, and disinformation at scale. The same models, deployed with appropriate safety training and oversight, handle medical documentation, accessibility tools, and educational tutoring. The labs that build frontier models run extensive safety testing precisely because they know the capability is dual-use; the US government's 2023 executive order on AI, and the subsequent preparedness frameworks from Anthropic, OpenAI, and Google DeepMind, are attempts to make that testing legible to external parties and to commit the labs to capability thresholds that trigger additional controls.

The governance debate centers on what to do about dual-use capability at the frontier. Export controls on advanced chips (the October 2022 and subsequent US rules) attempt to keep the training compute needed for frontier models inside a specific set of jurisdictions. Capability evaluations — by organizations such as Apollo Research, METR, and the UK AI Security Institute (AISI) — attempt to measure what frontier models can do on dangerous tasks before deployment. Release policies — which weights are public, which API-only, which undeployed — attempt to manage the risk-benefit tradeoff for each major release. None of these tools is new: export controls on nuclear technology, capability evaluations of biological agents (the BWC review processes), and release policies in vulnerability research are all precedents. What is new is the speed and the breadth of the capability space.

The deepest governance question is the one the Baruch Plan raised for nuclear technology in 1946: can a dual-use technology at scale be contained by any combination of unilateral restraint, bilateral treaty, and technical safeguard? The nuclear case gives a cautious answer: containment is possible, treaties can be negotiated, proliferation can be slowed, but the technology itself cannot be uninvented. The AI case is in the early period analogous to 1946–1950: the capability is demonstrated, the governance vocabulary is being constructed, and the question of whether the international regime will be one of competition, coordination, or coercion is genuinely open. Clarke's optimism about space technology as a force for international cooperation was not vindicated in the Cold War; his optimism about the civilian case outlasting the military case was.

Origin

The term "dual-use" in its modern sense emerged from export-control debates in the late Cold War (the Wassenaar Arrangement's 1996 founding formalized the concept). It was applied to biotech in the post-9/11 period (Fink Report 2004, Lemon-Relman Report 2006) and to cyber capabilities in the 2010s. Its application to AI was prefigured by work on autonomous weapons (Arkin 2009, Sharkey 2011) and became a central frame for frontier-model policy after 2022.

Key Ideas

Dual-use is intrinsic, not bolted on. Restrictions must govern access and deployment because the capability itself cannot be made single-use.

Military funding typically leads civilian deployment. The infrastructure comes online for national-security reasons and is then monetized for consumer applications.

Governance borrows from nuclear and biological precedent. Export controls, capability evaluations, and release policies are the standard toolkit.

The frontier is where stakes concentrate. Dual-use concerns bite most sharply at the capability frontier; broadly diffused capabilities are usually past the point of useful restriction.

Appears in the Orange Pill Cycle

Further reading

  1. Clarke, Arthur C. Extra-Terrestrial Relays. Wireless World (1945).
  2. Baruch, Bernard. The Baruch Plan (1946).
  3. Fink Report. Biotechnology Research in an Age of Terrorism. National Research Council (2004).
  4. Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of AI (2023).
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.