Hughes borrowed the term from military history, where it denotes a section of an advancing front that has fallen behind, creating vulnerability. In large technical systems, a reverse salient is the component whose lag limits the system's overall performance and attracts concentrated innovative effort. The direction of a system's evolution is determined not by its most advanced component but by its most backward one. When Edison's electrical system had functional generators and distribution networks but lamps that burned out too quickly, the lamp was the reverse salient; Edison's famous filament search was a sustained effort to resolve that bottleneck. Once resolved, the reverse salient migrated: the limited range of DC distribution became the new constraint. Systems advance through successive resolution of reverse salients, each resolution revealing the next bottleneck.
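A toy model makes the dynamic concrete. In the sketch below (a minimal illustration, not anything from Hughes; the component names and numbers are invented for the example), system capability is the minimum of its components' maturity, and innovative effort flows to whichever component is currently that minimum, so the bottleneck migrates as each one is resolved.

```python
# Toy model of reverse-salient dynamics (illustrative; all values invented).
# The system performs only as well as its least-developed component; effort
# concentrates on that component, so the bottleneck migrates once resolved.

components = {"generator": 9.0, "distribution": 7.0, "lamp": 3.0}

def reverse_salient(comps):
    """The component currently constraining the whole system."""
    return min(comps, key=comps.get)

def system_capability(comps):
    """Overall capability is capped by the weakest component."""
    return min(comps.values())

for step in range(6):
    salient = reverse_salient(components)
    print(f"step {step}: capability={system_capability(components):.1f}, "
          f"reverse salient={salient}")
    components[salient] += 2.5  # concentrated innovative effort on the laggard

# The printed salient migrates: lamp, lamp, distribution, lamp, generator, ...
# echoing the lamp-then-DC-range sequence described above.
```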
The concept is diagnostic rather than prescriptive. It identifies where the system's development is actually constrained, directing attention away from components advancing rapidly (which attract the most public attention and investment) toward components whose lag determines the system's trajectory. This redirection is analytically powerful because it reveals that visible frontier advances often matter less than invisible lags in determining what a system can actually accomplish. A frontier AI model demonstrating stunning capabilities is not the bottleneck if organizations cannot absorb it, workers cannot direct it effectively, regulators cannot govern it, or the surrounding culture cannot make sense of it.
Applied to the AI sociotechnical system, reverse-salient analysis produces uncomfortable findings. The technical core—the models—is not the bottleneck. Models have advanced far beyond what users, organizations, and institutions can absorb. The reverse salients are human and institutional: organizational structures designed for pre-AI workflows, workforce skills calibrated to execution rather than direction, regulatory frameworks lagging the technology's capabilities, cultural narratives failing to capture systemic complexity. The Berkeley study documented organizational reverse salients in granular detail: workers expanding job scope without organizational structures adapting to recognize the expansion, producing mismatches between actual work and formal roles.
Reverse salients migrate as each is resolved. If organizational restructuring successfully adapts to AI-augmented workflows, the bottleneck moves to workforce reskilling. If reskilling programs successfully prepare workers to direct AI rather than merely execute alongside it, the bottleneck moves to cultural comprehension of what the transition means. The migration is not a problem to be solved but a structural feature of system development: there is always a reverse salient, because systems are networks of interdependent components operating at different rates of development. The analytical task is identifying which component is currently constraining the system's advance.
The severity of a reverse salient is measured not by the absolute magnitude of the lag but by its systemic consequences. A component that lags slightly but constrains every other component is more severe than a component that lags dramatically but constrains nothing. In the AI system, regulatory lag is less severe than workforce-reskilling lag, because regulatory frameworks can be revised more quickly than workforces can be retrained. The most severe reverse salient is cultural comprehension—the gap between the sociotechnical system forming around AI and the public's mental model of what AI is—because this gap constrains the quality of every decision from individual career choices to national policy.
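One way to make the severity claim precise is a weighted measure: each component's lag multiplied by how many other components it constrains. This is an assumed formalization, not Hughes's, and it captures only the breadth dimension of severity; the lag values and dependence counts below are invented for illustration.

```python
# Toy severity metric (an assumed formalization, not Hughes's own):
# severity = lag x number of components the laggard constrains.
# All values below are invented for illustration.

lags = {"regulation": 4.0, "reskilling": 6.0, "cultural comprehension": 3.0}

# How many other components/decisions each lag constrains (assumed).
constrains = {"regulation": 2, "reskilling": 4, "cultural comprehension": 9}

severity = {name: lags[name] * constrains[name] for name in lags}
for name, s in sorted(severity.items(), key=lambda kv: -kv[1]):
    print(f"{name}: lag={lags[name]}, constrains={constrains[name]}, "
          f"severity={s}")

# Cultural comprehension has the smallest absolute lag but the highest
# severity, because it constrains nearly every other decision in the system.
```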
The concept appears in Hughes's analysis of Edison's filament search and is formalized in Networks of Power. Hughes traced the term to military history—specifically to the Schlieffen Plan and other Great War strategic analyses where reverse salients in advancing fronts created vulnerabilities. The metaphorical transfer from military to technological systems was exact: in both cases, the configuration of the weakest point determines the configuration of the whole, and concentrated effort on strengthening the weak point is the only way to advance the entire front.
Hughes demonstrated reverse-salient dynamics across electrification: when the lamp was resolved, DC's limited range became the reverse salient; when AC solved the range problem, the lack of effective AC motors became the bottleneck; when motors were developed, regulatory uncertainty became the constraint. Each resolution shifted the bottleneck rather than eliminating it, producing a sequence of innovations that looked from the outside like independent breakthroughs but were from the inside successive responses to migrating systemic constraints.
Not the frontier. The most advanced component is not the bottleneck—the system's trajectory is determined by the component that lags behind and constrains overall advance.
Migration pattern. Reverse salients migrate as each is resolved—the bottleneck moves to the next-weakest component, producing a sequence of innovations addressing successive systemic constraints.
AI's human bottlenecks. In the current AI system, reverse salients are organizational absorption, workforce reskilling, regulatory capacity, and cultural comprehension—all human/institutional rather than technical.
Severity by consequence. A reverse salient's severity is measured by its systemic effects, not its absolute lag—cultural-comprehension lag is the most severe because it constrains the quality of every other decision.
Leverage points. Reverse salients are where intervention has maximum leverage—addressing the actual constraint rather than optimizing components that are not bottlenecks produces systemic advance.