War and crisis distort the historical record of technology in systematic ways. The atomic bomb received more intellectual attention than any technology in history; the Kalashnikov rifle killed more people; the machete killed nearly a million in the Rwandan genocide. The crisis frame directs attention toward the dramatic weapon and away from the mundane one, and the allocation of attention has consequences: investment in nuclear nonproliferation dwarfed investment in small-arms control, even though small arms caused more deaths by orders of magnitude. Edgerton has documented this misdirection across the entire history of military technology and extended the analysis to the AI moment, where the same dynamic distorts attention toward existential risk and dramatic capability claims and away from the mundane, already-occurring effects of AI deployment.
There is a parallel reading that begins not with the distortion of attention but with the material conditions that make crisis frames inevitable. The atomic bomb did receive disproportionate intellectual attention — but it also required the Manhattan Project, which reorganized American science, created the national laboratory system, and established the template for every subsequent large-scale technological mobilization including the current AI buildout. The crisis frame is not merely a narrative overlay; it is the organizing principle that makes certain technological developments possible at all. Without the urgency of existential competition, there would be no NVIDIA H100 clusters, no hundred-billion-dollar training runs, no talent concentration in frontier labs.
The mundane deployment that Edgerton champions depends entirely on the infrastructure created by crisis mobilization. The Kalashnikov killed millions, but its production required the Soviet military-industrial complex built in response to existential threat. Today's AI deployment — the chatbots, the coding assistants, the recommendation systems — runs on hardware designed for the arms race, uses models developed at the frontier, and reaches users through platforms built with venture capital betting on winner-take-all dynamics. The crisis frame does not distort technological history; it creates the conditions for technological history to occur. The everyday uses are downstream of extreme development. The machete was an agricultural tool before it was a weapon, but the semiconductor was a military technology before it was a consumer one. In AI, the pattern holds: the extreme scenario drives the investment that creates the infrastructure that enables the mundane application. Remove the crisis, and you remove not just the narrative but the material basis for the technology itself.
The dominant frames for discussing artificial intelligence in 2025–2026 are crisis frames. Existential risk: AI might destroy humanity. Job displacement: AI will eliminate millions of jobs. Civilizational transformation: AI will reorganize every institution within a generation. Arms race: the nation that leads in AI will dominate the twenty-first century. Each frame concentrates attention on the extreme scenario — the worst case, the best case, the case that produces the most dramatic narrative — and renders invisible the ordinary deployment that will affect more people than any extreme scenario.
The crisis frame is structurally distinct from the use-centered frame even when both describe the same underlying reality. The SaaSpocalypse is a crisis narrative; the underlying event is a market repricing — significant, disruptive, painful for those inside it, but not an apocalypse. The jobs discourse is a crisis frame; the underlying reality is more ambiguous — some jobs eliminated, some restructured, some new jobs created, many existing jobs intensified rather than displaced. The ambiguity is not dramatic enough for the crisis frame, which requires clarity: either catastrophe or salvation. The ambiguity is, however, the actual state of the evidence.
The existential risk discourse is the purest expression of the crisis frame. Investment in AI safety research focused primarily on existential risk scenarios has grown rapidly. Investment in understanding the mundane, everyday, already-occurring effects of AI on ordinary work, ordinary education, ordinary attention, and ordinary institutional practice has grown far more slowly. The crisis frame directs resources toward the dramatic scenario and away from the actual one. This is not a failure of intention; it is a structural feature of how crisis narratives allocate attention.
Edgerton's response is not to dismiss crisis concerns but to insist on proportionality. The crisis frame describes the transition from inside the rupture — from the perspective of people closest to the frontier, with the most at stake in the outcome. The use-centered frame describes the same transition from outside the crisis, and from that vantage the view shows not a five-stage dramatic arc but a far more gradual process: slow adoption, uneven deployment, incremental adjustment, the persistence of older practices alongside newer ones, and the accumulation of small changes over decades rather than seasons.
The framework draws on Edgerton's extensive work in British military and industrial history — particularly England and the Aeroplane and Warfare State — where he documented in detail how war shapes both technology and the historical record of technology, and how crisis distortions persist in popular memory long after the original crisis has passed.
Crisis frames demand drama. The genre of the crisis narrative requires turning points, phase transitions, before-and-after moments that the slow, ambiguous patterns of actual technological change rarely supply.
Attention follows drama, not impact. The most-attended technologies are not the most impactful, and the misdirection produces distortions in investment, policy, and education.
Existential risk crowds out everyday harm. The AI safety discourse focused on hypothetical futures has outcompeted attention to documented current effects.
The slow story has been right every time. Across a century of technological transitions, the use-centered analysis has consistently produced more accurate predictions than the crisis frame.
The tension between crisis-driven development and use-centered impact resolves differently depending on which phase of technological deployment we examine. In the research and development phase, the contrarian position dominates (80/20): crisis narratives genuinely do drive the concentration of capital and talent necessary for breakthrough. The Manhattan Project model — whether applied to nuclear weapons, semiconductors, or large language models — requires existential stakes to justify its resource allocation. Edgerton's critique applies more to the historical record than to the historical process itself.
In the deployment and diffusion phase, the weighting reverses toward Edgerton's position (70/30). Once the technology exists, its actual impact does indeed come through mundane channels that the crisis frame renders invisible. The AI coding assistant affects more programmers than any AGI scenario; the content filter shapes more online interactions than any superintelligence would. Here the crisis frame becomes actively misleading, directing regulatory attention toward hypothetical harms while current harms accumulate unaddressed. The disparity between AI safety funding and AI labor impact research demonstrates this precisely.
The synthetic frame that emerges recognizes crisis and use as complementary rather than competing lenses, operating at different timescales with different functions. Crisis frames are engines of possibility — they create the extraordinary conditions necessary for extraordinary development. Use frames are maps of actuality — they track where the technology goes once it exists. The distortion Edgerton identifies is real, but it may be a necessary distortion: the very mechanism by which societies mobilize resources for technological change. The task is not to eliminate the crisis frame but to maintain dual vision: honoring the crisis energy that makes development possible while insisting on use-centered analysis of what actually develops.