The Progress frame is one of two master frames Lakoff's analytical method identifies as dominating contemporary AI governance discourse. Within this frame, AI is the latest chapter in a long history of technological advancement — writing, printing, electricity, computing, and now artificial intelligence — each expanding human capability, each met with resistance, each ultimately producing more prosperity and freedom than the world it replaced. Resistance to AI within this frame is structurally identical to resistance to every previous technology: understandable, historically recurrent, and ultimately wrong. The frame entails that the appropriate posture is acceleration — rapid deployment, minimal regulation, market-driven adoption — because history demonstrates that the gains from technological capability eventually outweigh transition costs. The frame generates specific policy positions with the reliability of a machine.
Within the Progress frame, regulation should be light because heavy regulation impedes innovation and delays the gains that history promises. Education should focus on adoption because the primary risk is falling behind, not moving too fast. Displacement is temporary because new technologies create more jobs than they destroy — eventually. Transition costs are real but manageable, and they are best managed by the market rather than by institutions that move too slowly to keep pace with the technology. Each of these positions follows from the frame's core entailment: that technological history bends toward expansion of capability and broadly distributed benefit, so accelerating the process is prudent rather than reckless.
The frame draws cognitive authority from a specific reading of historical evidence. The Luddite movement of 1811–1816 — skilled textile workers who destroyed the stocking frames and power looms that had devalued their labor — is invoked as the paradigmatic cautionary tale: their resistance was futile, their predictions of ruin proved wrong in the long run, and the technology they opposed ultimately produced broadly shared prosperity. Within the frame, every subsequent resistance movement becomes a reenactment of the Luddite error. The argument pattern is consistent: previous skeptics were proven wrong; current skeptics are structurally identical to previous skeptics; therefore current skeptics will be proven wrong.
What the frame systematically obscures is the distributional question: the transition costs were borne by specific populations at specific times, and the eventual broadly shared prosperity took generations to materialize. The Luddite workers were not wrong that their specific livelihoods were being destroyed. They were wrong, in the frame's reading, only if one aggregates across enough time and population to dissolve their particular suffering into the historical average. The frame's aggregation is not neutral. It foregrounds eventual capability expansion and backgrounds immediate distributional damage. Within the frame, the damage is visible only as regrettable but unavoidable friction. The frame cannot conceptually prioritize the people absorbing the damage because its source structure — history as upward-sloping line — places the damage in the line's roughness rather than in the line's destination.
The Progress frame dominates the technology industry, much of the policy establishment, and the financial press. It is so thoroughly naturalized in these contexts that challenges to it are received as incomprehensible: anyone questioning acceleration must be either ignorant of history or arguing in bad faith. This rhetorical asymmetry is itself a product of the frame. Within it, "Luddite" functions as an argument-ending epithet rather than a historical reference, because the frame has already rendered Luddite-analogous positions incoherent. The dominance is substantial but not total. The Protection frame competes with it, and the emerging Cultivation frame offers a third structure that the frame war has not yet accommodated.
The Progress frame as applied to AI emerged from the broader narrative of technological progress that dominated late-twentieth-century Anglo-American political and economic thought. Its specific application to AI crystallized in the 2010s and intensified after the capability threshold crossings of 2022–2025, when the technology industry mobilized historical analogies to argue against regulatory constraint.
AI as chapter in progress narrative. The frame positions AI as the latest in a sequence of capability-expanding technologies that have produced broadly shared prosperity.
Acceleration as prudent posture. History, read through the frame, demonstrates that gains eventually outweigh transition costs, so speed is rational.
Light regulation as default. Heavy regulation impedes innovation and delays the gains the frame promises; the market manages transitions better than institutions.
Luddite analogy as argument-ender. Opposition is rendered incoherent by identification with historically failed resistance movements.
Distributional blind spot. The frame backgrounds the specific populations absorbing transition costs because its structure prioritizes aggregate outcomes over distributional detail.