An AI winter is a period in which AI research loses public and institutional credibility, funding dries up, and capable researchers leave the field. The phrase was coined at the 1984 American Association for Artificial Intelligence meeting by analogy with "nuclear winter," as a deliberate warning about the expectations-disappointment cycle. Two canonical AI winters are generally recognized: 1974–1980 (following the UK Lighthill Report and ALPAC's pessimism about machine translation) and 1987–1993 (following the collapse of the expert-systems industry and the LISP-machine market). Whether a third winter is coming is a live question in every era of AI enthusiasm, including this one.
There is a parallel reading in which AI winters are less about failed promises than about the material substrate required to sustain each wave. The energy infrastructure, chip-fabrication capacity, and data-center footprint needed for current AI systems represent a capital commitment orders of magnitude beyond what supported symbolic AI or expert systems. When a technology requires Google-scale infrastructure to demonstrate basic competence, the question shifts from "will it work?" to "who can afford to find out?"
This reading suggests the next winter won't arrive through disappointed expectations but through infrastructure bottlenecks. The current generation of models requires training runs costing tens of millions of dollars, inference infrastructure that draws gigawatts of power, and supply chains for specialized chips that take years to build. Unlike the LISP machines that could be quietly abandoned, today's AI infrastructure represents sunk costs that utilities have planned grids around, that Taiwan has built fabs for, and that entire career paths now depend upon. The social cost of walking away from this investment may be higher than the cost of maintaining the fiction that it's working. We may get not a winter but a kind of technical-debt zombie spring: infrastructure too expensive to abandon, not productive enough to justify, sustained by the financial logic of sunk costs rather than by any genuine belief in its future. The researchers who weathered previous winters in unfashionable corners of academia have no equivalent refuge when the infrastructure itself becomes the constraint.
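To make the "tens of millions" figure concrete, a back-of-envelope sketch helps; every number below is an illustrative assumption, not a figure from any disclosed training run:

$$
\text{cost} \;\approx\; \frac{C_{\text{train}}}{r_{\text{eff}} \times 3600\ \text{s/h}} \times p
$$

where $C_{\text{train}}$ is total training compute in FLOPs, $r_{\text{eff}}$ is effective per-accelerator throughput in FLOP/s, and $p$ is the rental price per accelerator-hour. Assuming, hypothetically, $C_{\text{train}} = 10^{25}$ FLOPs, $r_{\text{eff}} = 3 \times 10^{14}$ FLOP/s (a high-end accelerator at partial utilization), and $p = \$2$ per accelerator-hour:

$$
\frac{10^{25}}{3 \times 10^{14} \times 3600} \approx 9.3 \times 10^{6}\ \text{accelerator-hours} \quad\Rightarrow\quad \text{cost} \approx \$1.9 \times 10^{7},
$$

on the order of \$20 million for a single run, consistent with the claim; larger compute budgets scale the figure roughly linearly.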
This matters historically because the same field that now claims imminent general intelligence has made that claim twice before, raised billions of dollars on it, and watched it fail in ways that hurt careers, funding, and public trust. Whether the current era is different is the question both camps are debating with increasing heat. Arguments for this-time-different: capability curves that now exceed human performance on specific benchmarks; real economic traction (annual AI revenue in the tens of billions, enterprise adoption curves shorter than prior technologies'); self-improvement loops not available in earlier eras. Arguments for this-time-similar: specific capabilities that have plateaued (reasoning, long-horizon planning, reliability under distribution shift); a commercial hype cycle that fits the Gartner pattern precisely; and a history of AI researchers confidently predicting human-level intelligence "within a decade" at approximately every decade mark since 1956.
The first AI winter followed directly from the Dartmouth-era promises. The Dartmouth Workshop proposal (1955) claimed that significant progress could be made in a single summer on learning, language, abstract thinking, and self-improvement. Progress was real but incremental; funders who had been promised revolutions demanded them, and when the revolution did not arrive on schedule, the ALPAC machine-translation report (1966) and the Lighthill Report (1973) provided the institutional rationale for funding cuts. The cancellation of DARPA's Speech Understanding Research program (1976) was a further marker.
The second winter was narrower: it was primarily a collapse of the commercial expert-systems industry and the LISP-machine market that supported it (Teknowledge on the software side; Symbolics and LMI on the hardware side), after vendors had over-promised general problem-solving from systems that only worked in carefully constrained domains. Academic AI was less affected than commercial AI, but the downstream effects on research funding and graduate enrollment were significant through the early 1990s.
The post-2012 deep-learning era emerged in part from researchers (Hinton, LeCun, Bengio) who had weathered the second winter by continuing to work on neural networks when neural networks were unfashionable. The irony is that the current era of AI enthusiasm was made possible by researchers whose careers were shaped by the previous winter.
As for provenance: the coinage is credited to Roger Schank and Marvin Minsky at the 1984 AAAI annual meeting (accounts vary on primary authorship). The first winter is conventionally traced to the 1973 Lighthill Report, commissioned by the UK Science Research Council; the second to the collapse of the expert-systems industry and the LISP-machine market in the late 1980s and early 1990s.
Hype cycle. AI winters fit Gartner's canonical hype-cycle pattern: innovation trigger → peak of inflated expectations → trough of disillusionment → slope of enlightenment → plateau of productivity.
Paradigm sidelining, not field collapse. Each winter was partly a consequence of one research paradigm failing to deliver on its specific promises (symbolic AI, expert systems), not of AI-as-such failing.
Talent diaspora. Winters cause capable researchers to leave the field, which compounds the slowdown for a generation. The second winter drove many AI researchers into finance and software engineering; several did not return.
Expectation asymmetry. Funders react more strongly to missed promises than to exceeded ones: over-claiming costs more in future cuts than under-claiming yields in future gains.
Commercial vs. academic decoupling. The second winter hit commercial AI harder than academic AI; the current era has much tighter coupling, so a new winter would be more synchronized across sectors.
The tension between these views resolves differently depending on which question we ask. For "will there be another winter?" the infrastructure reading carries more weight (70/30): the sheer capital commitment makes a 1970s-style abandonment unlikely. But for "what would a winter look like?" the classical framing dominates (80/20): disappointment cycles remain the emotional driver even if infrastructure creates inertia.
The synthetic frame emerges when we ask about research velocity rather than research funding. Both views correctly identify constraints, but at different scales: the classical account focuses on institutional confidence (which can collapse quickly), while the infrastructure reading identifies material lock-in (which creates momentum even through disappointment). The resolution is that modern AI may experience winters of velocity without winters of activity: a kind of "cooling" in which work continues but breakthrough expectations moderate, sustained by infrastructure investments that are too large to abandon.
This suggests the right question isn't "winter or not?" but "what forms can disappointment take when the infrastructure is too heavy to abandon?" The answer may be a new pattern entirely: not the boom-bust cycles of earlier eras but a plateau in which massive resources continue flowing into marginal improvements, justified not by revolutionary promise but by the impossibility of writing off the investment. The researchers who survived previous winters by finding unfashionable corners may find this harder to navigate than outright collapse; there is no refuge from a technology that is too big to fail but too limited to succeed.