EVENT

AI Winter

The periodic cycles of collapsed expectations and funding in AI research, most famously 1974–1980 and 1987–1993 — moments when the gap between promised and delivered capability became too painful to sustain.
An AI winter is a period in which AI research loses public and institutional credibility, funding dries up, and capable researchers leave the field. The phrase was coined at the 1984 American Association for Artificial Intelligence meeting by analogy with "nuclear winter," as a deliberate warning about the expectations-disappointment cycle. Two canonical AI winters are generally recognized: 1974–1980 (following the UK Lighthill Report and ALPAC's pessimism about machine translation) and 1987–1993 (following the collapse of the expert-systems industry and the LISP-machine market). Whether a third winter is coming is a live question in every era of AI enthusiasm, including this one.

In The You On AI Encyclopedia

This matters historically because the same field that now claims imminent general intelligence has twice before made that claim, raised billions of dollars, and watched the claims fail in ways that damaged careers, funding, and public trust. Whether the current era is different is the question the two camps are debating with increasing heat. Arguments that this time is different: an exponential capability curve that now exceeds human performance on specific benchmarks; real economic traction (annual AI revenue in the tens of billions, enterprise adoption curves shorter than for prior technologies); self-improvement loops unavailable in earlier eras. Arguments that this time is similar: specific capabilities that have plateaued (reasoning, long-horizon planning, reliability under distribution shift); a commercial hype cycle that fits the Gartner pattern precisely; and a history of AI researchers confidently predicting human-level intelligence "within a decade" at roughly every decade mark since 1956.

The first AI winter followed directly from the Dartmouth-era promises. The Dartmouth Workshop proposal (1955) claimed that significant progress could be made in a single summer on learning, language, abstract thinking, and self-improvement. Progress was real but incremental; funders who had been promised revolutions demanded them, and when the revolution did not arrive on schedule, the ALPAC machine-translation report (1966) and the Lighthill Report (1973) supplied the institutional rationale for funding cuts. DARPA's disappointment with its Speech Understanding Research program, which wound down in 1976, was a further marker.

Dartmouth Workshop 1956

The second winter was narrower: it was primarily a collapse of the commercial expert-systems industry (firms such as Teknowledge) and of the LISP-machine market that supplied its hardware (Symbolics, LMI), after vendors had over-promised general problem-solving from systems that worked only in carefully constrained domains. Academic AI was less affected than commercial AI, but the downstream effects on research funding and graduate enrollment were significant through the early 1990s.

The post-2012 deep-learning era emerged in part from researchers (Hinton, LeCun, Bengio) who had weathered the second winter by continuing to work on neural networks when neural networks were unfashionable. The irony is that the current era of AI enthusiasm was made possible by researchers whose careers were shaped by the previous winter.

Origin

The term was coined at the 1984 AAAI annual meeting by Roger Schank and Marvin Minsky (accounts vary on primary authorship), explicitly as a warning about the field's expectations-disappointment cycle. The first winter is traced conventionally to the 1973 Lighthill Report commissioned by the UK Science Research Council; the second to the collapse of the expert-systems industry and the LISP-machine market in the late 1980s and early 1990s.

Key Ideas

Hype cycle. AI winters fit Gartner's canonical hype-cycle pattern: inflated expectations → disappointment trough → stabilization → productive deployment.

Paradigm sidelining, not field collapse. Each winter was partly a consequence of one research paradigm failing to deliver on its specific promises (symbolic AI, expert systems), not of AI-as-such failing.

Talent diaspora. Winters cause capable researchers to leave the field, which compounds the slowdown for a generation. The second winter drove many AI researchers into finance and software engineering; several did not return.

Expectation asymmetry. Funders react more strongly to missed promises than to exceeded ones. Over-claiming produces sharper future cuts than under-claiming produces future gains.

Commercial vs. academic decoupling. The second winter hit commercial AI harder than academic AI; the current era has much tighter coupling, so a new winter would be more synchronized across sectors.

Further Reading

  1. Lighthill, James. "Artificial Intelligence: A General Survey" (1973).
  2. Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence (1993).
  3. Nilsson, Nils J. The Quest for Artificial Intelligence (2009).
  4. McCorduck, Pamela. Machines Who Think (1979, rev. 2004).