Artificial general intelligence (AGI) is a hypothesized AI system that can perform, at human level or better, the full range of cognitive tasks a human can. There is no agreed operational definition; the contested question is whether current large language models already qualify, are on a direct path to qualifying, or are something fundamentally different. Every major AI lab has a public position, and the positions differ enough to amount to substantive disagreements about what AI is and where it is going.
AGI is the polarizing word at the center of AI discourse. Frontier-lab leaders (OpenAI's Sam Altman, Anthropic's Dario Amodei, DeepMind's Demis Hassabis) publicly discuss it as imminent or inevitable; many academic researchers (Gary Marcus, Rodney Brooks, and Yann LeCun in some of his public statements) dismiss the term as marketing. The honest position for a general reader is that the word is doing different work for different people, and understanding what the word is doing in a particular conversation is more useful than committing to a definition.
Definitions vary widely. Ray Kurzweil's The Singularity Is Near (2005) implies AGI by 2029 on the basis of computational-capability extrapolations. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work" — a deliberate narrowing that makes the definition operationally testable. DeepMind's 2023 "Levels of AGI" paper proposes a six-level taxonomy (no AI, emerging, competent, expert, virtuoso, superhuman) along performance and generality axes. Philosophers sometimes insist on "passes any cognitive test a human could" — a stringent definition that may never be satisfiable in full.
Isaac Asimov's positronic robots are the most familiar fictional AGI: systems with human-level performance across the full cognitive range, bound by the Three Laws. The Asimovian image is so influential that it can distort contemporary discussion: real AI systems do not look like Asimovian robots (no positronic brain, no explicit rules, no unified agent), and arguments about whether "AGI has arrived" often turn on how closely the real thing needs to resemble the fictional image.
The timeline debate is the most contested and the most important. A 2022 survey of AI researchers (Grace et al.) produced a median estimate of ~2059 for high-level machine intelligence. A 2023 update, post-GPT-4, cut the median to ~2047. Frontier-lab leaders in 2024–2025 are publicly predicting 2027–2030. The gap between academic median and frontier-lab prediction is itself a datum worth thinking about — whether it reflects information asymmetry, social dynamics within the field, or genuine disagreement about the evidence.
The concept is at least as old as the 1955 Dartmouth proposal (see Dartmouth Workshop 1956), which described an ambition to produce "human-level" intelligence without using the later term. The term "artificial general intelligence" was coined by Mark Gubrud in 1997 and popularized by Shane Legg and Ben Goertzel in the early 2000s. Legg's PhD thesis Machine Super Intelligence (2008) established the term's academic use.
No agreed benchmark. The most-cited definitions (human-level, transformatively capable, passes any cognitive test) are not operational in a way that yields a clean date when AGI "arrived."
Continuum vs threshold. AGI may not be a moment but a continuum along which systems advance, with different observers placing the threshold at different points — and by some placements we have already partly crossed it.
Economic vs cognitive definitions. "Outperforms humans at most economically valuable work" (OpenAI's charter definition) vs "can pass any cognitive test a human could" (philosophical definition) — the two yield radically different timelines and policy implications.
Generality vs performance. DeepMind's 2023 paper distinguishes how broad a system's capabilities are (generality) from how good it is at each one (performance). A system could be strong on performance but narrow, or broad but shallow; AGI requires both.
The Asimovian image. Fictional AGI (unified agent, explicit reasoning, bounded by rules) is different enough from real capable AI (distributed, trained, opaque) that claims like "real AI doesn't look like Asimov's robots" can mean different things to different speakers.
AGI vs superintelligence. AGI is usually defined at human level; superintelligence is substantially beyond. See Superintelligence.
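The performance/generality distinction above lends itself to a small illustration. The sketch below is not code from DeepMind's paper; the percentile thresholds are one plausible reading of its level descriptions (competent ≈ 50th percentile of skilled adults, expert ≈ 90th, virtuoso ≈ 99th), and the labels follow the paper's six-level naming.

```python
# Illustrative sketch of DeepMind's 2023 "Levels of AGI" taxonomy.
# Two axes: performance (what percentile of skilled humans the system
# matches on a task) and generality (narrow vs. general). The numeric
# thresholds are assumptions for illustration, not official cutoffs.

LEVELS = [
    (0,   "Level 0: No AI"),
    (1,   "Level 1: Emerging"),    # about as good as an unskilled human
    (50,  "Level 2: Competent"),   # >= 50th percentile of skilled adults
    (90,  "Level 3: Expert"),      # >= 90th percentile
    (99,  "Level 4: Virtuoso"),    # >= 99th percentile
    (100, "Level 5: Superhuman"),  # outperforms all humans
]

def classify(percentile: float, general: bool) -> str:
    """Map a skilled-human percentile and a generality flag to a level label."""
    label = LEVELS[0][1]
    for threshold, name in LEVELS:
        if percentile >= threshold:
            label = name
    scope = "General" if general else "Narrow"
    return f"{scope} {label}"

print(classify(55, general=False))   # a strong but narrow system
print(classify(30, general=True))    # broad but shallow capability
```

The point of the two axes is visible in the two calls: a system can score as "Narrow Level 2: Competent" or "General Level 1: Emerging", and under the taxonomy AGI requires advancing on both axes at once.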