Artificial stupidity is Stiegler's diagnostic inversion of the AI discourse. The contemporary conversation fixates on whether machines will become intelligent, can be trusted with reasoning, might eventually achieve general intelligence. Stiegler asked the harder question: whether humans will remain intelligent in a cognitive environment saturated by systems that perform cognitive operations on their behalf. The worry is not that AI will fail to think. The worry is that humans will succeed in not thinking, because the machines have made thinking feel optional, and the cultural scripts for why one should think against the grain of the available tool are eroding faster than they are being rebuilt.
The phrase appeared in the title of Stiegler's Shanghai lecture — 'Artificial Stupidity and Artificial Intelligence in the Anthropocene' — delivered in 2018, four years before ChatGPT. The lecture opened with the provocation that 'all noetic intelligence is artificial,' meaning that human thinking has always depended on externalized technical supports and there has never been a purely natural intelligence unmediated by technics.
If all noetic intelligence is artificial, then the arrival of what the discourse calls 'artificial intelligence' is not the introduction of something foreign but an intensification of the condition that has defined human existence since the species' origin. The real question is not whether machine intelligence is authentic but whether the pharmacological relationship between humans and their technical supports is being managed with adequate care.
Artificial stupidity names the specific failure mode in which it is not. It is the condition of a cognitive environment organized around maximum efficiency of output, in which the long circuits through which judgment is built atrophy because they are no longer economically necessary. The human becomes a facilitator of machine output rather than a practitioner of thought. The machine does not need to become intelligent to produce this effect; it needs only to become fluent enough that verifying its output feels like unnecessary friction.
The diagnosis converges with Segal's observations about smoothness and fluent fabrication. Claude produces an elegant passage misattributing a concept to Deleuze. The prose is polished. The structure is clean. The user almost accepts it. The near-acceptance is artificial stupidity in operation — not because the user is stupid but because the environment has made the discipline of skepticism feel like inefficiency.
The Shanghai lecture was Stiegler's most direct public formulation, but the concept had been developing for years through his analyses of algorithmic governmentality and the automatic society.
The term has been taken up by Stiegler's heirs — Alombert, Nony — to analyze the specific cognitive pathologies of generative AI, including hallucination-compliance, sycophancy, and the erosion of evaluative capacity.
The inversion. The question is not whether machines will think but whether humans will — the stupidity at issue is human, produced by machine fluency.
All noetic intelligence is artificial. The foundational claim that human thinking has always been technically mediated, making AI an intensification rather than an invasion.
The circuit matters more than the output. A machine that produces correct output may still produce artificial stupidity if the user no longer exercises the judgment that would have verified it.
Structural, not individual. Artificial stupidity is produced by the cognitive environment, not by the failings of particular users.
Technology optimists argue that AI augments rather than replaces thinking, and that artificial stupidity pathologizes what is in fact ordinary cognitive extension. Stiegler's defenders counter that augmentation requires specific conditions (institutional, pedagogical, pharmacological) that are systematically absent from current deployment patterns.