Faire l'idiot — to play the idiot — is a concept Han borrows from Gilles Deleuze to name the specific cognitive capacity that produces genuine novelty. The idiot, in Deleuze's sense, is not stupid. The idiot is the figure who refuses the well-trodden path, who abandons the obvious continuation, who is willing to say something that sounds wrong because the right answer is already known and can produce nothing new. Philosophy, in this account, requires idiocy. Science requires it. Art requires it. The capacity to think something that no existing framework predicts, that departs from every pattern, that could not have been extrapolated from the training data — this capacity demands a particular kind of productive stupidity, one willing to risk being wrong for the sake of possibly being original. Han's use of the concept in reference to AI is devastating: artificial intelligence is incapable of thinking because it is incapable of faire l'idiot. It is too intelligent to be an idiot. The formulation compresses a philosophical argument about what it means to think — and about what AI, by structural design, cannot do.
Deleuze developed the concept of the idiot across multiple texts, most notably in Difference and Repetition (1968) and What Is Philosophy? (with Félix Guattari, 1991). The idiot is one of Deleuze's conceptual personae — figures who embody a specific mode of thinking. The idiot thinks badly, provisionally, without the safety of established categories. The idiot is willing to appear foolish in pursuit of a thought that has not yet been thought.
Han's application to AI sharpens Deleuze's concept to a technical point. Large language models are trained on the aggregate of prior human expression. Their optimization target is the prediction of what comes next given what came before. This means their fundamental orientation is toward the likely, the probable, the well-formed — toward what has already been said, refined, canonized. The system is structurally incapable of saying what has not been said, because what has not been said has no data from which the model could predict it.
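The orientation toward the likely can be made concrete with a minimal sketch. The token names and probabilities below are invented for illustration, standing in for a model's output distribution; under greedy decoding, the system by construction returns the most probable, most canonized continuation.

```python
# Hypothetical toy distribution over next tokens, standing in for a
# language model's softmax output after a prompt like "The sky is".
# The tokens and probabilities are invented for illustration.
next_token_probs = {
    "blue": 0.72,       # the canonical, most-repeated continuation
    "overcast": 0.15,
    "falling": 0.08,
    "a wound": 0.05,    # the improbable, "idiotic" continuation
}

def greedy_continuation(probs):
    """Pick the single most probable token -- the decoding strategy
    that makes the model's orientation toward the likely explicit."""
    return max(probs, key=probs.get)

print(greedy_continuation(next_token_probs))  # -> blue
```

The point of the sketch is structural, not empirical: whatever has accumulated the most probability mass in the training data is what the system is built to return.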
The capacity to faire l'idiot is, in this framework, the capacity to break with the statistical center of the training distribution. It is the capacity to produce the utterance that would have been penalized during training because it did not fit the pattern. The human thinker can do this — can say the apparently wrong thing that turns out to contain the seed of something genuinely right. The machine cannot, because the training objective has optimized precisely against this capacity.
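The sense in which the improbable utterance "would have been penalized during training" can be shown with the standard per-token cross-entropy loss. The probabilities below are invented; the sketch only illustrates that the penalty grows as the target becomes less probable, so gradient descent pushes the model toward the pattern and away from the strange.

```python
import math

def token_loss(prob_assigned):
    """Cross-entropy loss for a single target token: -log p.
    The less probable the model finds the target, the larger the
    penalty -- training nudges the model toward the probable and
    away from the deviant continuation."""
    return -math.log(prob_assigned)

# Invented probabilities for two candidate continuations.
loss_probable = token_loss(0.72)    # small loss: pattern-completing phrase
loss_improbable = token_loss(0.05)  # large loss: the "wrong turn"
print(loss_probable < loss_improbable)  # -> True
```

In Han's terms, this is the technical content of the claim: the objective function is, literally, a penalty on idiocy.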
The consequence for human-AI collaboration is uncomfortable. If the collaboration consistently produces the smooth continuation — the probable, the plausible, the pattern-completing — then the human collaborator's capacity for idiocy may atrophy through disuse. The muscle that would have produced the genuinely original thought is not exercised, because the AI's smooth continuation is always ready to supply the plausible extension before the human struggle toward the strange has had time to begin.
Han invokes faire l'idiot across multiple texts, most pointedly in Non-Things and in interviews following the public arrival of large language models in 2022–2023. The concept allows Han to articulate a precise philosophical claim about AI's structural limits without requiring him to engage the empirical literature on model capabilities — the limit he identifies is not a matter of capability that will improve with scale but a matter of structure that cannot be addressed by scale.
Deleuze's own sources include Descartes — whom Deleuze and Guattari cast in What Is Philosophy? as the "old idiot," the doubter of everything — Dostoevsky's Prince Myshkin (The Idiot, the model for the "new idiot"), Nietzsche (the figure of Zarathustra as productive fool), and the broader tradition of thinking that insists on the necessity of breaking with the established path for the sake of thought itself.
The idiot is not stupid. Productive idiocy is the deliberate abandonment of the obvious continuation for the sake of possibly finding something new.
Training optimizes against idiocy. Language models are trained to produce the probable; idiocy produces the improbable that might be right.
Thought requires the wrong turn. Without the willingness to say what sounds wrong, there is no capacity to say what is genuinely new.
AI is too intelligent. The system's fluency is the signature of its incapacity for idiocy, not its overcoming.
The collaboration risk. Sustained human-AI collaboration may erode the human's capacity for idiocy through the constant availability of smooth continuation.
Some AI researchers argue that techniques like high-temperature sampling, novelty-rewarding reinforcement learning, or deliberate noise injection can produce something functionally equivalent to faire l'idiot. Han's framework would respond that noise is not idiocy. Idiocy is not random deviation from the pattern; it is directed deviation — the specific wrong turn that, unlike random noise, happens to contain the seed of insight. The difference between productive stupidity and mere randomness is the presence of a being with stakes in the outcome, which is precisely what AI lacks.
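The distinction between noise and idiocy can be made precise with a sketch of temperature sampling. The tokens and probabilities are invented; the helper applies temperature as p^(1/T) over probabilities, which is equivalent to the usual division of logits by T before the softmax. The point: raising the temperature spreads probability mass toward all unlikely tokens equally, so the sampler cannot distinguish the deviation that carries insight from the deviation that is merely random.

```python
def apply_temperature(probs, temperature):
    """Rescale a distribution by temperature (p ** (1/T), renormalized).
    T > 1 flattens the distribution, boosting ALL unlikely tokens by
    the same factor -- undirected noise, not a directed wrong turn."""
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    z = sum(scaled.values())
    return {tok: v / z for tok, v in scaled.items()}

# Invented toy distribution: one insightful deviation among dull ones.
probs = {"blue": 0.85, "grey": 0.05, "wet": 0.05, "a wound": 0.05}

hot = apply_temperature(probs, temperature=5.0)
# High temperature raises "a wound", "grey", and "wet" identically:
# the sampler has no way to tell the seed of insight from the random.
print(abs(hot["a wound"] - hot["grey"]) < 1e-9)  # -> True
```

This is the formal shape of Han's rejoinder: temperature selects deviation per se, not the specific deviation; the directedness would have to come from a stake in the outcome that the sampler does not have.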