Deleuze developed the concept of the idiot across multiple texts, most notably in Difference and Repetition (1968) and What Is Philosophy? (with Félix Guattari, 1991). The idiot is one of Deleuze's conceptual personae — figures who embody a specific mode of thinking. The idiot thinks badly, provisionally, without the safety of established categories. The idiot is willing to appear foolish in pursuit of a thought that has not yet been thought.
Han's application to AI sharpens Deleuze's concept to a technical point. Large language models are trained on the aggregate of prior human expression. Their optimization target is the prediction of what comes next given what came before. This means their fundamental orientation is toward the likely, the probable, the well-formed — toward what has already been said, refined, canonized. The system is structurally incapable of saying what has not been said, because what has not been said has no data from which the model could predict it.
The capacity to faire l'idiot is, in this framework, the capacity to break with the statistical center of the training distribution. It is the capacity to produce the utterance that would have been penalized during training because it did not fit the pattern. The human thinker can do this — can say the obviously wrong thing that turns out to contain the seed of something obviously right. The machine cannot, because the training objective has optimized precisely against this capacity.
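The structural point can be made concrete with a toy sketch. The tokens and probabilities below are invented for illustration, not drawn from any actual model; but the arithmetic is the standard one. Under a maximum-likelihood objective, the loss for emitting a continuation is the negative log of the probability the model assigns it, so the improbable, "idiotic" utterance is exactly what training penalizes most, and greedy decoding selects the statistical center:

```python
import math

# Hypothetical next-token distribution a model might assign
# after the prompt "The sky is" (numbers invented for illustration).
p = {"blue": 0.80, "clear": 0.15, "grey": 0.045, "a wound": 0.005}

# Cross-entropy loss for each continuation: -log p(token).
# The improbable continuation ("a wound") carries roughly 24x the
# loss of the canonical one, so maximum-likelihood training pushes
# probability mass toward what has already been said.
loss = {tok: -math.log(prob) for tok, prob in p.items()}

# Greedy decoding then emits the argmax -- the smooth continuation.
choice = max(p, key=p.get)
```

The asymmetry is the point: nothing in the objective rewards the low-probability utterance that "might be right," because rightness of that kind has no representation in the training signal; only frequency in the corpus does.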
The consequence for human-AI collaboration is uncomfortable. If the collaboration consistently produces the smooth continuation — the probable, the plausible, the pattern-completing — then the human collaborator's capacity for idiocy may atrophy through disuse. The muscle that would have produced the genuinely original thought is not exercised, because the AI's smooth continuation is always ready to supply the plausible extension before the human struggle toward the strange has had time to begin.
Han invokes faire l'idiot across multiple texts, most pointedly in Non-Things and in interviews following the public arrival of large language models in 2022–2023. The concept allows Han to articulate a precise philosophical claim about AI's structural limits without requiring him to engage the empirical literature on model capabilities — the limit he identifies is not a matter of capability that will improve with scale but a matter of structure that cannot be addressed by scale.
Deleuze's own sources include Dostoevsky (The Idiot), Nietzsche (the figure of Zarathustra as productive fool), and the broader tradition of thinking that insists on the necessity of breaking with the established path for the sake of thought itself.
The idiot is not stupid. Productive idiocy is the deliberate abandonment of the obvious continuation for the sake of possibly finding something new.
Training optimizes against idiocy. Language models are trained to produce the probable; idiocy produces the improbable that might be right.
Thought requires the wrong turn. Without the willingness to say what sounds wrong, there is no capacity to say what is genuinely new.
AI is too intelligent. The system's fluency is the signature of its incapacity for idiocy, not its overcoming.
Collaboration carries a risk. Sustained human-AI collaboration may erode the human's capacity for idiocy through the constant availability of smooth continuation.