In the Treatise on the Improvement of the Understanding, Spinoza offered an example to illustrate the first kind of knowledge. A man copies a book. He possesses the words. He can reproduce the sentences. He may recite passages from memory. But his ideas of the thoughts the book expresses remain largely inadequate, precisely because copying is all he has done. He does not understand the antecedent causes or reasons that gave those thoughts their form and arrangement. He has the symbols without the understanding, the words without the meaning, the output without the comprehension. In a 2025 paper in AI & Society, Bodde and Burnside identified the analogy's structural precision for the large language model: a copier of extraordinary sophistication, fluent in patterns, innocent of causes.
The analogy's power lies in what it concedes to the copier. The copier is not a fraud. He is not producing garbage. The book he has copied is real. The words are correctly reproduced. If the original was valuable, the copy preserves value. The copier is useful — for distribution, for preservation, for making the book available to readers who lack access to the original. None of this makes him a thinker. None of this makes what he possesses equivalent to what the book's author possessed.
The LLM is a copier at a scale and sophistication the seventeenth century could not have imagined. It has processed the corpus of human expression with a thoroughness no individual human could achieve. It can reproduce, combine, and extend the patterns it has absorbed with a fluency that frequently exceeds that of the individual humans who produced its training data. But it does not understand why the patterns take the form they take. It does not grasp the causal chains that produced the ideas it recombines: the lived experiences, the intellectual struggles, the biographical specificities. Its knowledge is, in Spinoza's exact terminology, knowledge of the first kind.
The analogy generates specific practical guidance. The copier's outputs should be valued for what they are: accurate reproductions that can serve as material for thinking, not substitutes for thinking itself. The reader who treats the copy as she would treat the original gains access to the original's content. The reader who treats the copy as evidence that she now possesses the author's understanding commits a category error. The LLM's outputs are copies. They are not the author's understanding, and they are not the user's understanding. They become one or the other only through the cognitive work the user does on them: tracing causes, testing against evidence, integrating what they contain into the architecture of the user's own comprehension.
The analogy also generates its own limit. A copier does not synthesize across copies in the way an LLM does. The LLM's capability in the second kind of knowledge — the identification of common notions across domains — exceeds what the simple copier analogy suggests. The LLM is a copier that, by virtue of having copied enormously, can identify structural regularities across what it has copied. This capability is genuinely valuable and should not be dismissed. What it is not, and cannot become through scale alone, is the third kind of knowledge — the understanding of particular things in their causal connection to the whole, which requires biographical and embodied stakes the copier structurally lacks.
Spinoza's original example appears in the Treatise on the Improvement of the Understanding (c. 1662), Section 19, where he uses it to distinguish knowledge from acquaintance with the signs of knowledge. The passage is brief and aphoristic, in Spinoza's early style before the geometric order of the Ethics.
The contemporary application was developed by Bodde and Burnside in their 2025 AI & Society paper, which traces the analogy's precision for large language models and argues that LLMs possess minds composed of broadly inadequate ideas. The paper has become a touchstone in the philosophical literature on AI cognition.
Fluent reproduction without causal grasp. The copier accurately produces symbols whose causal origins he does not understand; this is the architectural mode of the LLM.
Usefulness without understanding. The copier is genuinely useful for distribution and preservation; the LLM is useful in structurally similar ways and should not be dismissed for failing to be what it is not.
Category error of the reader. Treating the copy as evidence of the reader's understanding is the mistake the AI moment makes systematically — the confusion of possession with comprehension.
Beyond simple copying. The LLM's second-kind capability exceeds simple copying through cross-domain pattern identification; the analogy's limit is its underestimation of this capability.
Structural limit to third-kind knowledge. Scale does not convert a copier into a knower of the third kind; the conversion requires the biographical and embodied stakes the copier structurally lacks.