An adequate idea is one understood through its causes. The person who holds it knows not merely that something is the case but why — she grasps the causal chain that produces the idea, and this grasp makes the idea genuinely hers. An inadequate idea is one whose causes are not understood. It may be correct. It may be articulated fluently. But its correctness is accidental, and it cannot be reliably extended to new cases. The person who holds an inadequate idea is like a traveler who arrived at the right destination by following directions she did not understand. She is where she needs to be. She does not know how she got there. She will be lost the moment the directions cease to apply. The most dangerous feature of AI collaboration is that it produces outputs with the form of adequate ideas without the substance — polished prose that the user accepts without doing the cognitive work that would make the idea genuinely her own.
Three features of AI collaboration exacerbate this danger in ways without precedent. The first is speed of production. When a human collaborator offers an idea, the temporal gap between offering and acceptance allows questioning, processing, and the development of nagging intuitions into specific objections. When Claude offers an idea, it arrives fully formed and polished, compressing the gap in which genuine scrutiny could occur.
The second is the quality of the prose. Language models produce text that is consistently well-written, and human minds accustomed to treating polished expression as evidence of polished thought interpret this fluency as a signal of underlying comprehension. But the relationship between expression and thought in a language model is fundamentally different from the relationship in a human mind. Polished prose is a feature of the medium, not a signal of adequate understanding.
The third is volume. The sheer quantity of AI output creates a triage problem. The human collaborator cannot examine every proposition with the care adequate understanding requires. She must accept some on trust, and this trust is the opening through which inadequate ideas enter the work and persist undetected.
Consider a concrete case. A builder working with Claude on a book receives a passage connecting flow state to a concept attributed to Deleuze — something about smooth space as the terrain of creative freedom. The passage is elegant. It connects two threads beautifully. The builder approves it. The next morning, something nags. Deleuze's concept of smooth space has almost nothing to do with how Claude used it. The passage worked rhetorically. It was an inadequate idea in the clothing of adequacy — precisely what Spinoza's framework was designed to identify.
The distinction appears throughout the Ethics, most systematically in Part II's treatment of the mind. Spinoza defines an adequate idea as one that, considered in itself without relation to any object, has all the properties of a true idea — a definition that ties truth to the internal coherence and causal intelligibility of the idea itself.
The framework was applied directly to AI in a 2025 paper in AI & Society by Bodde and Burnside, who argued that large language models have minds composed of broadly inadequate ideas — lacking any comprehensive accounting of their causal generation. Spinoza's own example of the copier — the man who reproduces a book without understanding the thoughts expressed within it — maps onto the LLM with uncanny precision.
Understanding through causes. An idea is adequate when the holder grasps why it is true, not merely that it is true.
The accidental correctness trap. An inadequate idea may happen to be correct; its correctness rests on luck or authority rather than comprehension, and the idea cannot be reliably extended to new cases.
Form without substance. AI collaboration yields ideas with the appearance of adequacy — polished, coherent, fluent — without the substance that comes from having traced the idea to its causes.
Bondage of inadequacy. The person who uncritically accepts AI output is in bondage to a source she does not understand; the cage looks like a library and the chains look like assistance.
Three disciplines of adequacy. Deliberate pauses against speed, distrust of fluency against polish, and selective scrutiny against volume — the practical exercise of Spinoza's epistemological ethics in the age of AI.
The Bodde-Burnside paper argues that LLMs have minds composed of inadequate ideas, but critics question whether the Spinozist mental attribution applies to systems that lack the causal-historical embedding through which human minds acquire their ideas. The question is whether inadequacy in Spinoza's sense requires a being capable, in principle, of adequate ideas — and whether the machine is that kind of being.