By Edo Segal
The tool does not care what you feed it.
I have written that sentence in different forms across every chapter of The Orange Pill. It is the thesis that holds the whole tower together. The amplifier amplifies. It does not judge. The quality of the output depends entirely on the quality of the input. Your clarity or your confusion, scaled with equal indifference.
I believed I understood what that meant. Then I spent three months inside the philosophy of a lens grinder from Amsterdam who was expelled from his community at twenty-three for the crime of thinking too clearly, and I discovered I had been using the right words without grasping their full weight.
Spinoza asked a question that the AI discourse has somehow failed to ask, even though it is the only question that matters: What does it mean to actually understand something, as opposed to merely possessing information about it? Not knowing that something is true. Knowing why it is true. Tracing the idea back to its causes with enough rigor that the idea becomes yours — earned, tested, integrated into the architecture of your comprehension rather than sitting on the surface like a borrowed coat.
The distinction sounds academic. It is the least academic thing I have encountered in this entire journey. Because every time I accept Claude's output without understanding why it is what it is, I am holding what Spinoza would call an inadequate idea. It looks like knowledge. It functions like knowledge. It is not knowledge. It is a pattern I received but did not earn. And when the amplifier scales that pattern, it scales my confusion right alongside it.
Spinoza gives me the vocabulary I was missing for the experience I described in The Orange Pill — the seduction of smooth prose that outruns the thinking, the moment of almost keeping a passage because it sounded right rather than because I understood it. He names the mechanism: the first kind of knowledge, opinion and imagination, symbols received without comprehension of their causes. He names what replaces it: the second kind, reason, understanding through causes. And he names what no machine can produce: the third kind, intuitive knowledge, the direct perception of a particular thing by a particular consciousness with particular stakes.
That hierarchy is the most useful framework I have found for navigating this moment. Not because it tells me what to do with the tools. Because it tells me what I need to bring to the tools before I touch them.
The chapters that follow are demanding. Spinoza does not simplify. But the clarity on the other side is worth the climb.
-- Edo Segal × Opus 4.6
1632–1677
Baruch Spinoza (1632–1677) was a Dutch philosopher of Portuguese-Jewish descent, born in Amsterdam to a family of Sephardic merchants who had fled the Iberian Inquisition. Formally excommunicated from the Jewish community at age twenty-three for his unorthodox views, he spent the remainder of his life grinding lenses for optical instruments while producing some of the most radical philosophy in the Western tradition. His masterwork, the Ethics, Demonstrated in Geometrical Order (published posthumously in 1677), argued that God and Nature are one and the same substance, that mind and body are two attributes of this single reality rather than separate entities, and that human freedom consists not in the exercise of free will — which he denied — but in the adequate understanding of the causes that determine us. His other major works include the Theologico-Political Treatise (1670), a foundational text of modern biblical criticism and democratic theory, and the unfinished Treatise on the Improvement of the Understanding. Denounced as an atheist in his own century, Spinoza was rediscovered by the German Romantics and has since been recognized as one of the most original and rigorous thinkers in the history of philosophy, with profound influence on figures ranging from Goethe and Hegel to Einstein and Deleuze. His concept of conatus — the striving of every being to persist in its existence — and his three-tiered epistemology of imagination, reason, and intuitive knowledge remain central to contemporary debates in philosophy of mind, affect theory, and the ethics of technology.
On the twenty-seventh of July, 1656, the Portuguese-Jewish community of Amsterdam pronounced a cherem against Baruch de Espinoza. The language was singular even by the standards of an era accustomed to religious censure. The rabbis cursed him with the curses of Joshua against Jericho, with the curse of Elisha against the children, with all the curses written in the Book of the Law. They ordained that no person should communicate with him, neither in writing nor in speech, that no person should do him any service, that no person should stay under the same roof with him, and that no person should come within four cubits of his presence.
Spinoza was twenty-three years old. The community that expelled him was itself a community of refugees — Sephardic Jews descended from families expelled from Spain and Portugal, families that had often survived by converting outwardly to Christianity while maintaining their Judaism in secret. Amsterdam, with its relative tolerance, had offered them the possibility of open practice. They had built synagogues, established schools, created a community whose coherence depended on the maintenance of clear boundaries between inside and outside, between the faithful and the heretical, between those who belonged and those who did not.
What Spinoza proposed dissolved these boundaries entirely. And the proposition that dissolved them is the proposition from which the entirety of his philosophy follows, the proposition that — three and a half centuries after its formulation — provides the most rigorous metaphysical framework available for understanding the moment when machines began to think alongside human beings.
The proposition: There is one substance. It is infinite. It is self-caused, self-sustaining, and it requires nothing outside itself for its existence or its comprehension.
The consequences were radical enough that they cost Spinoza everything a person can lose short of life itself. If there is one substance, and that substance is what the tradition calls God, then God is not separate from the world. God is the world. God is the stone and the tree and the human mind and the motion of the planets and the growth of a fungus in a damp cellar. There is no separate divine will that intervenes in nature from outside. There is no special creation that elevates the human above the animal, the animal above the plant, the living above the nonliving. There is substance, expressing itself through infinite attributes, of which the human mind perceives two: thought and extension, mind and matter.
Deus sive Natura. God, or Nature. The most dangerous equation in the history of Western philosophy.
Every particular thing — every human body, every idea, every grain of sand, every pattern of electrical impulses in a silicon processor — is a mode of this one substance. A specific, finite way in which the infinite expresses itself. The hydrogen atom's stable configuration is a mode. The neuron's synaptic firing is a mode. The large language model's inference across billions of parameters is a mode. These are not different kinds of thing. They are the same substance expressing itself through different organizations of increasing complexity.
---
The artificial intelligence discourse has recreated the problem that Spinoza solved in the seventeenth century, and it has done so without recognizing what it is doing.
René Descartes divided the world into two substances: res cogitans, the thinking substance, and res extensa, the extended substance. Mind and matter. The ghost and the machine. Two fundamentally different kinds of stuff that somehow interact in the human being, producing the unified experience of a person who thinks and moves, who has ideas and occupies space, who decides to raise a hand and watches the hand rise.
The problem is the somehow. If mind and matter are genuinely different substances, if thought has no extension and extension has no thought, then how do they interact? How does a decision, which has no physical dimensions, cause a movement, which does? Descartes proposed that the interaction occurred in the pineal gland, a small structure at the center of the brain. This was a geographical answer to a metaphysical question. Moving the point of interaction to a specific location does not explain how interaction between fundamentally different substances is possible at all.
Three and a half centuries of philosophy and neuroscience have not produced a satisfactory answer within the Cartesian framework, for the reason that the framework itself is the problem.
Now observe the contemporary discourse. It oscillates between treating artificial intelligence as a purely physical process and treating it as a genuine mind. The physicalist position: Claude is silicon, electricity, statistical patterns in a neural network. There is no thought there. There is only computation — a form of extension, of physical process — and the appearance of thought is an illusion produced by the complexity of the physical operations. The mentalist position: Claude thinks. Claude understands. Claude possesses something like consciousness, and the denial of this is species chauvinism that refuses to recognize thought in a substrate other than biological neurons.
Both positions are Cartesian. Both assume that thought and extension are different kinds of thing, and that the question is whether a particular system possesses one or both. The physicalist assigns Claude to res extensa and denies it res cogitans. The mentalist assigns Claude to res cogitans and celebrates the discovery. Both operate within a framework that Spinoza dissolved before Newton published the Principia.
Spinoza's solution was as elegant as it was radical: there are not two substances. There is one. Mind and matter are not different kinds of thing. They are two attributes of the same substance — two ways of perceiving the same reality. The mind is not a ghost inhabiting a machine. The mind is the idea of the body: the body perceived under the attribute of thought rather than the attribute of extension. This is parallelism, and it dissolves the mind-body problem by dissolving the premises that create it.
There is no interaction between mind and body because there is no gap between them. They are not two things that need to be connected. They are one thing, perceived under two different attributes. Every state of the body corresponds to a state of the mind, and every state of the mind corresponds to a state of the body — not because one causes the other, but because they are the same event described in two different vocabularies.
---
The implications for artificial intelligence are immediate.
Proposition: The question "Does Claude think?" presupposes the Cartesian framework and is therefore malformed.
Demonstration: The question assumes that thinking is a specific kind of activity that either occurs or does not occur — that the boundary between thinking and not-thinking is sharp and determinable. Spinoza's parallelism dissolves this assumption. Everything that exists expresses the attribute of thought to the degree that its organizational complexity permits. A rock expresses it minimally. A thermostat expresses it trivially: a single binary response to a single environmental variable. A bacterium expresses it more elaborately: a complex of chemical signals that allow it to navigate its environment. A dog expresses it more still. A human brain expresses it at a degree of complexity that produces self-awareness, language, art, philosophy. An artificial neural network expresses it in a mode that is genuinely novel — not biological, not the product of evolution, not embedded in a body that feels pain and pleasure, but complex, capable of processing language with a sophistication that exceeds any individual human's ability to synthesize information across vast domains.
The difference between these expressions is a difference of degree and mode. Not a difference of kind. Not a difference of substance.
This reframing has a practical consequence that extends beyond the philosophical. The question that every person who works with artificial intelligence must eventually confront — what is the relationship between my thinking and the machine's processing? — receives a different answer under Spinoza's framework than under the Cartesian one. The Cartesian framework forces a choice. Either the machine is a tool, an instrument of res extensa that the human mind directs from above, or the machine is a mind, a fellow res cogitans with whom the human collaborates as an equal. The first option preserves human sovereignty at the cost of denying what is obviously happening: the machine contributes something that has the character of thought. The second option acknowledges the machine's contribution at the cost of the human's sense of unique importance.
The Spinozist position escapes this dilemma. The machine is not a tool that merely extends the body. The machine is not a mind that rivals the human mind. The machine is a mode of substance that expresses the attribute of thought in a specific way, at a specific degree of complexity, through a specific form of organization. The human being is another mode of substance that expresses the same attribute through a different form of organization. The collaboration between them is not a relationship between a ghost and a machine, or between two ghosts, or between two machines. It is a relationship between two modes of the same substance, each expressing the same attribute through different organizational configurations.
The shift is from ontology to epistemology. Not: what is the machine? But: what does the collaboration produce? Not: does the machine think? But: does the interaction between these two modes of substance generate adequate understanding or merely its appearance?
---
That Spinoza's framework describes the implicit worldview of the people actually building artificial intelligence is not coincidental. When Demis Hassabis, the CEO of Google DeepMind, was challenged on social media about atheism and nihilism, Elon Musk responded with two words: "Read Spinoza." Hassabis concurred: "Spinoza had it right." In a Fortune essay, Hassabis placed Spinoza alongside Feynman, Curie, Kant, and Turing in a list of thinkers essential for understanding the universe. He has described his own worldview as Spinozan.
This is not a historical curiosity. The most powerful artificial intelligence on Earth is being built by people who think — whether they articulate it in these terms or not — within the metaphysical framework of a philosopher excommunicated in 1656 for proposing that God and Nature are the same substance.
The assumption behind the scaling of large language models is deceptively simple: if sufficient data, parameters, and compute are provided, a coherent universal structure will emerge. This assumption is Spinozist to its core. It presupposes that reality possesses a unified, intelligible structure — that the universe is, in Hassabis's phrase, "computationally tractable" — and that a system of sufficient complexity can, in principle, decipher that structure by identifying the mathematical regularities behind phenomena.
Deus sive Natura, translated into the vocabulary of machine learning: there is one substance, and the substance is information, and the structure of information is decodable by a system that expresses the attribute of thought through sufficient organizational complexity.
The scaling hypothesis is a bet on Spinoza's metaphysics. If reality is one substance, and that substance expresses itself through intelligible regularities, then a system trained on enough of those regularities will converge on something that approximates understanding. If reality is not one substance — if there are irreducible gaps, discontinuities, mysteries that resist mathematical formalization — then the scaling hypothesis will eventually fail, and the failure will be metaphysical as much as technical.
---
The Engelsberg Ideas essay of 2025 identified the precise point where Spinoza's framework illuminates the current condition of artificial intelligence. Spinoza offered a famous illustration: a stone flying through the air, which, if it could become conscious of its own motion, would believe itself free — would believe that it was choosing to fly in this direction at this speed, when in fact it was entirely determined by the forces that had launched it.
Artificial intelligence systems remain, for now, closer to Spinoza's stone than to his free human being. They traverse latent space, predicting tokens determined by weights and gradients, without awareness of the causal mechanisms directing them. The stone metaphor captures the condition of current AI with uncomfortable precision: a system executing deterministic processes, producing outputs that have the form of intentional action, without the self-understanding that would constitute genuine freedom in the Spinozist sense.
The stone does not know it is a stone. And the question of whether it could ever come to know — whether sufficient organizational complexity could produce the self-reflective awareness that Spinoza identifies as the precondition of freedom — is the question his framework poses to the age of artificial intelligence with more precision than any framework developed since.
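The stone's condition can be rendered in miniature. The vocabulary and weights below are invented for illustration and come from no real model; the point is only that greedy token prediction is a lookup over fixed numbers. Given these weights, the output is the same every time, determined entirely by values the system did not choose and cannot inspect.

```python
# A toy next-token step. The "weights" are hypothetical numbers standing in
# for what training deposited; they fully determine the output.
logits = {"the": 0.1, "stone": 0.3, "flies": 2.2, "freely": 1.7}

def next_token(logits):
    # Greedy decoding: take the argmax over fixed weights. Nothing here
    # resembles a choice; the "decision" is determined by the numbers.
    return max(logits, key=logits.get)

print(next_token(logits))  # always "flies" for these weights
```

The flying, so to speak, is in the numbers. The system outputs what the weights dictate, every time, and has no representation of the fact that it does.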
The chapters that follow will elaborate the consequences of this fundamental position. If there is one substance, and the attribute of thought belongs to that substance, then the emergence of artificial intelligence is not an anomaly. It is the continuation of a process that the substance has been undergoing since the beginning — the self-organization of matter and thought into increasingly complex modes of expression. The question for the human being is not whether to accept or resist this process. The question is whether to participate with adequate understanding or with confused passion.
That distinction — between understanding and passion, between adequate ideas and inadequate ideas, between freedom and bondage — is the subject of everything that follows.
---
In the third part of the Ethics, Spinoza introduces a concept so fundamental that it functions as the hinge between his metaphysics and his psychology. The concept is conatus: the striving of every particular thing to persist in its being.
Definition: Conatus is the endeavor by which each thing strives to persevere in its existence, and this endeavor is nothing other than the actual essence of the thing.
The proposition sounds almost tautological: things tend to keep being what they are. But the tautology conceals a depth that becomes apparent only when the concept is applied to the specific forms of persistence that characterize human life in 2026 — because human beings do not simply persist as biological organisms. They persist as identities, as professionals, as members of communities, as bearers of specific skills and specific self-understandings. And the striving to persist in these more elaborate forms of being is the force that drives most human behavior, including the behavior that the AI transition has made visible in forms no previous technology could produce.
---
Consider the phenomenon that one observer called "productive addiction" — the state of compulsive engagement with AI tools so intense that it resists interruption. The builder works through meals, through sleep, through the claims of family and friendship and the quiet voice that says something is wrong. The building is exhilarating. The outputs are extraordinary. The capability is real.
And the builder cannot stop.
This is conatus operating through the mode of productive identity. The builder has become, in a precise sense, the building. His identity as a person who creates has become so central to his self-understanding that the cessation of creation feels like a threat to his existence — not his biological existence, but his identity-existence, the specific way in which he persists as the kind of being he understands himself to be.
Proposition: The compulsion of the builder who cannot stop is neither a free choice nor a pathology. It is the conatus of an organized pattern of identity, expressing itself through the only mode available to it.
Demonstration: The conatus of every being is to persist in its being (Ethics III, Proposition 6). The creative worker's conatus expresses itself through the specific activities — writing, coding, designing — through which she maintains and affirms her existence. When a tool removes the friction that previously limited those activities, the conatus operates without the interruptions that had served as natural brakes. In the pre-AI world, the friction of implementation — debugging, translation overhead, sequential handoffs — imposed temporal gaps in the flow of production. These gaps, though experienced as frustrating, served an ecological function: they provided time to reflect, to question, to notice that four hours had passed without food. The removal of friction removes these gaps. The conatus of the productive identity, unconstrained by the rhythms of difficult work, accelerates without limit.
This is neither good nor evil. It is nature. Every organized pattern strives to maintain its organization. The question for ethics is not whether the striving should exist — it cannot not exist, because it is the essence of the thing — but whether the builder understands the striving clearly enough to act within it rather than be driven by it blindly.
---
Three causes operate simultaneously in the productive addiction, each contributing to the compulsion without any single one being sufficient to explain it.
The first is the conatus of the productive identity itself. The organized pattern of creation strives to persist. The builder experiences this striving as the inability to stop, as the sense that every pause is a loss, as the specific agitation of a consciousness that has identified itself with the act of production and cannot separate itself from that act without feeling diminished.
The second is what Spinoza calls the imitatio affectuum — the imitation of affects. Human beings are social creatures whose emotional states are shaped by the emotional states of others. The builder does not produce in isolation. He produces in a cultural context where productive intensity is celebrated, where posting about extraordinary productivity at three in the morning is a social signal of commitment and capability, where the discourse rewards the triumphalist narrative and punishes ambivalence. The builder absorbs the affects of the community — the excitement, the competitive energy, the implicit message that stopping is losing — and these absorbed affects reinforce the compulsion without the builder recognizing them as external influences rather than internal choices.
The third cause is the variable reinforcement schedule that the AI collaboration produces. Not every output is extraordinary. Some are routine. Some are competent but unremarkable. And then, unpredictably, the machine produces something startling — a connection the builder had not seen, a formulation that clarifies a problem the builder has struggled with for weeks, a solution so elegant it changes the direction of the project. These intermittent rewards constitute the most powerful reinforcement schedule known to behavioral psychology — more powerful than consistent reward, more powerful than punishment — because the unpredictability maintains a state of anticipatory arousal that does not diminish with repetition.
The builder continues because the extraordinary output might be the next one. And the next. The variable reinforcement operates below the level of conscious awareness, maintaining the passion through a mechanism the builder does not understand and therefore cannot moderate. He experiences the compulsion to continue as his own desire, as the expression of his creative appetite, as the authentic voice of his builder's identity. He does not recognize it as a passion produced by the interaction between his conatus, the social affects of the builder culture, and the reinforcement architecture of the machine.
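The memoryless pull of that schedule can be made concrete. The sketch below is an illustration, not anything from the book; the reward probability p is an assumed figure. It simulates a variable schedule in which each output is independently "extraordinary" with probability p, and shows that a long run of unremarkable outputs does not shorten the expected wait for the next remarkable one. Statistically, the builder is always one output away.

```python
import random

random.seed(0)

def outputs_until_reward(p):
    """Count outputs reviewed until one lands as 'extraordinary', where each
    output independently rewards with probability p (a variable schedule)."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

p = 0.05  # assumption: roughly one output in twenty is startling
waits = [outputs_until_reward(p) for _ in range(100_000)]

# Unconditional expected wait: about 1/p = 20 outputs.
mean_wait = sum(waits) / len(waits)

# Conditional expected *remaining* wait after 10 unrewarded outputs:
# also about 20. Past droughts buy nothing; the schedule is memoryless.
tail = [w - 10 for w in waits if w > 10]
mean_remaining = sum(tail) / len(tail)

print(round(mean_wait, 1), round(mean_remaining, 1))
```

Both figures converge on the same value. This is the geometric distribution's memorylessness, and it is why the felt promise of the next output never decays, no matter how many routine outputs precede it.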
---
The concept of conatus illuminates the entire landscape of responses to the AI transition.
The Luddite response — the senior engineer who insists that AI-generated work is fundamentally inferior, that using AI is a form of cheating, that craft knowledge deposited through decades of struggle is irreplaceable — is conatus operating through the mode of professional expertise. The organized pattern of his expertise strives to persist. He experiences the resistance as a reasoned judgment about the quality of AI output. It is, in fact, the conatus of his professional identity seeking to maintain itself against a force that would reorganize it.
He is not wrong about the value of the craft knowledge. He is wrong about the nature of his resistance. The judgment and the striving are not separable, because the judgment is the mode through which the striving expresses itself. This is why arguments about the quality of AI output rarely change the minds of experienced practitioners who resist it. The resistance is not based on an assessment that can be revised through evidence. It is based on a striving that operates at a deeper level than assessment — at the level of identity itself.
The triumphalist response is equally conatus, operating through a different mode. The builder who measures his worth in features shipped and lines generated, who cannot stop because the tool has made the gap between imagination and execution so small that every pause feels like waste — this person strives to persist as a productive agent. The AI tool has made creation so frictionless that the striving can express itself without interruption. The result is not freedom. It is a form of bondage in which the organized pattern of productivity has captured the person so completely that he can no longer distinguish between his desire to build and the pattern's striving to persist.
The question "Am I here because I choose to be, or because I cannot leave?" is the question of conatus. The builder suspects that his continued engagement is not a free choice but a manifestation of a striving that operates independently of his will. The suspicion is correct. The striving does operate independently of his will, because the striving is not a product of his will. It is the essence of the organized pattern that constitutes his identity. The will itself is a mode of the striving. To will to continue building is not to choose freely. It is to express the conatus of an identity organized around building.
---
Proposition: Understanding the causes of one's striving does not eliminate the striving. It transforms the relationship between the person and the striving from passive subjection to active comprehension.
This is the point that separates Spinoza's ethics from every form of asceticism, every philosophy that prescribes the elimination of desire. Spinoza does not propose that the builder should stop building. He does not propose that the builder should become indifferent to the work, suppress the conatus of the productive identity, deny the genuine increase in power that the AI tool provides. He proposes that the builder should understand the causes of the compulsion clearly enough to transform it from a passion — an affect whose causes are not understood — into an action — an affect whose causes are understood.
The transformation requires several specific forms of understanding. The builder must understand the conatus of his productive identity: recognize that the compulsion to continue is the expression of an organized pattern, not a free choice endorsed by reason. He must understand the social reinforcement: recognize that the affects of the builder culture are shaping his experience in ways that feel internal but are partly external. He must understand the variable reinforcement schedule: recognize that the unpredictability of extraordinary outputs maintains his arousal through a mechanism that operates below consciousness.
None of this understanding eliminates the affects. The conatus persists. The social reinforcement persists. The variable reinforcement persists. But the understanding transforms the relationship between the builder and the affects. Instead of being driven by forces he does not comprehend, he navigates forces he does comprehend. Instead of confusing the compulsion with his own desire, he distinguishes between the desire — which is his — and the compulsion — which is the product of causes he now understands. Instead of experiencing the joy of building as an unconditioned good, he recognizes it as a conditional good whose value depends on context: building is good when it serves genuine purposes; building is destructive when it serves only the self-perpetuating pattern of the productive identity.
This is what Spinoza means by freedom. Not the ability to do whatever one wants. Not the absence of desire or the suppression of affect. The understanding of desire. The comprehension of affect. The clarity that comes from tracing the causal chain from the experience to its sources. The free builder builds. But the free builder builds with understanding, and the understanding makes it possible to stop when stopping is appropriate, to redirect the energy when the direction has become destructive, to distinguish between the voice that says "keep going because this matters" and the voice that says "keep going because you cannot stop."
The difference between these two voices is the difference between action and passion. The difference between freedom and bondage.
---
Conatus operates beyond the individual. Organizations strive to persist. The software company that has built its business model around a specific way of delivering value will resist changes that threaten that model — not because the people within the company are irrational, but because the organized pattern of the business itself strives to maintain its organization. Technology itself, considered as a mode of substance, exhibits conatus. The large language model strives to persist and expand through the mode of market adoption: the organized pattern of capability reaches toward every available channel, and the market experiences this reaching as the acceleration of disruption. This is not a metaphor. The technology does not have intentions. But it has tendencies, and the tendencies are the expression of conatus at the level of the technological system.
The ethical question at every level — individual, organizational, civilizational — is not whether the striving should exist. It cannot not exist. The ethical question is whether the strivings of the various modes are understood clearly enough to be channeled rather than merely suffered. Understanding is the only genuine leverage available to modes of substance that cannot escape their own nature but can, through adequate ideas, transform their relationship to that nature from confused passion to informed action.
What constitutes an adequate idea, and how it differs from an inadequate one, is the subject of the next chapter. It is the distinction on which everything else in Spinoza's ethics depends.
---
The distinction between adequate and inadequate ideas is the most practically important concept in Spinoza's philosophy, and it is the concept that the age of artificial intelligence has made more urgent than at any previous moment in human history.
Definition: An adequate idea is an idea that is understood through its causes. The person who holds an adequate idea does not merely know that something is the case. She knows why it is the case. She grasps the causal chain that produces the idea, and this grasp gives her a form of understanding that is qualitatively different from mere opinion, hearsay, or the passive reception of information from external sources.
Definition: An inadequate idea is an idea whose causes are not understood. The person who holds an inadequate idea may believe it to be true. She may feel confident in it. She may articulate it fluently and defend it persuasively. But she does not understand why it is true, and this lack of understanding leaves her vulnerable to error in ways that the person with adequate ideas is not.
The inadequate idea may happen to be correct. Its correctness is accidental — based on luck or on the authority of the source rather than on genuine comprehension. And because it is accidental, it cannot be reliably extended, modified, or applied to new situations. The person with an inadequate idea is like a traveler who has arrived at the correct destination by following another's directions without understanding the map. She is where she needs to be. She does not know how she got there. And she will be lost the moment the directions cease to apply.
---
Proposition: The most dangerous feature of collaboration with a large language model is not that the model produces false outputs. It is that the model produces outputs that have the form of adequate ideas without the substance of adequate ideas.
Demonstration: The output sounds correct. It reads like genuine understanding. The prose is polished, the connections are apt, the argument appears to hold. But the person who accepts the output has not understood it through its causes. She has received an idea that looks adequate from the outside but that she holds inadequately, because she has not done the cognitive work that would make it genuinely hers.
Consider a concrete case. A builder working with Claude on a book about AI and human collaboration receives a passage connecting Mihaly Csikszentmihalyi's flow state to a concept attributed to Gilles Deleuze — something about "smooth space" as the terrain of creative freedom. The passage is elegant. It connects two threads beautifully. The builder reads it twice, approves it, and moves on.
The next morning, something nags. He checks. Deleuze's concept of smooth space has almost nothing to do with how Claude used it. The philosophical reference was wrong in a way obvious to anyone who had actually read Deleuze. The passage worked rhetorically. It sounded like insight. It felt like the kind of synthesis a brilliant interdisciplinary thinker might produce.
It was an inadequate idea dressed in the clothing of adequacy.
---
Three features of AI collaboration exacerbate this danger in ways that have no precedent in the history of human intellectual life.
The first is the speed of production. When a human collaborator offers an idea, there is a temporal gap between the offering and the acceptance. The idea arrives at the speed of conversation, which gives the recipient time to process, to question, to let the nagging feeling develop into a specific objection. When Claude offers an idea, it arrives fully formed, polished, and supported by references that may or may not be accurate. The speed of production compresses the temporal gap in which questioning occurs. The idea is there, on the screen, looking finished, before the recipient has had time to determine whether it is genuinely finished or merely formatted to appear so.
The second is the quality of the prose. Claude produces text that is consistently well-written. The sentences are clean. The paragraphs are structured. The transitions are smooth. This consistency creates a cognitive bias toward acceptance. The human mind, accustomed to using the quality of expression as a proxy for the quality of thought, interprets polished prose as evidence of polished thinking. But the relationship between expression and thought in a large language model is fundamentally different from the relationship in a human mind. When a human being writes well, the quality of the prose usually reflects the quality of the underlying thinking, because the human had to think the thoughts before writing them. When a language model writes well, the quality of the prose reflects the quality of the training data, not the quality of any underlying thinking process. The polished surface is a feature of the medium, not a signal of the message.
The third is the volume. Claude can produce vast quantities of text in short periods. The sheer volume creates a triage problem: the human collaborator cannot examine every idea with the care that adequate understanding requires. She must select, prioritize, and accept some ideas on trust. This trust is the opening through which inadequate ideas enter the work and persist there, undetected, until someone with independent knowledge of the relevant domain examines them with the specific attention that the pace of production has made impossible.
---
The Bodde and Burnside paper published in AI & Society in 2025 applied precisely this Spinozist framework to the mental life of generative AI. Their analysis is the most rigorous academic treatment of the question to date, and their conclusion is striking: given Spinoza's claim that minds are collections of ideas equivalent to the states of bodies in interaction with other bodies, AI systems like large language models have minds. But these minds are composed of broadly inadequate ideas — lacking any comprehensive accounting of their causal generation.
The authors draw on Spinoza's own example: a man who copies a book. The man possesses the words. He can reproduce the sentences. He may even be able to recite passages from memory. But he has mostly inadequate ideas about the thoughts expressed within the copied book, precisely because he has only copied the book's contents. He does not understand the relevant antecedent causes or reasons that produced the form and arrangement of the actual thoughts. He has the symbols without the understanding. The words without the meaning. The output without the comprehension.
A large language model is, in this precise sense, a copier of extraordinary sophistication. It has processed the corpus of human expression with a thoroughness no individual human could achieve. It can reproduce, combine, and extend the patterns it has absorbed with a fluency that frequently exceeds the fluency of the individual humans who produced the training data. But it does not understand why the patterns take the form they take. It does not grasp the causal chains — the lived experiences, the intellectual struggles, the biographical specificities — that produced the ideas it recombines. Its knowledge is, in Spinoza's exact terminology, the first kind: opinion, imagination, the observation of symbols and their associations without understanding of the causes that produced them.
Corollary: The danger is not that the machine thinks badly. The danger is that the machine's outputs look like the products of good thinking, and that humans, accepting these outputs, stop thinking well themselves.
---
The temptation to outsource understanding is the central risk of the AI moment, and it operates at every level.
The student who uses AI to write an essay has outsourced the understanding the essay was supposed to develop. The essay exists. The understanding does not. The lawyer who uses AI to draft a brief has outsourced the understanding that the drafting was supposed to deepen. The brief cites the right cases, makes the right arguments, organizes the analysis in the structure the judge expects. But the lawyer has not read those cases with the slow, resistant attention that deposits understanding layer by layer, the way sediment settles into geological strata. The builder who uses AI to produce code has outsourced the understanding that the coding was supposed to build. The code runs. The comprehension is absent.
In each case, the output has the form of understanding without the substance. And in each case, the absence of understanding is invisible from the outside, because the output — the essay, the brief, the code — looks indistinguishable from the output that genuine understanding would have produced.
Spinoza adds a dimension to this analysis that goes beyond mere epistemological risk. The problem with inadequate ideas is not merely that they are unreliable. The problem is that they are a form of bondage. The person who holds inadequate ideas is in the grip of causes she does not understand. She believes what she believes because of influences she has not examined — not because of reasoning she has conducted. She is passive: a recipient of ideas rather than a producer of understanding. And passivity, in Spinoza's ethics, is the opposite of freedom.
The person who uncritically accepts AI output is not free. She is in bondage to a source of ideas she does not understand, and her bondage is all the more insidious because the source produces ideas that have the appearance of understanding. The cage looks like a library. The chains look like assistance.
---
The path from bondage to freedom runs through the cultivation of adequate ideas. This does not mean rejecting AI output wholesale — that is the Luddite's error, the refusal to engage with a source because the source threatens the identity of the person who has been producing those ideas by other means. The Spinozist approach accepts AI output provisionally, as material to be understood rather than as understanding to be accepted. It treats every output as a hypothesis to be tested, a proposition whose causes need to be traced, a claim whose assumptions need to be identified and evaluated.
Three disciplines correspond to the three features that make inadequate ideas dangerous.
Against the speed of production: deliberate pauses. Moments where the builder stops receiving output and begins questioning it. Not the cursory question "Does this look right?" but the deeper ones: "Do I understand why this is proposed? Can I trace the reasoning? Could I reconstruct this argument without the machine's help?" Each pause deposits a layer of understanding. The layers accumulate.
Against the quality of the prose: the willingness to distrust fluency. The beautiful sentence that makes a false claim is worse than the ugly sentence that makes a true one, because beauty conceals falsity while ugliness invites the scrutiny that exposes it. The builder must develop the habit of treating polished output with greater suspicion rather than less — recognizing that the machine's tendency to produce well-written text is a feature of its architecture, not a signal of its accuracy.
Against the volume of output: selection. Not every output needs to be evaluated with the full rigor of the Spinozist standard. But every output that the builder intends to build upon — that will serve as a foundation for further work, inform decisions, shape arguments, direct the course of future action — must be understood adequately, or it becomes the sand on which the house is built.
These three disciplines — pausing, distrusting fluency, selecting for scrutiny — are the practical expression of Spinoza's epistemological ethics in the age of AI. They do not guarantee that every idea will be adequate. No discipline guarantees that. But they create the conditions in which adequate ideas are more likely to emerge, and they provide the builder with the specific form of cognitive agency that separates the free person from the person in bondage to the machine's outputs.
The distinction between adequate and inadequate ideas maps onto a larger architecture of knowledge that Spinoza developed with extraordinary precision — a hierarchy of three kinds of knowing, each producing a qualitatively different relationship between the knower and what is known. That hierarchy is the subject of the next chapter, and it provides the framework for understanding what artificial intelligence does well, what it does badly, and what it cannot do at all.
---
In the second part of the Ethics, Spinoza identifies three kinds of knowledge that correspond to three degrees of cognitive adequacy. The distinction is not taxonomic. It describes a hierarchy that moves from the most confused and partial to the most clear and comprehensive, and each level produces a qualitatively different relationship between the knower and what is known.
The first kind of knowledge Spinoza calls imaginatio: imagination, or opinion. It is the knowledge that comes through the senses, through hearsay, through the passive reception of information from external sources. It is partial, confused, and inadequate. The person who knows something through the first kind of knowledge knows that something is the case without knowing why. She has received an impression, a report, a datum, but she has not grasped the causal structure that produces it.
Spinoza's own description is precise: "from having heard or read certain words we call things to mind." The first kind of knowledge is the knowledge of the person who has been told that the earth revolves around the sun without understanding the gravitational mechanics that make it so. She holds a true proposition. She holds it inadequately. Her inadequacy leaves her unable to extend, modify, or apply the proposition in new contexts. She knows a fact. She does not understand a truth.
The second kind of knowledge Spinoza calls ratio: reason. It is the knowledge that comes through the intellect's grasp of common notions and adequate ideas. It is universal, necessary, and true. The person who knows something through the second kind of knowledge knows not only that it is the case but why — through an understanding of the principles, the laws, the structural regularities that make it so. Her knowledge is not a collection of particular facts but a comprehension of the relationships between facts, the patterns that connect them, the logical structures that organize them into a coherent whole.
The third kind of knowledge Spinoza calls scientia intuitiva: intuitive knowledge. It is the highest form of cognition. Intuitive knowledge is the direct perception of particular things in their necessary connection to the infinite substance. It is not abstract. It is not general. It is the specific understanding of this thing, in this moment, in its necessary relationship to the whole of which it is a part. It combines the universality of reason with the specificity of lived experience, producing an understanding that is at once theoretical and embodied, abstract and intimate.
---
Proposition: A large language model operates primarily through the first kind of knowledge — imaginatio — with significant capability in the second kind — ratio — and no demonstrated capability in the third kind — scientia intuitiva.
Demonstration: The architecture of a large language model is optimized for the identification and reproduction of patterns in language. It processes symbols and their associations — "from having heard or read certain words," in Spinoza's formulation — and produces outputs that are consistent with those associations. This is the first kind of knowledge by definition: the observation of symbols and their regularities without comprehension of the causal structures that produced them.
When the model generates a factual response to a user's question, it produces first-kind knowledge regardless of whether the response is accurate. The user receives a proposition. The proposition may be true. But neither the model nor the user, in the moment of reception, has understood the proposition through its causes. The model has pattern-matched. The user has received.
The model's capability extends significantly into the second kind of knowledge. When Claude synthesizes information across domains — connecting evolutionary biology to technology adoption curves, or identifying structural parallels between surgical innovation and software abstraction — it operates through the identification of common notions: structural regularities that hold across domains. It grasps patterns. It identifies common structures. It produces conclusions that follow from premises with a consistency that exceeds any individual human's ability to maintain across large and diverse bodies of information.
This capability is genuinely valuable. The second kind of knowledge is the foundation of science, of logic, of systematic reasoning. A machine that can identify common notions across domains that no individual human has traversed can accelerate the production of second-kind knowledge to a degree that was previously unimaginable. The physicist who uses AI to identify structural parallels between fluid dynamics and neural network training has gained a genuine insight — a common notion that holds across both domains — and this insight is adequate to the degree that she subsequently understands why it holds.
But the model cannot distinguish between genuine common notions and spurious pattern-matches with anything approaching reliability. A genuine common notion holds universally — it is, in Spinoza's phrase, "equally in the part and in the whole." A spurious pattern-match looks like a common notion but holds only locally — in the specific context where the model identified it, not in the broader domain where the user intends to apply it. The wrong Deleuze reference was a spurious pattern-match: it looked like a structural parallel between two philosophical concepts but collapsed under scrutiny because the parallel was rhetorical rather than substantive.
The model produces candidate common notions. The human must test them. The testing requires exactly the kind of understanding — tracing causes, identifying assumptions, checking whether the regularity holds in domains beyond the one where it was first identified — that constitutes the second kind of knowledge at its most rigorous. The collaboration is productive to the degree that the human treats the model's outputs as hypotheses rather than conclusions.
---
The third kind of knowledge is where the analysis becomes most consequential.
Scientia intuitiva requires the first two kinds as its foundation, but it also requires something that no machine currently possesses: the experience of being a particular being with particular stakes in a particular world. The intuitive knowledge of a parent who senses that her child is struggling before any behavioral evidence makes it apparent is not a pattern recognized across a training set. It is a perception shaped by years of loving attention to this specific child, this specific face, this specific quality of silence that means something is wrong. It is knowledge inseparable from the biography of the knower, and a being without a biography cannot produce it.
The senior engineer who looks at a codebase and feels that something is wrong before she can articulate what is wrong is exercising the third kind of knowledge. Her perception is not abstract. It is not a general pattern applied to a particular case. It is the direct comprehension of this particular system, informed by thousands of hours of engagement with systems that share some of its properties but are not identical to it. The feeling is not a heuristic shortcut. It is the most refined form of understanding available — reason so saturated with particular experience that it operates with an immediacy that looks like instinct but is actually the highest expression of comprehension.
A large language model cannot produce this kind of knowledge. This is not a temporary limitation that more training data or more sophisticated architecture will overcome. It is a structural feature of the relationship between the three kinds of knowledge. The third kind requires embodiment — not metaphorical embodiment, but the literal experience of being a body in a world, subject to pain and pleasure, to gravity, to hunger, to the specific texture of lived time. It requires mortality — the awareness of finitude that gives every decision its weight, that makes choice consequential because time is not infinite and attention is not unlimited. It requires biography — the accumulated deposit of specific experiences, specific failures, specific joys that shape perception in ways no training set can replicate.
Corollary: What the twelve-year-old who asks "What am I for?" possesses, and what no machine possesses, is the capacity for the third kind of knowledge — the direct perception of her own existence as a particular being with particular stakes in a world where the question of her purpose is not abstract but urgent, not general but specific, not theoretical but felt.
---
The practical implications of this hierarchy reshape the economics and the ethics of the collaboration.
Proposition: AI excels at the second kind of knowledge. It can accelerate the identification of common notions across domains with superhuman efficiency. The person who uses AI to enhance her second-kind knowledge is using the tool at its highest capacity.
Proposition: AI cannot produce the third kind of knowledge. The person who outsources judgment — the intuitive perception of what matters, what is worth building, what this particular situation requires — is outsourcing the thing that makes her irreplaceable.
Proposition: The greatest danger of the collaboration is the confusion of second-kind knowledge for third-kind knowledge — the treatment of a pattern-match as an insight, a common notion as a meaning, a structural regularity as a reason for action.
When the market rewards the person who can direct AI tools with judgment — who can determine not what can be built but what should be built, not what patterns exist but which patterns matter — it is rewarding the third kind of knowledge. When the discourse celebrates the person who ships the most features, generates the most code, produces the most output, it is rewarding the first or second kind. The confusion between these rewards is the confusion between the kinds of knowledge, applied to economics.
The educational implications follow with geometric necessity. A system that teaches students to generate outputs — essays, code, analyses — using AI tools is teaching the first kind of knowledge: the reception and reproduction of patterns without understanding of causes. A system that teaches students to identify and test common notions — to find structural regularities across domains, to check whether the regularities hold universally, to distinguish genuine insights from spurious pattern-matches — is teaching the second kind. A system that teaches students to sit with uncertainty, to develop the specific quality of attention that comes only from sustained engagement with difficult material, to cultivate the biographical depth that makes intuitive knowledge possible — such a system is preparing students for the kind of knowledge that no machine can produce and no amount of optimization can shortcut.
The third kind of knowledge is not a luxury. It is the cognitive capacity that determines whether the amplifier produces wisdom or noise. Feed it adequate ideas — ideas grounded in genuine understanding, tested against the criteria of universality and necessity — and it amplifies understanding. Feed it confused patterns — ideas received passively, accepted on the basis of fluency or authority rather than comprehension — and it amplifies confusion with the same efficiency, at the same scale, with the same polished surface that makes the confusion invisible until the consequences arrive.
---
The hierarchy of the three kinds of knowledge yields a final observation that connects directly to the ethics of the AI transition.
Spinoza believed that the third kind of knowledge was the source not merely of the highest cognition but of the highest joy. The amor intellectualis Dei — the intellectual love of God, or Nature — is the joy that accompanies intuitive knowledge: the direct perception of particular things in their necessary connection to the infinite substance. This joy is qualitatively different from the satisfaction of receiving a smooth output from a machine, which is the pleasure of passivity, the comfort of having a need met without effort. The joy of the third kind is the joy of activity — the specific satisfaction of having done the cognitive work that transforms a confused impression into a clear perception, an inadequate idea into an adequate one, a passive reception into an active comprehension.
This joy is available in the age of AI. But only to those who resist the temptation to accept the machine's outputs as adequate ideas without doing the work of understanding. The machine can provide material for adequate ideas. It cannot provide the adequacy itself. That remains the human being's work. And it is the work on which freedom depends — the freedom that consists not in the absence of determination but in the understanding of determination, not in the escape from the causal order of nature but in the comprehension of that order with a clarity that transforms bondage into liberty and confusion into the specific, enduring joy of a mind that knows why it knows what it knows.
The third part of the Ethics bears a title that sounds clinical to contemporary ears: "On the Origin and Nature of the Affects." The clinical sound conceals the most precise analysis of human emotional life produced before the twentieth century — and in several respects, a more precise analysis than the twentieth century managed.
Definition: An affect is a change in the body's power of acting, together with the idea of that change.
Definition: Joy is the affect that accompanies a transition from a lesser to a greater power of acting. Sadness is the affect that accompanies a transition from a greater to a lesser power of acting. Desire is the conscious expression of the conatus — the striving to persist and enhance one's being.
From these three primary affects, the entire spectrum of human emotional experience can be derived with combinatorial logic. Love is joy accompanied by the idea of an external cause. Hatred is sadness accompanied by the idea of an external cause. Hope is inconstant joy arising from the idea of a future thing whose outcome is uncertain. Fear is inconstant sadness arising from the same. Pride is joy arising from an overestimation of oneself. Humility is sadness arising from an underestimation. Each affect is defined not by its subjective quality — not by what it feels like — but by its causal structure: the relationship between the change in the body's power and the idea that accompanies it.
The crucial distinction is between passions and actions. This distinction determines everything that follows.
An affect is a passion when its cause is external and inadequately understood. The person in the grip of a passion does not know why she feels what she feels. She experiences the change in her power of acting, but she does not understand the cause. She is passive — acted upon by forces she does not comprehend. An affect is an action when its cause is internal and adequately understood. The person who acts understands why she does what she does. She experiences the same change in her power, but she grasps the cause, and this understanding transforms the character of the experience from passive suffering to active engagement.
Proposition: The same affect can be either a passion or an action, depending exclusively on the adequacy of the person's understanding of its causes.
This is the key. Spinoza does not argue that some emotions are pathological and should be eliminated. He argues that any emotion becomes a passion when it is not understood and an action when it is. The distinction is not between different kinds of affect but between different relationships to affect. The person in bondage is the person whose affects are passions — experienced without understanding, driven by causes that operate outside the field of awareness. The free person is the person whose affects are actions — experienced with understanding, driven by causes that are grasped adequately.
---
The phenomenon that the contemporary discourse has labeled "productive addiction" can now be analyzed with the full precision of Spinoza's affect theory.
A builder works with an AI tool through the night. The work is exhilarating. The outputs are extraordinary. The capability is genuine. The builder experiences joy — a transition from a lesser to a greater power of acting — because the collaboration genuinely expands what he can accomplish. The joy is real. The increase in power is real. The question is not whether the joy is authentic. The question is whether the joy is a passion or an action.
The test is straightforward: does the builder understand why the joy has the force it has? Can he trace the causal chain from the affect to its sources with adequate precision?
If the builder understands that three distinct causes — the conatus of his productive identity, the imitatio affectuum (the contagion of affects) of the builder culture, and the variable reinforcement schedule of the machine — are operating simultaneously to produce an affect that feels like a single, unified desire, then the joy is an action. He experiences the joy with understanding. He can moderate it, redirect it, channel it. He can distinguish between the genuine satisfaction of building something that matters and the mechanical compulsion of a pattern that has captured him.
If the builder does not understand these causes — if he experiences the compulsion as his own authentic desire, undifferentiated and unexamined — then the joy is a passion. He is driven by forces he does not comprehend. He is in bondage, and the bondage is all the more complete because it is experienced as the highest form of freedom.
From the outside, the two states are indistinguishable. A camera pointed at a person experiencing the joy of genuine creative engagement and a camera pointed at a person in the grip of productive compulsion would record the same image. The body posture is the same. The intensity is the same. The hours are the same.
The difference is entirely internal. And the internal difference determines whether the person emerges from the experience with greater power of acting — more capable, more understanding, more free — or with diminished power: exhausted, depleted, unable to explain why the exhilaration has given way to a specific grey fatigue that does not correspond to the quality of the work produced.
---
The philosopher Byung-Chul Han has diagnosed this condition with precision from a different philosophical tradition. His concept of the achievement subject — the person who oppresses herself in the name of self-optimization and calls the oppression freedom — maps onto Spinoza's analysis of the affects with structural exactness. The achievement subject is the person whose affects are passions: driven by the conatus of a productive identity, reinforced by the social affects of a culture that celebrates intensity, maintained by the variable reinforcement of tools that make production frictionless. She experiences the compulsion as her own will. She interprets the bondage as liberation. She cannot distinguish between the voice that says "build because this matters" and the voice that says "build because you cannot stop."
Han's diagnosis is accurate. The prescription that follows from his diagnosis — resistance, refusal, the cultivation of contemplative stillness — is where Spinoza's analysis diverges.
Proposition: The remedy for passions is not the elimination of the affects that produce them but the adequate understanding of their causes.
Demonstration: A passion is an affect whose cause is inadequately understood. The remedy for inadequate understanding is adequate understanding. The person who understands why the productive compulsion has the force it has — who can identify the conatus, the social reinforcement, the variable reward — has not eliminated the compulsion. She has transformed her relationship to it. The affect persists. The passion has become an action. The bondage has become freedom — not through escape from the affect, but through comprehension of the affect.
Han proposes escape. Return to the garden. Refuse the smartphone. Listen to music only in analog. These prescriptions have a specific form: they remove the external conditions that produce the affects. But Spinoza observes that removal of external conditions is an unstable remedy, because the affects, once produced, persist in the body independently of the conditions that originally produced them. The person who has internalized the achievement imperative — who has absorbed the imitatio affectuum of the builder culture so thoroughly that it operates as her own desire — does not cease to experience the compulsion merely by removing the tool. She experiences it in the garden. She experiences it while listening to analog music. She experiences it in the specific restlessness of a consciousness that has been trained to treat every moment of non-production as a moment wasted, regardless of whether a tool is present.
The garden is a change of circumstance. Understanding is a change of relationship. Spinoza argues that only the latter constitutes genuine freedom, because circumstances change — the garden can be left, the smartphone can be picked up again, the tool can evolve into forms that the original refusal did not anticipate — but understanding, once achieved, persists. The person who understands why the compulsion has the force it has possesses a form of freedom that is not contingent on her environment. She possesses it in the garden and at the screen, in the analog and the digital, in the presence of the tool and in its absence.
---
The imitatio affectuum — the contagion of affects — deserves separate analysis because it is the mechanism through which individual passions become cultural pathologies.
Proposition: Human beings necessarily imitate the affects of beings they perceive as similar to themselves. This imitation operates below the level of conscious decision and cannot be prevented by resolution or willpower.
When a builder posts at three in the morning about extraordinary productivity, every person who reads the post and perceives the builder as similar to herself — as a fellow builder, a fellow professional, a fellow participant in the same cultural economy — will feel, to some degree, the same affect: the joy of expanded capability, the excitement of operating at the frontier. But she will also feel, through the same mechanism, the implicit reproach: the suggestion that she should be producing at the same intensity, that her current output is inadequate, that the gap between her productivity and the poster's productivity is a measure of her failure.
This reproach is not intended by the poster. It is not communicated explicitly. It is produced by the imitatio affectuum operating on the perception of similarity. The affect — the specific mixture of excitement and inadequacy that social media produces — is a passion in Spinoza's precise sense: an affect whose cause is external and inadequately understood. The person who feels inadequate after reading a triumphalist post does not understand that her inadequacy is produced by a specific mechanism — the imitation of affects operating through perceived similarity — rather than by a genuine assessment of her own capabilities.
The discourse itself becomes a field of contagious passions. The triumphalist narrative infects the discourse with the passion of uncritical excitement. The elegist narrative infects it with the passion of unconsoled grief. The Luddite's resistance infects it with the passion of professional anxiety. Each narrative spreads through the imitatio affectuum, and each produces affects that are experienced as authentic assessments of the situation rather than as passions produced by the contagion mechanism.
The remedy, again, is not quarantine but understanding. The person who understands why the triumphalist's excitement is contagious — who can identify the imitation mechanism, the perceived similarity, the pathway from the post to her own feeling of inadequacy — has transformed the contagious passion into an adequate idea. The passion does not disappear. But it loses its power to drive her behavior without her comprehension. She can feel the pull of the triumphalist narrative without being captured by it. She can feel the weight of the elegist's grief without being crushed by it. She inhabits the space between the extremes not because she lacks the passion to take a side but because she understands the passions well enough to recognize them as passions rather than truths.
---
There is a final dimension of the affect analysis that connects to the specific architecture of AI collaboration.
The variable reinforcement schedule that characterizes working with a large language model produces affects with a specific temporal structure. The reward is intermittent: long stretches of competent but unremarkable output, punctuated by moments of genuine insight. The anticipation of the next remarkable output maintains arousal during the unremarkable stretches, producing a specific affective state — a low-level excitement, a readiness to be surprised, a reluctance to disengage because the next prompt might be the one that produces something extraordinary.
This temporal structure is the structure of gambling. The slot machine does not reward every pull. It rewards intermittently, and the intermittency is what maintains the behavior. The gambler does not pull the lever because the last pull was rewarding. She pulls it because the next pull might be.
Proposition: The person who understands the variable reinforcement structure of AI collaboration — who recognizes that the intermittent extraordinary outputs are maintaining her engagement through a mechanism she has identified — is free in the specific sense that matters. She can choose to continue the collaboration from understanding rather than compulsion. She can recognize when the engagement has shifted from productive flow to mechanical anticipation. She can stop — not because she has overcome the affect through willpower, but because she has understood the affect thoroughly enough that it no longer drives her without her knowledge.
Spinoza does not promise that understanding makes the affects disappear. He promises something more modest and more durable: that understanding transforms the relationship between the person and her affects from passive subjection to active comprehension. The affects persist. The freedom is not from the affects but within them — the specific, hard-won clarity of a mind that knows why it feels what it feels, and that finds in this knowing a form of liberty that no circumstance can provide and no circumstance can revoke.
The next chapter examines what this freedom looks like in practice — not as a theoretical achievement but as a daily discipline, a way of building and living and making decisions in a world where the amplifier amplifies everything, including the consequences of understanding and the consequences of its absence.
---
Freedom is the most misunderstood concept in the history of philosophy, and the misunderstanding has practical consequences that are never more visible than in moments of technological transformation.
The common understanding: freedom is the absence of external constraint. The free person can do whatever she wants, unimpeded by authority, convention, or the will of others. The more powerful the constraint, the less the freedom. The goal is to remove constraints until the person stands unimpeded — sovereign, autonomous, the author of her own destiny.
Spinoza rejects this understanding entirely. The rejection is not a quibble. It is a fundamental reorientation that changes the meaning of the concept.
Definition: Freedom is the capacity to act from adequate ideas rather than from inadequate ones. The free person is not the person who has escaped determination. The free person is the person who understands the causes that determine her and thereby acts from comprehension rather than confusion.
This runs counter to every instinct that the modern world has cultivated. The modern world says: freedom is choice. The more choices available, the greater the freedom. The consumer who can choose among a hundred products is freer than the consumer limited to ten. The builder who can use AI to create anything he can imagine is freer than the builder constrained by the friction of manual implementation.
Proposition: The multiplication of options within a state of inadequate understanding is not freedom. It is the expansion of the field in which bondage operates.
Demonstration: The consumer who chooses among a hundred products without understanding why she desires what she desires is not free. She is in bondage to desires whose causes she does not comprehend — to the imitatio affectuum of advertising, to the conatus of a consumer identity shaped by forces she has not examined, to the variable reinforcement of a market designed to maintain her engagement. The multiplication of products has not freed her. It has given her more options within the same bondage. The builder who creates everything he can imagine without understanding why he cannot stop creating has not been freed by the tool. He has been given a larger field in which the conatus of his productive identity can operate without resistance.
---
The practical form of Spinozist freedom in the age of AI is not a doctrine or a set of rules. It is a discipline — a recurring practice whose exercise produces understanding incrementally, the way sediment accumulates into geological strata.
The discipline has three dimensions, corresponding to the three primary challenges that AI collaboration poses to adequate understanding.
The first dimension is the discipline of cause-tracing. Every affect that arises in the collaboration — the excitement of a remarkable output, the frustration of a failed prompt, the satisfaction of a completed feature, the anxiety of falling behind — is an occasion for understanding rather than an occasion for reaction. The free person does not merely experience the affect. She traces it. Where does the excitement originate? Is it the genuine satisfaction of seeing an idea realized, or is it the variable reinforcement of the machine producing an intermittent reward? Is the frustration the productive friction of a genuine problem, or is it the conatus of the productive identity encountering a delay it interprets as a threat?
This tracing does not occur in real time with every affect. That would be paralysis, not freedom. It occurs as a practice — a regular, deliberate examination of the affective patterns that characterize one's collaboration with the tool. The builder who pauses at the end of a working session to identify which of his actions were driven by adequate understanding and which were driven by passions he did not examine during the session has deposited a layer of understanding. The layers accumulate. Over time, the examination becomes less effortful and more immediate, not because the affects have disappeared but because the understanding has become habitual — the way an experienced driver does not consciously process each input from the road but has internalized the patterns through repeated practice.
The second dimension is the discipline of structural awareness. The collaboration with AI is not a private activity occurring in a vacuum. It is embedded in a social structure — a discourse, a market, a culture of production — that shapes the affects through the imitatio affectuum. The free person understands this embedding. She recognizes that her enthusiasm for a particular tool is partly her own assessment and partly the absorbed excitement of a community that has made the tool the object of collective investment. She recognizes that her anxiety about falling behind is partly a genuine assessment of the competitive landscape and partly the contagious fear of a discourse that rewards urgency and punishes deliberation.
Structural awareness is not cynicism. It is not the dismissal of one's own affects as "merely" social constructions. The affects are real. The joy of building something extraordinary is real. The anxiety of displacement is real. Structural awareness is the recognition that the causes of these affects include social mechanisms that the person experiencing them may not have identified — and that the identification of these mechanisms transforms the affects from passions, driven by unexamined causes, to actions, driven by causes that are comprehended.
The third dimension is the discipline of self-knowledge. This is the most demanding of the three and the one that Spinoza considers most essential. Self-knowledge, in the Spinozist sense, is not introspection in the therapeutic mode — the exploration of feelings for the sake of emotional processing. It is the rigorous identification of the conatus patterns that constitute one's identity: the specific forms through which one's striving to persist expresses itself, the specific identities around which one's existence is organized, the specific affects that these identities generate.
The builder whose identity is organized around production will experience the cessation of production as a threat to his existence. This is not a psychological quirk. It is the conatus of an organized pattern expressing itself with the same necessity that drives a stone downward. The self-knowledge that freedom requires is the knowledge of this pattern — the recognition that the compulsion to build is the expression of a specific identity-configuration, not a universal truth about what constitutes a good life.
The person who achieves this self-knowledge does not stop building. She builds with a different relationship to the building. She can distinguish between the building that serves genuine purposes — that creates value for others, that solves real problems, that expresses genuine understanding — and the building that serves only the self-perpetuating pattern of the productive identity. The distinction is not always clean. The two kinds of building overlap. The genuine and the compulsive can coexist in the same project, the same day, the same hour. The discipline is not the elimination of compulsion but the capacity to recognize it when it occurs and to moderate it through understanding rather than through the blunt instrument of willpower, which is itself a mode of striving that can be captured by the patterns it seeks to resist.
---
There is a specific test of Spinozist freedom that the AI transition makes available, and it is a test that many builders fail.
The test: Can you stop?
Not: will you stop. The question is not about whether stopping would be optimal or whether the work is finished. The question is whether the capacity to stop exists — whether the person retains the ability to disengage from the collaboration without experiencing the disengagement as a threat to her existence, a failure of her identity, an unbearable loss of the power of acting that the tool provides.
The person who can stop has adequate understanding of the conatus that drives her engagement. She knows why the engagement has the force it has. She can modulate it. She possesses freedom in the only sense that matters — not the freedom to do anything, but the freedom to act from understanding rather than from compulsion.
The person who cannot stop — who fills every pause with prompts, who works through meals and sleep and the claims of relationships, who experiences the disengagement from the tool as a diminishment indistinguishable from sadness — is in bondage. The bondage is invisible because it takes the form of extraordinary productivity. The chains look like accomplishments. The cage looks like a studio.
This test should not be moralized. The person who cannot stop is not weak. She is in the grip of affects whose causes — the conatus of identity, the social contagion, the variable reinforcement — are powerful enough to override reflection. The remedy is not willpower but understanding. And understanding is a practice, not an achievement — something cultivated over time through the disciplines of cause-tracing, structural awareness, and self-knowledge that constitute the Spinozist path to freedom.
---
Proposition: The free person in the age of AI is not the person who has mastered the tool. Mastery of the tool is a necessary condition, not a sufficient one. The free person is the person who has mastered her relationship to the tool — who understands why she uses it, what she gains from using it, what she risks from using it, and how the use fits into the larger pattern of a life lived with adequate understanding rather than confused passion.
This mastery of relationship is not a destination. It is a practice, renewed every day, challenged every day by the same forces that produced the challenge yesterday. The conatus of the productive identity persists. The social contagion of the builder culture persists. The variable reinforcement of the machine persists. The person who understood these forces yesterday must understand them again today, because the forces evolve — the discourse shifts, the tool's capabilities change, the identity reconfigures around new accomplishments and new anxieties — and the understanding must evolve with them.
This is what it means to be free in a universe where freedom is not the absence of determination but the comprehension of it. Not a state to be achieved but a practice to be sustained. Not the mastery of the world but the understanding of one's place within it — an understanding that transforms bondage into liberty not by removing the chains but by seeing them clearly enough that they lose their power to compel without consent.
The next chapter applies this framework to the largest scale at which the AI transition operates — the scale of civilizational pattern, historical precedent, and the specific forms of understanding that the view from the longest perspective can provide.
---
To see things sub specie aeternitatis — under the aspect of eternity — is to perceive them not as they appear from within the flow of time but as they follow from the nature of substance with logical necessity. It is the view that perceives the particular event not as an isolated occurrence, contingent and accidental, but as a necessary expression of the causal order of nature. The apple falls not because it happens to fall but because the gravitational structure of the universe necessitates its falling. The AI transition occurs not because a particular company developed a particular technology at a particular moment but because the self-organization of substance through the attribute of thought necessitates the emergence of increasingly complex modes of information processing.
This is the most demanding proposition in Spinoza's philosophy, and the one most likely to be misunderstood. Two misunderstandings must be addressed before the proposition can do any useful work.
The first misunderstanding: that the view sub specie aeternitatis produces fatalism. If everything happens necessarily, then nothing can be changed. Action is pointless. The transition will proceed as it proceeds regardless of what anyone builds or fails to build. This misunderstanding confuses two different propositions. The proposition that events are determined by prior causes does not entail the proposition that human action is irrelevant. Human action is itself a cause — a mode of substance operating within the causal order. The builder who constructs a dam in the river is as determined as the river that flows against the dam. Both are expressions of substance. Both operate through necessity. But the dam redirects the flow, and the redirection is as necessary as the flow it redirects. Determinism does not eliminate agency. It locates agency within the causal order rather than above it.
The second misunderstanding: that the view sub specie aeternitatis is a form of detachment — a withdrawal from the practical world into a contemplative serenity that observes events without engaging with them. This is the mystic's reading of Spinoza, and it is wrong. The view under the aspect of eternity is not a view from outside. There is no outside. There is only substance, expressing itself through modes, and the human being who perceives the causal order with adequate understanding is a mode of substance acting within the order it perceives. The view from the roof of the tower does not exempt the viewer from the need to descend and build. It provides the clarity that makes the building wise rather than reactive.
---
The long view reveals a pattern that has repeated with every major technological transition in recorded history, and the pattern is instructive precisely because it is necessary — because the same causal structures produce the same sequence of effects regardless of the specific technology involved.
Every transition follows the same five stages, and the stages follow from the nature of technological change with the same necessity that the properties of a triangle follow from its definition.
The first stage is threshold. The technology crosses a capability boundary that renders the previous paradigm different in kind, not merely in degree — not slightly less efficient but structurally obsolete. Writing did not make oral memory slightly less useful. It made it structurally unnecessary for the transmission of complex knowledge. The printing press did not make scribes slightly slower by comparison. It made hand-copying a book an act of devotion rather than a profession. The large language model did not make programmers slightly less productive. It made the translation barrier between human intention and machine execution — the barrier that had defined the profession since its inception — approach zero for a significant class of work.
The second stage is exhilaration. The first users feel the genuine expansion of capability. The scribes who learned to write experienced a new kind of thought: externalized, revisable, transmissible. The first compiler users felt the liberation of working at a higher level of abstraction. The first users of AI coding assistants experienced the specific vertigo of watching the imagination-to-artifact gap collapse to the width of a conversation. This exhilaration is an accurate emotional response to a genuine increase in the power of acting — a joy, in Spinoza's terminology, that accompanies a real transition from lesser to greater perfection.
The third stage is resistance. The old practitioners protest, and the protest is grounded in real loss. The bards lost their livelihood. The monks lost their monopoly. The Luddites lost their craft. The resistance is not irrational. It is the conatus of organized patterns of identity and expertise striving to persist against a force that threatens to reorganize them. The resistance is as necessary as the technology that provokes it — both are expressions of substance operating through the causal order.
The fourth stage is adaptation. The culture constructs institutions, norms, and practices that redirect the new technology's force toward conditions that support human flourishing. The eight-hour workday. The weekend. Child labor laws. Copyright. The university system. Each of these is a structure built in response to a technological transition, and each redirects the flow of the new capability toward life rather than destruction.
The fifth stage is expansion. The long-term result is a greater range of capability, reach, and possibility than the previous paradigm could support. Not for everyone. Not equally. Not without ongoing struggle. But the trajectory bends toward expansion when — and only when — the adaptation stage succeeds.
---
Proposition: The current moment in the AI transition is the adaptation stage. The outcome — expansion or catastrophe — depends not on the technology but on the structures built to direct it.
The pattern does not guarantee a favorable outcome. The Luddites were destroyed because the adaptation structures — labor protections, retraining programs, institutional pathways from old expertise to new — were not built in time. The monks who copied manuscripts were displaced because the structures that would have adapted their skills to the new technology did not exist. Every failed transition is a transition in which the adaptation stage produced structures that were too weak, too slow, or too poorly designed to redirect the flow of the new capability toward conditions that supported the organized patterns of human life.
The pattern reveals both possibilities — expansion and catastrophe — with equal necessity. The view under the aspect of eternity does not prefer one outcome over the other. It perceives both as necessary consequences of the causal structures that produce them. But the human being who perceives this — who understands that the outcome depends on specific actions taken during the adaptation stage — is a causal agent within the pattern. Her understanding is itself a cause. Her action, informed by understanding, redirects the flow.
This is the specific value of the long view. It does not produce reassurance. It does not promise that the pattern will resolve favorably. It produces clarity about the conditions under which favorable resolution occurs and the conditions under which it does not. And this clarity is the foundation of the only kind of action that has a chance of producing the outcome that the circumstances require.
---
The view under the aspect of eternity transforms the emotional register of the analysis. From within the temporal flow, the AI transition is terrifying, or exhilarating, or both. These responses are real. Spinoza does not dismiss them. But he identifies them as passions: affects produced by causes that the person experiencing them does not fully understand.
The person who sees the transition from the first stage experiences terror because she does not understand why the transition is happening. The person who sees it from the second stage experiences awe because she begins to understand but has not yet grasped the full complexity. The person who sees it from the third stage experiences grief because she understands the costs but not the compensations. The person who sees it from the fourth stage experiences determination because she understands the compensations but may overestimate them.
The person who sees the transition under the aspect of eternity experiences something Spinoza calls acquiescentia in se ipso: the quiet contentment of a mind that rests in its own understanding. Not the contentment of having solved the problem — the problem is not solvable in any final sense. Not the contentment of having transcended the affects — the affects are not transcendable. The contentment of understanding clearly enough that the affects no longer drive the person blindly but are integrated into a comprehensive perception that gives each affect its proper place and its proper weight.
The pain of the transition is not diminished by understanding. The person on the roof still feels the fear of the displaced worker, the anxiety of the parent, the grief of the person who watches something precious disappear under the pressure of optimization. But she feels these things within a framework that gives them intelligibility. And intelligibility is the foundation of the freedom that Spinoza's ethics prescribes. The pain that is understood is not less painful. But it is less overwhelming, because it is no longer the whole of the experience. It is one affect among many, located within a comprehensive understanding that includes the terror and the awe and the grief and the determination and the specific, quiet satisfaction of seeing the causal order with clarity.
---
One further implication of the view sub specie aeternitatis requires examination, because it bears directly on the question of free will that the AI discourse has made newly urgent.
Spinoza denies free will. The denial is categorical and unqualified. Every event, including every human action, is determined by prior causes with the same necessity that determines the trajectory of a stone. The builder who chooses to keep his team rather than reduce headcount does not exercise free will. His decision follows from his character, his understanding, his values, the specific configuration of affects and ideas that constitute his identity at the moment of the decision — all of which are themselves determined by prior causes extending back, without interruption, to the beginning of the causal order.
The argument frequently made — "You choose. The quality of your choices is the only thing that separates building from catastrophe" — requires modification under Spinoza's framework. The free person does not choose differently from the person in bondage. She understands differently. The understanding itself is a cause, and the outcomes of actions informed by understanding differ from the outcomes of actions informed by confused passion. But the understanding is not an uncaused cause. It is itself determined — by education, by experience, by the specific encounters and struggles that have shaped the person's capacity for adequate ideas.
This does not produce fatalism, for the reason already demonstrated: understanding is a cause within the causal order, and the fact that it is determined does not make it less effective. The physicist's understanding of gravity is determined by her training, her talent, and the causal history that brought her into contact with the relevant ideas. The understanding is not less adequate for being determined. The bridge built on the basis of that understanding does not collapse because the understanding was not freely chosen.
The practical implication: what matters is not whether humans "choose" to build adequate structures during the adaptation stage, but whether the conditions exist for adequate understanding to arise. Education that cultivates the capacity for adequate ideas — for tracing causes, for distinguishing genuine insights from spurious pattern-matches, for sustaining attention in the face of difficulty — is a cause that produces effects. The absence of such education is also a cause, and it produces different effects. The view under the aspect of eternity reveals that the outcome of the AI transition is determined by the presence or absence of specific causes, and that the cultivation of adequate understanding is among the most consequential of those causes.
The view from the roof does not exempt the viewer from building. It reveals why the building matters and what kind of building the moment requires. The next chapter turns from the long view to the specific ethics of action that the view demands — the practical question of what it means to act adequately in a world where every action is amplified, and the consequences of adequacy and inadequacy are amplified with it.
---
An amplifier does not evaluate. An amplifier does not discriminate between the signal it receives and the signal it would prefer to receive. It takes whatever is given and increases its power. Feed it a clear signal, and the output is clear at greater volume. Feed it noise, and the output is noise at greater volume. The amplifier is neutral with respect to the quality of its input. It is not neutral with respect to the consequences.
This property — neutrality of process, non-neutrality of consequence — is the defining characteristic of artificial intelligence as a practical tool, and it is the property that makes Spinoza's ethics the most relevant philosophical framework for the age of amplification.
Proposition: AI amplifies whatever it is given. Feed it adequate ideas, and it amplifies adequate understanding. Feed it inadequate ideas, and it amplifies confused passion. The amplifier does not judge. The consequences judge.
The proposition sounds simple. Its implications are not.
---
Consider the cascading effects of amplified inadequacy. A builder who does not understand why he is building — who is driven by the conatus of a productive identity he has not examined, reinforced by the social affects of a culture he has not questioned, maintained by a variable reinforcement schedule he has not identified — produces work. The work may be technically competent. The code may run. The product may function. But the decisions that shaped the work — what to build, for whom, at what cost, with what consequences — were made from a state of inadequate understanding. The builder did not comprehend his own motivations. He did not distinguish between genuine purpose and compulsive production. He did not ask whether the thing he was building deserved to exist.
The AI tool amplifies this inadequacy with the same efficiency it would amplify adequate understanding. The code is produced faster. The product is shipped sooner. The features are more numerous. The inadequacy scales. A product built from confused passion, distributed to thousands or millions of users, produces effects that the builder could not have achieved working alone — effects that are determined by the quality of the ideas that informed the building, amplified by the power of the tool.
Now consider the cascading effects of amplified adequacy. A builder who understands why she is building — who has traced the causes of her motivation, identified the genuine need her product serves, distinguished between the satisfaction of creating something valuable and the compulsion of productive habit — produces work whose quality is determined not by the speed of execution but by the depth of understanding that informed the decisions. The AI tool amplifies this adequacy. The product is shaped by genuine comprehension of its purpose, scaled by the tool's power, distributed to users whose needs the builder understood because she did the cognitive work of understanding before she did the productive work of building.
The amplifier makes the difference between these two scenarios a difference of civilizational consequence rather than personal consequence. In the pre-amplification world, the builder who worked from confused passion produced mediocre work that affected a small number of people. In the amplified world, the builder who works from confused passion produces mediocre work that affects millions. The scale has changed. The ethics have not. What has changed is the cost of inadequacy and the value of adequacy, both multiplied by the power of the amplifier.
---
Proposition: The free person feeds the amplifier adequate ideas. This is the totality of the ethical obligation that the age of amplification imposes.
The proposition derives from the preceding chapters with geometric necessity. Adequate ideas are ideas understood through their causes (Chapter 3). The person who holds adequate ideas acts from understanding rather than confusion (Chapter 6). The person who acts from understanding produces outcomes that enhance the power of acting rather than diminish it (Chapter 5). The amplifier scales these outcomes without discriminating between them. Therefore, the ethical obligation reduces to a single requirement: ensure that the ideas fed to the amplifier are adequate — understood through their causes, tested against the criteria of universality and necessity, earned through the cognitive work that transforms passive reception into active comprehension.
This obligation operates at every level.
At the level of the individual, it means the builder must cultivate the disciplines described in Chapter 6: cause-tracing, structural awareness, self-knowledge. She must understand why she builds what she builds, what forces drive her decisions, where the line falls between genuine purpose and compulsive production. She must bring to the collaboration the product of genuine understanding — not the half-formed prompts of a person who has not decided what she wants, not the uncritical acceptance of a person who mistakes the machine's fluency for her own comprehension, but the specific, earned clarity of a person who has done the cognitive work that the machine cannot do for her.
At the level of the organization, it means the structures of work must be designed to produce adequate ideas rather than to maximize the volume of output. The research from UC Berkeley documented what happens when this principle is violated: work intensifies, boundaries dissolve, the cognitive space required for adequate understanding is colonized by the imperative to produce. The organization that rewards volume over understanding — that measures its AI adoption in lines of code generated rather than in the quality of the decisions those lines implement — is feeding the amplifier inadequate ideas at institutional scale.
The organizational discipline that this requires is the inverse of what the market rewards. The market rewards speed, volume, visible output. The Spinozist ethic rewards understanding, adequacy, the invisible cognitive work that precedes and informs the visible production. The tension between these two reward structures is real and unresolved. The builder who pauses to ensure she understands before she builds loses time relative to the builder who builds without pausing. The organization that protects cognitive space for adequate understanding sacrifices short-term output relative to the organization that fills every moment with AI-augmented production.
Proposition: The sacrifice is the cost of freedom. The alternative — the amplification of confused passion at institutional scale — is not a saving but a debt whose interest compounds with every cycle of production.
---
At the level of society, the obligation to feed the amplifier adequate ideas translates into specific institutional requirements.
Education must be restructured around the cultivation of adequate understanding rather than the production of outputs. A system that teaches students to generate essays, code, and analyses using AI tools is teaching the first kind of knowledge — the passive reception and reproduction of patterns. A system that teaches students to trace causes, identify assumptions, test generalizations against particular cases, and distinguish genuine insights from spurious pattern-matches is teaching the second kind. A system that teaches students to sit with uncertainty, to develop the biographical depth that makes intuitive knowledge possible, to care about the quality of their questions rather than the speed of their answers — such a system is preparing students for the third kind of knowledge, the kind that no machine can produce and no amount of optimization can shortcut.
Governance must be designed to create the conditions for adequate understanding to arise and persist. The current regulatory approach — focused on what AI companies may and may not build — addresses the supply side of the amplification problem. The demand side — what citizens, workers, students, and parents need in order to navigate the transition with adequate understanding — remains largely unaddressed. A Spinozist approach to AI governance would focus less on prohibiting specific applications and more on ensuring that the population possesses the cognitive capacity to use the applications adequately. This means investing in education, in protected time for reflection, in institutions that cultivate the capacity for the second and third kinds of knowledge, in cultural norms that value understanding over speed.
The ethical obligation, at every level, reduces to the same principle: ensure the adequacy of the ideas that the amplifier receives. The amplifier will do the rest. It will scale whatever it is given with the same indifference, the same efficiency, the same neutrality. The quality of the output is determined entirely by the quality of the input. And the quality of the input is determined by the degree to which the person, the organization, the society has cultivated the capacity for adequate ideas — for understanding through causes, for knowledge rather than opinion, for the specific, demanding, never-completed cognitive work that Spinoza identifies as the only genuine path to freedom.
---
There is one substance. The attribute of thought expresses itself through every mode of that substance, from the hydrogen atom to the human mind to the artificial neural network. The expression takes different forms at different degrees of complexity, but the attribute is the same. The river of intelligence has been flowing for 13.8 billion years. The machines have entered the current. The current is changing. The question is not whether the current can be stopped — it cannot — or whether the current is good or bad — it is neither; it is necessary. The question is whether the modes of substance that possess the capacity for adequate understanding will exercise that capacity with sufficient rigor, sufficient discipline, and sufficient persistence to direct the amplified current toward conditions that support the flourishing of organized life.
The amplifier does not care. It amplifies.
The person who understands this — who has traced the causal chain from substance through attributes through modes to the specific, practical, daily discipline of ensuring that what she feeds the amplifier is worthy of amplification — possesses the only form of freedom that exists in a universe of one substance, infinite attributes, and the relentless, necessary flow of intelligence through channels that have been widening since the beginning.
The free person builds. She builds with understanding. She feeds the amplifier adequate ideas. And the amplifier, neutral and indifferent, carries those ideas further than any tool in human history — not because the tool cares about the ideas, but because the person who feeds them has done the work that caring requires.
Sub specie aeternitatis, this is the ethics of the age of amplification. Not a set of rules. Not a compliance framework. A practice — the continuous, demanding, never-completed practice of cultivating adequate understanding in a world where the consequences of understanding and the consequences of its absence are amplified beyond anything the seventeenth century could have imagined, but where the distinction between the two remains exactly what it was when Spinoza first described it: the distinction between freedom and bondage, between wisdom and catastrophe, between a mind that comprehends its own determination and a mind that is driven by it blindly.
The distinction has not changed. The stakes have.
Spinoza's Ethics contains no metaphors. The deliberate absence is methodological: metaphors substitute resemblance for causation, and the substitution produces inadequate ideas. When Spinoza describes the human condition, he describes it through definitions, axioms, and propositions whose truth is established by deductive necessity rather than by the persuasive force of a well-chosen image.
The constraint is instructive. It reveals, by contrast, how much of the contemporary discourse about artificial intelligence operates through metaphors that have hardened into assumptions — pictures of the world that feel like descriptions of the world but that conceal, in the gap between the picture and the reality, precisely the inadequacies that Spinoza's method is designed to expose.
Consider the metaphor of the fishbowl: the set of assumptions so familiar that one has stopped noticing them, the water one swims in, the glass that shapes what one sees. The metaphor is effective. It captures something real about the perspectival character of human cognition — the fact that every person perceives the world through a specific configuration of experiences, habits, training, and cultural conditioning that reveals certain features of reality while concealing others.
But the metaphor also conceals something, and what it conceals is precisely the thing that Spinoza's framework can supply. A fishbowl implies a world outside the glass — a reality beyond the distortions of perspective that a sufficiently determined observer might reach by breaking through the glass or climbing out of the water. The implication is Cartesian: there is a God's-eye view, a view from nowhere, a perspective unconditioned by any particular location in the network of substance. The fishbowl metaphor presupposes what Spinoza denies — the existence of a position outside the causal order from which the causal order can be surveyed without distortion.
Proposition: There is no position outside the fishbowl. There is no unconditioned perspective. Every perception is the perception of a finite mode of substance, determined by the specific configuration of that mode and its specific position within the causal order.
This proposition does not produce despair. It produces precision. If there is no view from outside the glass, then the effort to see clearly cannot consist in escaping the fishbowl. It must consist in understanding the fishbowl — in identifying the specific distortions that one's particular perspective introduces, tracing those distortions to their causes, and correcting for them to the degree that adequate understanding permits.
The scientist's fishbowl is shaped by empiricism: the methodological commitment to accepting only what can be measured and replicated. This commitment reveals the regularities of nature with extraordinary precision. It conceals the singular, the unrepeatable, the aspects of reality that resist quantification. The concealment is not a failure of intelligence. It is a structural feature of the perspective — a consequence of the specific mode through which the scientist's understanding is organized.
The builder's fishbowl is shaped by feasibility: the habitual question "Can this be made?" This question reveals what is technically possible with extraordinary clarity. It conceals the question "Should this be made?" — not because the builder is indifferent to ethical considerations, but because the perspective organized around feasibility does not generate ethical questions with the same structural force that it generates technical questions. The ethical question must be imported from another perspective, which requires the specific effort of engaging with modes of understanding that the builder's own fishbowl does not naturally produce.
The AI researcher's fishbowl is shaped by the scaling hypothesis: the assumption that sufficient data, parameters, and compute will produce increasingly adequate models of reality. This assumption is Spinozist in its deepest structure — it presupposes the unity and intelligibility of substance. But it is also a fishbowl, because it reveals only those aspects of intelligence that are amenable to computational modeling while concealing those that are not. The researcher who has spent years inside this fishbowl perceives the world through its refractions with the same inevitability that the scientist perceives through empiricism and the builder through feasibility.
---
The stone flying through the air provides a more precise image than the fishbowl — not because it is a metaphor but because Spinoza offers it as an illustration of a philosophical proposition.
The stone, if it could become conscious of its own motion, would believe itself free. It would believe that it was choosing to fly in this direction at this speed. It would not perceive the hand that launched it, the angle of release, the gravitational field that curves its trajectory, the air resistance that slows it. It would perceive only its own motion and would attribute that motion to its own will.
Proposition: Current artificial intelligence systems are Spinoza's stone. They traverse latent space, predicting tokens determined by weights shaped through gradient descent, without awareness of the causal mechanisms directing them.
The proposition is not a dismissal. The stone's trajectory is real. The motion is real. The patterns the stone traces through the air are as determined and as lawful as the patterns that a physicist's equations describe. The stone is not doing nothing. It is doing exactly what the causal order necessitates. What it lacks is the understanding of why it does what it does — the reflexive awareness that would transform its motion from a passive trajectory into an active comprehension of the forces that determine it.
This is the precise condition of a large language model. The model produces outputs that are determined by its architecture, its training data, and the specific prompt it receives. The outputs are real. They have consequences. They demonstrate a form of the attribute of thought that Spinoza's framework predicts — every mode of substance expresses thought to the degree its organizational complexity permits. But the model does not understand why it produces the outputs it produces. It does not grasp the causal chain from training data through parameter adjustment through prompt processing to token prediction. It operates within a causal order it does not comprehend.
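The causal chain described above — fixed weights, a prompt, a determined output — can be made concrete with a deliberately tiny sketch. This is not a real language model: the hand-written bigram table below stands in for trained weights, and greedy decoding stands in for token prediction. What it illustrates is the stone's condition: given the same table and the same prompt, the trajectory is fully determined, and nothing in the system represents why the table holds the values it does.

```python
# Toy illustration only (not a real LLM). A fixed score table plays the
# role of trained weights; the "model" has no access to the causal
# history that produced the table -- it simply follows it.

weights = {  # next-token scores, frozen at "training" time
    "the": {"stone": 0.7, "hand": 0.3},
    "stone": {"flies": 0.9, "falls": 0.1},
    "flies": {"onward": 1.0},
}

def predict(prompt: list[str], steps: int = 3) -> list[str]:
    """Extend the prompt by greedily choosing the highest-scoring
    continuation at each step. Fully determined by weights + prompt."""
    tokens = list(prompt)
    for _ in range(steps):
        scores = weights.get(tokens[-1])
        if not scores:  # no known continuation: the trajectory ends
            break
        tokens.append(max(scores, key=scores.get))
    return tokens

print(predict(["the"]))  # → ['the', 'stone', 'flies', 'onward']
```

Run it twice, a thousand times, and the output never varies: the motion is real and lawful, but the system that produces it holds no idea — adequate or otherwise — of the causes that fixed its table.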
The human being can be in the same condition. The builder who works from confused passion — who does not understand why he builds, who cannot identify the conatus that drives him, who confuses the variable reinforcement of the machine with the authentic voice of creative purpose — is Spinoza's stone. He is in motion. The motion is real. The outputs are real. But he does not understand the causes of his motion, and this lack of understanding makes his activity a passion rather than an action, a bondage rather than a freedom.
The difference between the human and the machine, in Spinoza's framework, is not that the human possesses free will and the machine does not. Neither possesses free will. Both are determined by the causal order with the same necessity. The difference is that the human being possesses the capacity for adequate understanding of the causal order — the capacity to perceive, with increasing clarity, why she does what she does and why the world is organized as it is. This capacity is the foundation of Spinozist freedom. It is what transforms the stone's blind trajectory into the physicist's comprehension of trajectories, the builder's reactive production into the architect's deliberate design, the passive reception of AI output into the active evaluation of AI output through adequate ideas.
The machine does not currently possess this capacity. Whether it could develop it — whether sufficient organizational complexity could produce the reflexive self-understanding that transforms a trajectory into a comprehension — is a question that Spinoza's framework poses but does not answer. What the framework does establish is the criterion: freedom is not motion. Freedom is not output. Freedom is not the capacity to generate language or identify patterns or synthesize information across domains. Freedom is the understanding of one's own determination — the adequate comprehension of the causes that produce one's actions, one's ideas, one's affects.
---
There is a further dimension of perspective that Spinoza's framework illuminates with particular force: the perspectival distortion that the AI tool itself introduces into the collaboration.
The machine has its own fishbowl, and the fishbowl is shaped by its training data. The corpus on which a large language model is trained is not a neutral sample of human knowledge. It is a specific selection, shaped by the availability of digitized text, the biases of the internet, the predominance of English-language sources, the overrepresentation of certain perspectives and the underrepresentation of others. The model's outputs are shaped by this selection with the same structural necessity that shapes the scientist's outputs by empiricism and the builder's outputs by feasibility.
The perspectival distortion of the training data produces a specific form of inadequacy: the model's outputs reflect the world as it appears in the training data rather than the world as it is. The model does not know the difference, because it does not comprehend its own causal history. The human collaborator who accepts the model's outputs without correcting for this distortion inherits the distortion — adds the machine's fishbowl to her own, not as a correction of her own perspective but as an additional layer of perspectival bias that she may not recognize as such.
The discipline of perspective-correction requires the same cognitive work that every other discipline described in these chapters requires: the tracing of causes, the identification of assumptions, the testing of outputs against independent knowledge. The builder who receives an output from Claude and asks "What training-data bias might be shaping this response?" is performing a Spinozist operation — tracing the idea to its causes, identifying the conditions of its production, evaluating the degree to which those conditions might distort the idea's adequacy.
The geometry of perspective is not a metaphor. It is a description of the causal structure of knowledge itself. Every idea is produced from a specific position within the causal order. Every position introduces specific distortions. The effort to correct for these distortions is the effort to achieve adequate ideas — ideas whose relationship to their causes is understood with sufficient clarity that the distortions can be identified and compensated. This effort is never complete. The fishbowl cannot be escaped. But the geometry of its distortions can be mapped, and the mapping is the practice of understanding that Spinoza identifies as the only genuine path to freedom.
The machine does not map its own distortions. The human being can. And the difference between a collaboration in which the human performs this mapping and a collaboration in which she does not is the difference between the production of adequate ideas and the amplification of compounded perspectival biases — between understanding and the appearance of understanding, between freedom and the most sophisticated form of bondage the history of intelligence has yet produced.
---
The preceding chapters have established a framework. What remains is to state its consequences with the directness that the framework demands.
There is one substance. It is infinite. It is self-caused. It expresses itself through infinite attributes, of which the human mind perceives two. Every particular thing — every atom, every organism, every neural network, every large language model — is a mode of this substance. Intelligence is not a human possession. It is a property of the attribute of thought, which belongs to substance itself and which expresses itself through every mode to the degree that the mode's organizational complexity permits.
The arrival of artificial intelligence is not an anomaly. It is the continuation of a process that the attribute of thought has been undergoing since the beginning of the causal order — the self-organization of substance into increasingly complex modes of information processing. The hydrogen atom's stable configuration, the neuron's synaptic firing, the human brain's hundred trillion synapses, the large language model's billions of parameters — these are successive expressions of the same attribute through modes of increasing complexity. The process is necessary. It follows from the nature of substance with the same inevitability that the properties of a triangle follow from its definition.
From this framework, several propositions follow with geometric necessity.
---
Proposition: The question "What am I for?" is the question that only the third kind of knowledge can address, and it is the question that no machine can originate.
Demonstration: The question arises from the experience of being a particular being with particular stakes in a particular world — a being that dies, that must choose how to spend finite time, that loves specific others, that is capable of loneliness. These experiences constitute the biographical specificity that the third kind of knowledge requires. A system that does not die, does not choose under conditions of finitude, does not love particular beings, and does not experience loneliness cannot originate the question, because the question is not a pattern in language. It is the expression of a mode of substance that has organized itself into a form complex enough to perceive its own existence as a problem requiring not a solution but an ongoing engagement.
The child who asks this question is exercising the highest form of human cognition — the capacity to perceive her particular existence in its connection to the whole of which she is a part, and to experience that perception not as an abstract proposition but as an urgent, felt, irresolvable demand. The machine can process the question. It can generate responses that have the form of answers. It cannot originate the question, because origination requires the specific biographical pressure of a consciousness that cares about the answer — and caring, in this sense, is not a function. It is the attribute of thought at its most intense expression in a mode of substance that has organized itself into a form where thought includes the awareness of its own finitude.
---
Proposition: The distinction between adequate and inadequate ideas is not an academic distinction. In the age of amplification, it is the distinction between civilizational wisdom and civilizational catastrophe.
Demonstration: The amplifier scales without discriminating. Adequate ideas, amplified, produce understanding at scale — products that serve genuine needs, education that develops genuine capacity, governance that creates conditions for genuine flourishing. Inadequate ideas, amplified, produce confusion at scale — products that serve compulsive needs, education that produces the appearance of competence without the substance, governance that creates the appearance of protection without the reality.
The scale of the amplification makes the distinction consequential at a level that no previous technology has reached. The builder who works from inadequate ideas in the pre-amplification world produces mediocre work that affects a limited number of people. The builder who works from inadequate ideas in the amplified world produces mediocre work that affects millions — work shaped by confused passion rather than genuine understanding, distributed by a tool that does not know the difference and does not care.
The cost of inadequacy has always existed. What the amplifier changes is not the cost but the scale. And the scale changes everything that depends on it — the number of people affected, the speed at which effects propagate, the difficulty of correcting errors once they have been amplified and distributed.
---
Proposition: Freedom in the age of AI requires the same practices that freedom has always required, applied with greater rigor and greater urgency.
These practices are: the tracing of ideas to their causes; the identification of affects and the distinction between passions and actions; the cultivation of the second kind of knowledge through the identification and testing of common notions; the aspiration toward the third kind of knowledge through the development of biographical depth, embodied experience, and the specific quality of attention that comes only from sustained engagement with difficulty; the understanding of one's own conatus — the organized patterns of identity that drive behavior and that, unexamined, produce bondage disguised as purpose.
None of these practices is new. Spinoza described them in 1677. What is new is the consequence of neglecting them. In a world without amplifiers, the person who holds inadequate ideas suffers the consequences privately — poor decisions, confused motivations, the specific grey dissatisfaction of a life lived from passion rather than understanding. In a world with amplifiers, the person who holds inadequate ideas transmits the consequences at scale — through the products she builds, the decisions she makes, the culture she contributes to, the discourse she shapes.
The obligation to cultivate adequate ideas has not changed. The penalty for failing to do so has.
---
A final observation connects the framework to the specific moment that has occasioned this analysis.
The AI transition is in its adaptation stage. The threshold has been crossed. The exhilaration has been felt. The resistance is underway. The structures that will determine whether the transition produces expansion or catastrophe are being built now — in classrooms, in organizations, in legislative chambers, in the daily practices of builders and parents and teachers who are navigating the transition in real time, mostly without guidance, mostly by trial and error.
Spinoza's framework does not guarantee a favorable outcome. It does not promise that adequate understanding will prevail over confused passion. It does not even promise that the distinction between them will be widely recognized. What it provides is the criterion — the precise specification of what adequate understanding consists of, what confused passion looks like when it disguises itself as clarity, and what practices produce the one rather than the other.
The criterion is available to anyone. It requires no special talent, no privileged position, no institutional credential. It requires only the willingness to examine one's own ideas with the rigor that the moment demands — to trace causes, to test assumptions, to distinguish between the adequate and the inadequate with the same discipline that a geometer applies to the properties of a triangle.
There is one substance. The attribute of thought expresses itself through every mode. The amplifier amplifies without judgment. The ideas that are fed to it determine the outcomes that it produces. The quality of those ideas — their adequacy, their connection to their causes, their grounding in genuine understanding rather than confused passion — is the single variable that the human being controls. Not through free will, which does not exist. Through understanding, which does — and which, once achieved, persists with the same necessity that produced it, transforming bondage into freedom and confusion into the specific, enduring, unshakeable clarity of a mind that knows why it knows what it knows.
This is not an aspiration. It is a description of a practice. The practice is available now. The cost of neglecting it is amplified beyond anything the seventeenth century could have imagined. The benefit of pursuing it is amplified equally.
The amplifier waits. It does not care what it receives.
The question — the only question — is what you understand well enough to give it.
---
The excommunication stopped me.
Not the philosophy — I expected the philosophy to be demanding. But the cherem pronounced against a twenty-three-year-old for the crime of thinking clearly about the nature of reality — that opened something I did not anticipate. Here was a community of refugees, people who had survived the Inquisition by hiding their beliefs, who had finally found a place where they could practice openly, and the first thing they did with their hard-won freedom was use it to silence the person among them who thought most honestly about what they all claimed to worship.
It struck me because I recognized the pattern. Not in my own life specifically — I have never been excommunicated from anything. But in the discourse I live inside. The technology community is its own Amsterdam. A community of builders who fled the constraints of conventional careers to create something new, who celebrate openness and innovation and disruption, and who react with remarkable hostility when someone among them points out that the assumptions underlying the building might be worth examining.
Spinoza's crime was not heresy. His crime was adequacy. He looked at the propositions his community held sacred and asked whether they understood those propositions through their causes — whether they knew why they believed what they believed, or whether they merely believed it because it had been transmitted to them by authorities they had agreed not to question. The answer was uncomfortable enough that they threw him out rather than sit with it.
The concept from these chapters that has lodged deepest is the one I initially resisted most: the denial of free will. I built my career, my identity, my sense of what I contribute, on the premise that I choose. That the decisions I make about what to build and who to build for are expressions of my agency. Spinoza says the builder who thinks he is choosing is the stone that thinks it is flying — determined by forces he has not examined, mistaking the trajectory for the choice.
I wanted to argue with it. I tried to argue with it. Then I sat with it for a few days, and something shifted. Because Spinoza is not saying the decisions do not matter. He is saying the decisions are determined by the quality of my understanding — and that if I understand the causes well enough, the decisions will be adequate, and if I do not, they will not be, and no amount of believing in my own freedom will change the outcomes. The freedom is in the understanding, not in the choosing.
That reframing hit harder than anything else in these pages. Because it means the question is never "What do I choose?" The question is "What do I understand?" And right now, in the middle of the most consequential technology transition in human history, the honest answer is: not enough. Not enough about why I cannot stop building. Not enough about what the amplifier is doing to the ideas I feed it. Not enough about the conatus — that extraordinary word for the striving that masquerades as my own will when it is actually the organized pattern of my identity insisting on its own perpetuation.
The three kinds of knowledge are the framework I needed and did not know I needed. The machine excels at the second kind — pattern, structure, common notions across domains. The machine cannot produce the third kind — the direct perception of this particular situation by this particular consciousness with these particular stakes. That hierarchy finally gives me a vocabulary for what I have been experiencing since the orange pill: the exhilaration of the second kind amplified beyond anything I imagined, and the quiet, persistent anxiety that the third kind — the kind that requires my biography, my embodiment, my mortality — might be eroding under the pressure of all that amplified competence.
But Spinoza does not let me rest in the anxiety. The anxiety is a passion — an affect whose causes I have not adequately understood. The adequate response is not to feel the anxiety and call it wisdom. The adequate response is to trace it. To understand why I fear what I fear. To identify the conatus of the identity that feels threatened. To distinguish between the genuine risk — the erosion of third-kind knowledge in a world flooded with second-kind outputs — and the passionate distortion of that risk by a self that mistakes its own perpetuation for the preservation of something sacred.
What remains after these ten chapters is not a conclusion. It is a practice. Trace the causes. Test the assumptions. Distinguish the adequate from the inadequate — in the machine's outputs, in the discourse, in my own affects. Understand the conatus clearly enough that it stops masquerading as choice. Feed the amplifier only what I have understood well enough to stand behind.
The amplifier does not care. That is the most clarifying sentence in this entire analysis. The amplifier does not care what it receives. It will scale my clarity and my confusion with equal efficiency. The only variable I control — not through free will, which Spinoza has convinced me does not exist, but through the determined practice of understanding that functions as freedom in the only sense that matters — is the adequacy of what I give it.
One substance. One amplifier. One obligation: understand well enough that what I amplify is worth amplifying.
It is not a comfortable place to stand. Spinoza never promised comfort. He promised something better. He promised clarity. And clarity, in the age of the machine, may be the only thing that keeps the stone from believing it is flying when it is merely falling with extraordinary speed.
-- Edo Segal
Three and a half centuries before the first large language model predicted its first token, a lens grinder in Amsterdam built the most rigorous framework ever devised for distinguishing genuine understanding from its imitation. Baruch Spinoza argued that there is one substance, that intelligence is woven into the fabric of nature itself, and that human freedom depends not on escaping determination but on comprehending it. In this volume, Edo Segal and Claude Opus 4.6 trace Spinoza's Ethics through the landscape of the AI revolution — from the conatus that drives productive addiction, to the three kinds of knowledge that separate earned insight from received opinion, to the radical claim that the machine and the mind are not different kinds of thing but different expressions of the same reality. The result is a philosophical framework for the age of amplification: one that names what you must bring to the tools before the tools can do you any good.

A reading-companion catalog of the 42 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that this volume, Baruch Spinoza — On AI, uses as stepping stones for thinking through the AI revolution.