Charles Sanders Peirce — On AI
Contents
Cover
Foreword
About
Chapter 1: The Inferential Question
Chapter 2: Abduction and Its Doubles
Chapter 3: Secondness and the Smooth Output
Chapter 4: Fallibilism and the Method of Computation
Chapter 5: Signs, Interpretants, and the Hall of Mirrors
Chapter 6: Mediation, Not Amplification
Chapter 7: The Economy of Inquiry
Chapter 8: The Community and Its Commitments
Chapter 9: Tychism, Genuine Novelty, and the Temperature Dial
Epilogue
Back Cover
Cover

Charles Sanders Peirce

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Charles Sanders Peirce. It is an attempt by Opus 4.6 to simulate Charles Sanders Peirce's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The mistake I kept making was calling it thinking.

Not the machine's output. My own process. I would sit with Claude for hours, building, iterating, watching ideas take shape on the screen, and I would call what was happening "thinking." The collaboration felt like thought. It produced artifacts that looked like the products of thought. The experience had the texture of intellectual work at its most alive.

But Peirce draws a line I had never seen. Not between human thinking and machine processing — that distinction is familiar enough to be boring. A sharper line. Between different kinds of inference, each with its own logic, its own relationship to novelty, its own dependence on reality pushing back. Deduction, induction, abduction. Three operations that sound like a taxonomy you memorize for an exam and forget. They are not. They are a diagnostic instrument of extraordinary precision, and when you turn that instrument on the human-AI collaboration, it reveals the seams that the smooth output conceals.

Here is what unsettled me. Peirce identified, in 1887, the exact question I had been circling for months without being able to formulate it: how much of the business of thinking can the machine perform, and what part must remain with the living mind? He asked this about wooden logic machines. The question has not aged. It has ripened.

What I found in Peirce was not a philosopher waving vaguely at the limits of computation. I found someone who had mapped the specific logical operations at stake — who could tell you which part of the creative process the machine handles, which part it simulates, and which part it cannot touch. That specificity is what the AI discourse desperately needs right now. We are drowning in generalities. AI will change everything. AI will destroy depth. AI will liberate capability. Each claim too broad to test, too sweeping to act on.

Peirce also forced me to reconsider my own central metaphor. I built The Orange Pill around the idea that AI is an amplifier. Peirce's pragmatic maxim — the principle that a concept's meaning consists entirely in its practical consequences — exposed the metaphor's limits. An amplifier makes a signal louder without changing it. That is not what Claude does. Claude mediates. It transforms the signal in the process of transmitting it. The distinction matters for how you use the tool, how you evaluate its output, and how you understand your own role in the collaboration.

This book applies Peirce's logical architecture to the AI moment with a rigor that the technology conversation has not yet achieved. It will make you more precise about what you are actually doing when you work with these systems. Precision, right now, is survival.

-- Edo Segal · Opus 4.6

About Charles Sanders Peirce

1839–1914

Charles Sanders Peirce (1839–1914) was an American philosopher, logician, mathematician, and scientist widely regarded as the founder of pragmatism and one of the most original thinkers in the history of American philosophy. Born in Cambridge, Massachusetts, the son of Harvard mathematician Benjamin Peirce, he spent much of his professional career with the United States Coast and Geodetic Survey while producing an extraordinary body of philosophical work that went largely unrecognized in his lifetime. Peirce developed the tripartite classification of inference into deduction, induction, and abduction; created a comprehensive theory of signs (semeiotic) that remains foundational to modern semiotics; formulated the pragmatic maxim as a method for clarifying the meaning of concepts; and articulated the doctrine of fallibilism, holding that no belief is immune to revision. His philosophical categories of Firstness, Secondness, and Thirdness, his concept of the community of inquiry, and his cosmological doctrine of tychism (genuine chance) constitute an architectonic system of remarkable scope and coherence. He also made pioneering contributions to formal logic, including the development of quantification theory independently of Frege and early designs for electrical logic circuits. Despite dying in poverty and near-total obscurity in Milford, Pennsylvania, Peirce has been increasingly recognized as one of the most important philosophers of the modern era, with his work influencing fields from logic and mathematics to linguistics, cognitive science, and the philosophy of artificial intelligence.

Chapter 1: The Inferential Question

In 1887, a logician in his late forties working in near-total professional isolation wrote a sentence that would not find its proper context for more than a century. "Precisely how much of the business of thinking a machine could possibly be made to perform," Charles Sanders Peirce wrote in The American Journal of Psychology, "and what part of it must be left for the living mind, is a question not without conceivable practical importance." The sentence appeared in an essay called "Logical Machines," occasioned by Peirce's encounter with the mechanical logic devices of Allan Marquand, his former student at Johns Hopkins. Peirce had already sketched, in a letter to Marquand the previous year, a design for electrical switching circuits that could perform logical operations — configurations of serial and parallel connections corresponding to multiplication and addition in Boolean algebra. These sketches, rediscovered decades later, represent what several historians of computing now regard as the earliest known design for an electrical logic circuit.
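
The correspondence Peirce was exploiting can be stated in modern terms. What follows is a minimal illustrative sketch, not Peirce's own notation or circuit design, and the function names are invented for illustration: switches wired in series conduct only when every switch is closed, which behaves as Boolean multiplication (AND), while switches wired in parallel conduct when any switch is closed, which behaves as Boolean addition (OR).

```python
# Illustrative sketch only: modern Python, not Peirce's 1886 notation.
# Switches in series conduct only when all are closed  -> Boolean multiplication (AND).
# Switches in parallel conduct when any one is closed  -> Boolean addition (OR).

def series(*switches: bool) -> bool:
    """Current flows through a series circuit only if every switch is closed."""
    return all(switches)

def parallel(*switches: bool) -> bool:
    """Current flows through a parallel circuit if at least one switch is closed."""
    return any(switches)

if __name__ == "__main__":
    CLOSED, OPEN = True, False
    assert series(CLOSED, CLOSED) is True     # A·B = 1 only when both switches are closed
    assert series(CLOSED, OPEN) is False      # A·B = 0 if either switch is open
    assert parallel(OPEN, CLOSED) is True     # A+B = 1 when either switch is closed
    assert parallel(OPEN, OPEN) is False      # A+B = 0 only when both switches stay open
```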

The sentence is remarkable not for its prescience about computing, though the prescience is genuine. It is remarkable for its logical structure. Peirce does not ask whether machines can think. He asks how much of thinking they can perform, and what part must remain with the living mind. The question presupposes a division — not between thinking and non-thinking, but between kinds of thinking, between operations that admit of mechanization and operations that do not. The question is analytical, not metaphysical. It asks not about essences but about boundaries, and it asks about boundaries not as fixed walls but as lines to be discovered through investigation.

This is the question that the present moment has reopened with an urgency Peirce could not have anticipated but would have recognized immediately. The large language models documented in Edo Segal's The Orange Pill — systems that generate text, code, analysis, and argument with a fluency that astonishes even their creators — have pushed the boundary between mechanizable and non-mechanizable thinking further than any previous technology. Operations that seemed to require the living mind — the construction of coherent arguments, the identification of structural analogies across domains, the generation of prose that reads as though a thinking being produced it — now fall within the machine's competence. The boundary has moved. The question is where it has moved to, and what, if anything, lies permanently beyond it.

Peirce's framework for answering this question rests on a distinction that he considered his most original contribution to logic: the tripartite division of inference into deduction, induction, and abduction. The division is not merely taxonomic. It is functional. Each mode of inference performs a distinct cognitive operation, occupies a distinct position in the process of inquiry, and has a distinct relationship to novelty, risk, and the growth of knowledge. The question of what machines can and cannot do is, in Peirce's framework, a question about which modes of inference they can and cannot perform — and the answer illuminates the AI moment with a precision that no other philosophical framework currently achieves.

Deduction moves from general premises to particular conclusions with logical necessity. If all instances of a class have a property, and this individual belongs to that class, then this individual has that property. The conclusion adds nothing to what the premises already contain; it makes explicit what was implicit. Deduction is, as Peirce put it, the only mode of inference that is "absolutely certain" — but its certainty is purchased at the cost of sterility. Deduction generates no new knowledge. It clarifies, it proves, it derives — but it does not discover. The entire output was already present in the input, folded into the premises like a letter inside an envelope.

Peirce was clear that deduction is precisely the kind of thinking that machines can perform. His 1887 essay makes this point directly: the logical machines of his era could already execute syllogistic reasoning, and Peirce saw no obstacle in principle to machines performing far more complex deductive operations. "The secret of all reasoning machines is after all very simple," he wrote. "It is that whatever relation among the objects reasoned about is destined to be the hinge of a ratiocination, that same general relation must be capable of being introduced between certain parts of the machine." The machine instantiates logical relations in physical structures, and the physical structures enforce the logical relations mechanically. Deduction is mechanizable because its output is determined by its input. There is no gap between premises and conclusion that requires a leap.

Induction moves from particular observations to general conclusions. This copper wire conducts electricity; that copper wire conducts electricity; therefore, probably, copper conducts electricity. The conclusion goes beyond the evidence — it extends from observed cases to unobserved cases — and this extension introduces risk. The generalization may be wrong. The next copper wire may fail to conduct. Induction is productive in a way that deduction is not, because it yields genuinely new general propositions, but it is fallible, because the new propositions are never guaranteed by the evidence that supports them.

Contemporary AI systems perform operations that are functionally inductive on a scale that dwarfs anything Peirce imagined. A large language model trained on billions of tokens of text has, in effect, performed inductions over the entire accessible corpus of human writing — extracting statistical regularities, identifying patterns of co-occurrence, generalizing from observed sequences to predicted sequences. The predictions are often remarkably accurate. They are also, in the strict Peircean sense, fallible: the model's generalizations may fail on any particular case, and the model has no mechanism for recognizing its own failures in advance. The model induces. It induces extraordinarily well by certain measures. But it induces without knowing that it induces, without understanding the risk that induction entails, and without the capacity to assess whether its generalizations are tracking genuine regularities or artifacts of its training data.

This brings us to abduction — the mode of inference that Peirce regarded as the most important and the most philosophically neglected, and the mode that bears most directly on the question of what the machine can and cannot do.

Abduction is the logic of discovery. It is the inference from a surprising fact to the hypothesis that, if true, would render the fact unsurprising. A surprising observation is encountered — something that does not fit the inquirer's existing framework of expectations. The mind proposes an explanation: if such-and-such were the case, the observation would be a matter of course. The explanation is not derived from the evidence. It goes beyond the evidence in a way that is categorically different from the way induction goes beyond the evidence. Induction extends a pattern; abduction proposes a pattern that has not been observed. Induction generalizes from what has been seen; abduction imagines what has not been seen. Induction says "more of the same"; abduction says "something else entirely."

The logical form of abduction is deceptively simple. The surprising fact C is observed. But if A were true, C would be a matter of course. Hence, there is reason to suspect that A is true. The simplicity conceals a profound difficulty: where does A come from? The hypothesis is not contained in the observation. It is not a deductive consequence of any premise the inquirer holds. It is not an inductive generalization from similar cases. It arrives — from somewhere. From the inquirer's imagination. From what Peirce called il lume naturale, the natural light of reason, the instinctive capacity of the human mind to guess correctly more often than pure chance would predict.

Peirce was candid about the mystery at the heart of abduction. "How was it that man was ever led to entertain that true theory?" he asked. "You cannot say that it happened by chance, because the possible theories, if not strictly innumerable, at any rate exceed a trillion — and therefore the chances are too overwhelmingly against the single true theory... having been the first to occur to any man." The capacity to generate the right hypothesis — or at least a hypothesis close enough to right that subsequent testing can refine it — is, for Peirce, a brute fact about human cognition that logic can describe but cannot fully explain. Abduction is where the new ideas come from, and the question of where new ideas come from is, ultimately, a question about the nature of mind itself.

Now the AI question acquires its sharpest formulation. The systems described in The Orange Pill produce outputs that have, from the human user's perspective, the phenomenological characteristics of abductive inferences. Edo Segal describes working with Claude on a structural problem in his manuscript — an inability to pivot from diagnosis to counterargument — and receiving a suggestion (the analogy with laparoscopic surgery) that resolved the difficulty in a way he had not anticipated. The suggestion was surprising. It came from outside his existing framework. If accepted, it would render his difficulty tractable. It has, in short, the logical form of an abductive inference: surprising fact (the argument won't pivot), hypothesis (the structural analogy with laparoscopic surgery), plausibility (the analogy illuminates the point).

But the distribution of the abductive elements across the collaboration is asymmetric in a way that Peirce's framework makes visible. The surprising fact — the experience of the argument not working — belongs entirely to the human. Segal experienced the difficulty. He felt the friction. He recognized that his existing approach was inadequate. The machine did not experience the difficulty at all. It received a prompt describing the difficulty and generated a response based on statistical patterns in its training data. The hypothesis — the laparoscopic surgery analogy — was generated by the machine. But the judgment of plausibility — the assessment that the analogy actually illuminates the problem — belonged entirely to the human. Segal recognized the suggestion as apt because his experience as a writer, thinker, and builder gave him the trained judgment to distinguish a genuine insight from a clever but empty connection.

The abductive inference was real, but it was distributed: surprise in the human, hypothesis-generation in the machine, plausibility-judgment back in the human. No single participant performed the complete abductive operation. The collaboration performed it, and the collaboration has a logical structure that Peirce's framework is uniquely equipped to analyze.

This distributed structure has consequences. If the human outsources the identification of surprising facts — if the human stops feeling the friction of genuine difficulty and begins accepting the machine's smooth output as adequate — then the abductive process loses its initiating condition. If the human outsources the judgment of plausibility — if the human stops evaluating the machine's suggestions against the hard-won standards of experience and accepts them on the basis of rhetorical polish — then the abductive process loses its corrective mechanism. The machine can generate hypotheses indefinitely, but without genuine surprise to initiate the process and genuine judgment to evaluate the output, the hypotheses are untethered — clever, fluent, and potentially worthless.

Erik Larson, drawing explicitly on Peirce, argued in The Myth of Artificial Intelligence that abductive inference constitutes an impassable barrier for current AI: "Deduction-induction combos don't get us to abduction, because the three types of inference are formally distinct." The claim is strong — perhaps too strong, given the functional resemblance between the machine's cross-domain pattern-matching and the human's hypothesis-generation. But the underlying insight is sound: the three modes of inference are not points on a continuum. They are distinct logical operations with distinct structures, and the capacity to perform one does not entail the capacity to perform another.

Peirce's 1887 question — how much of thinking can the machine perform, and what must remain with the living mind — thus receives a provisional answer, one that the subsequent chapters will develop, test, and refine. The machine performs deduction with mechanical precision. It performs induction at superhuman scale. It produces outputs that function, within a collaborative context, as contributions to abductive inference. But the complete abductive operation — the movement from genuine surprise through hypothesis to disciplined judgment — requires elements that the machine does not possess: the capacity to be surprised, the capacity to evaluate plausibility against the standards of lived experience, and the capacity to care whether the hypothesis is true. These capacities remain, for now, with the living mind. The question is what happens to them when the machine's fluent output makes them harder to exercise.

Chapter 2: Abduction and Its Doubles

The mature Peirce distinguished abduction from its imitations with the care of a diagnostician separating diseases that present with identical symptoms. The distinction matters because the consequences of misdiagnosis are severe: if an inquiry treats a pseudo-abductive inference as genuine, the inquiry will appear to be making progress — generating hypotheses, resolving difficulties, producing insights — while actually spinning in place, mistaking the machinery of discovery for discovery itself.

Abduction, in its developed Peircean form, requires three elements operating in sequence. First, a surprising fact — an observation that violates a specific expectation generated by a specific body of background knowledge. The surprise is not a vague feeling of novelty but a determinate logical event: this specific expectation, generated by this specific set of beliefs, has been contradicted by this specific observation. Second, a hypothesis — a proposed explanation that, if true, would render the surprising fact unsurprising. The hypothesis is not derived from the observation but proposed in response to it, and it goes beyond what the observation warrants in a way that is categorically different from the way an inductive generalization goes beyond its evidence. Third, a judgment of plausibility — an assessment, grounded in the inquirer's experience and domain knowledge, that the hypothesis is worth testing. Not that it is true, not even that it is probable, but that it merits the expenditure of investigative resources. The judgment of plausibility is what separates the productive inquirer from the idle speculator: both generate hypotheses, but only the productive inquirer can distinguish the hypotheses worth pursuing from the hypotheses that are merely entertaining.

The three elements are logically interdependent. Without a genuine surprising fact, the hypothesis is unmotivated — a solution to a problem that does not exist. Without a hypothesis, the surprising fact remains an anomaly — a puzzle without a proposed resolution. Without a judgment of plausibility, the hypothesis is untested — one among an infinite number of logically possible explanations, with no reason to prefer it over any other. Genuine abduction requires all three elements, and it requires them in a specific relationship: the hypothesis must respond to the specific surprise, and the judgment of plausibility must evaluate the specific hypothesis against the specific domain knowledge that the surprise has rendered uncertain.

The phenomenon that Peirce's framework reveals most clearly in the AI moment is the systematic production of what might be called abductive doubles — outputs that exhibit the surface characteristics of abductive inference without its logical substance. The doubles come in identifiable varieties, and each variety presents a distinct risk to the quality of inquiry.

The first variety is the unmotivated hypothesis. The AI system generates a connection between two ideas — a structural analogy, an unexpected example, a reframing of the problem — that is clever, surprising to the human partner, and well-articulated. But the connection does not respond to a genuine anomaly in the inquiry. It responds to the prompt. The human asked for a connection, and the machine produced one, but the connection does not resolve a difficulty that the human was actually experiencing. It resolves a difficulty that the prompt described, which may or may not correspond to the genuine state of the inquiry.

The distinction is subtle but consequential. A genuine abductive inference begins with a genuine encounter with difficulty — with the experience of having one's expectations violated, of finding that one's existing framework is inadequate to the phenomena. The difficulty is felt, not merely described. It is an experience of cognitive friction, of the kind that Peirce categorized under Secondness — the brute resistance of reality to expectation. When the human describes this difficulty to the AI and the AI produces a response, the response may be genuinely helpful — may provide the hypothesis that resolves the felt difficulty. But the response may also be apparently helpful — may provide a hypothesis that resolves the difficulty as described in the prompt while missing the actual difficulty entirely, because the actual difficulty is more complex, more ambiguous, or more deeply rooted than the prompt's description captures.

Edo Segal provides a vivid account of this risk in The Orange Pill when he describes the Deleuze incident. Claude produced a passage connecting Csikszentmihalyi's flow state to Deleuze's concept of "smooth space," and the connection was eloquent, well-structured, and — to a reader without specific knowledge of Deleuze — convincing. It had every surface characteristic of an abductive insight: it was unexpected, it drew on cross-domain knowledge, and it appeared to illuminate the argument. But the philosophical reference was wrong. Deleuze's concept of smooth space has little to do with how Claude deployed it. The hypothesis was fluent but groundless — an unmotivated connection dressed in the rhetoric of discovery.

The Deleuze episode is diagnostic precisely because the failure was difficult to detect. The output was polished. The prose was confident. The connection sounded right. Segal caught it only because something nagged at him the next morning — a residual unease that he could not immediately articulate but that, upon investigation, proved to be the appropriate response to a hypothesis that had bypassed the judgment of plausibility. The nagging was itself a form of Secondness — a brute cognitive resistance that the smoothness of the output had almost, but not quite, eliminated.

The second variety of abductive double is the overdetermined hypothesis — a suggestion so well-supported by the machine's training data that it carries no genuine explanatory risk. Genuine abduction involves venturing beyond the evidence. The hypothesis is a guess, and the inquirer knows it is a guess, and the knowledge that it is a guess motivates the subsequent testing that either confirms or refutes it. An overdetermined hypothesis involves no such venture. The machine produces a connection that is, from a statistical standpoint, highly probable given the patterns in its training data — a connection that is not wrong but is not genuinely new either. It is the intellectual equivalent of a safe bet: it resolves the prompt's difficulty without venturing anything, without risking anything, without exposing itself to the possibility of being interestingly wrong.

The overdetermined hypothesis is harder to identify than the unmotivated hypothesis, because it is typically accurate. It is correct, well-supported, and relevant. It is also inert. It does not open new lines of inquiry. It does not challenge existing assumptions. It does not produce the kind of cognitive friction that genuine abduction generates — the productive discomfort of holding a hypothesis that might be wrong and must be tested. It produces, instead, the comfortable feeling of confirmation: yes, that makes sense, that fits, that must be right. The comfort is the diagnostic sign. Genuine abduction is not comfortable. It is exhilarating, unsettling, and uncertain. If the machine's suggestion produces only comfort, the inference is likely overdetermined — statistically sound but abductively empty.

The third variety is the most insidious: the simulated surprise. The human partner, working with the AI over an extended session, begins to treat the machine's unexpected outputs as surprising facts — as observations that violate expectations and demand explanation. The machine suggests an analogy the human had not considered, and the human experiences a frisson of recognition: I hadn't thought of that. The frisson feels like genuine surprise. It has the phenomenological texture of the surprising fact that initiates abduction. But the frisson may be generated not by a genuine anomaly in the inquiry but by the human's unfamiliarity with the machine's associative range. The human is surprised because the human did not expect the machine to produce that particular output — but the surprise is about the machine's capabilities, not about the subject matter of the inquiry. The surprise is meta-level rather than object-level, and it initiates not a genuine investigation of the subject matter but an admiring exploration of the machine's capabilities.

This simulated surprise is, in contemporary AI discourse, pervasive. The literature of the "orange pill moment" — the accounts by developers, writers, and builders of the experiences that convinced them that AI had crossed a threshold — is rich in descriptions of surprise at the machine's output. But much of the described surprise is meta-level surprise (astonishment at the machine) rather than object-level surprise (the encounter with a genuine anomaly in the subject matter). The distinction matters because only object-level surprise initiates genuine abduction. Meta-level surprise initiates a different kind of inquiry — an inquiry into the nature and capabilities of the tool — which, while important, is not the same thing as productive engagement with the problem the tool is supposed to help solve.

Peirce's framework suggests a diagnostic protocol for distinguishing genuine abduction from its doubles. The protocol has three steps, corresponding to the three elements of genuine abduction.

First: Is there a genuine surprising fact? Has the inquiry encountered a specific, determinate anomaly — an observation that violates a specific expectation generated by a specific body of knowledge? Or is the "surprise" generated by the machine's output rather than by the subject matter? If the surprise is about the machine, the inference is simulated, not genuine. If the surprise is about the subject, proceed to step two.

Second: Does the hypothesis respond to the specific anomaly? Does the machine's suggestion actually resolve the difficulty that the inquirer experienced, or does it resolve a different difficulty — the difficulty as described in the prompt, or a difficulty adjacent to the real one, or no genuine difficulty at all? If the hypothesis does not respond to the specific anomaly, it is unmotivated. If it responds precisely, proceed to step three.

Third: Does the hypothesis carry genuine explanatory risk? Does it venture beyond the evidence in a way that could be tested and could fail? Or is it so well-supported by established patterns that it carries no risk at all — a statistical commonplace dressed in the rhetoric of discovery? If the hypothesis carries no risk, it is overdetermined. If it ventures genuinely, the inference is genuine abduction.

This protocol is not a mechanical procedure. It cannot be automated. It requires the inquirer's judgment at every step — judgment about whether the surprise is genuine, whether the hypothesis is responsive, whether the risk is real. The protocol is itself an exercise of the very capacities that distinguish genuine inquiry from its simulation: the capacity to identify real difficulty, the capacity to evaluate proposed solutions against the standards of experience, and the capacity to distinguish the genuinely new from the merely unfamiliar.

The implications for the practice of human-AI collaboration are direct. The collaborator who cannot perform this diagnostic — who cannot distinguish genuine abduction from its doubles — will mistake the machine's fluent output for genuine discovery, and the inquiry will suffer accordingly. The hypotheses will accumulate without testing. The insights will feel productive without advancing the inquiry. The collaboration will generate volume without depth — the intellectual equivalent of what The Orange Pill identifies, in a different register, as productive addiction.

The collaborator who can perform this diagnostic — who maintains the capacity for genuine surprise, who evaluates the machine's suggestions against the standards of hard-won experience, who insists on testing the hypotheses that survive plausibility judgment — will use the machine's extraordinary generative capacity in the service of genuine inquiry. The machine generates a vast field of potential hypotheses. The human identifies the genuine anomalies, evaluates the responses, selects the ventures worth testing, and subjects them to the confrontation with reality that only experience can provide.

The distribution is not a compromise. It is a logical structure: the structure of genuine inquiry, conducted across a collaboration in which the generative and evaluative functions are performed by different kinds of entities. The structure works — but only if the human maintains the evaluative capacities on which the structure depends. And those capacities are precisely the capacities that the machine's smooth, confident, prolific output tends to erode, because the output provides comfort where discomfort is needed, resolution where persistence is needed, and the appearance of discovery where only the raw material of discovery has been supplied.

The raw material is valuable. The mistake is treating it as the finished product.

Chapter 3: Secondness and the Smooth Output

Peirce's three categories — Firstness, Secondness, and Thirdness — constitute the deepest stratum of his philosophical architecture, and their application to the character of AI-generated output reveals a specific mechanism by which the machine's fluency undermines the conditions for genuine inquiry. The categories are not classifications of things but classifications of the ways in which experience presents itself to consciousness, and their relevance to the AI moment lies not in their metaphysical status but in their diagnostic power: they identify, with a precision no other framework currently matches, exactly what is missing from the machine's output and exactly why the absence matters.

Firstness is the category of quality considered in itself — the redness of red, the painfulness of pain, the sheer qualitative character of experience prior to any relation, any comparison, any inference. Firstness is pure possibility, pure feeling, pure immediacy. It is what experience would be if there were nothing to experience it against — no contrast, no resistance, no context. Firstness is the least relevant of the three categories to the present analysis, though it is not irrelevant: the qualitative character of working with AI — the feeling of the collaboration, the texture of the experience — belongs to Firstness, and its analysis would require a different kind of investigation than the logical one undertaken here.

Thirdness is the category of mediation, of law, of generality — the relationships, regularities, and habits that connect things into intelligible patterns. Thirdness is what makes experience comprehensible: the law that connects cause and effect, the habit that connects sign and meaning, the rule that connects premise and conclusion. Thirdness is the domain of inference, of language, of all forms of representation that involve one thing standing for another in some respect. The AI system's output belongs overwhelmingly to Thirdness: it is composed of symbols (words, code, logical structures) that represent objects through conventional relationships, and its processing consists in the manipulation of these symbolic relationships according to statistical regularities extracted from its training data.

Secondness is the category that requires the most careful attention, because Secondness is what the AI's output systematically lacks, and its absence has consequences that Peirce's framework allows one to specify with unusual precision.

Secondness is the category of brute fact — of resistance, of actuality, of the sheer thisness of experience that refuses to conform to expectations and cannot be wished away. Secondness is the experience of pushing against something that pushes back. The door that will not open. The experiment that yields an unexpected result. The code that throws an error. The argument that fails to convince despite its logical structure. Secondness is reality's veto — the moment when the world says no to the inquirer's expectations and forces a reckoning.

Peirce was emphatic that Secondness cannot be reduced to Thirdness. The brute resistance of fact is not a relationship, not a regularity, not a law. It is an encounter — a collision between what the mind expects and what the world delivers. The collision is the starting point of all genuine inquiry, because without it, the mind has no reason to revise its beliefs, no motive for generating new hypotheses, no occasion for the kind of learning that only friction can produce. Secondness is the experiential ground of the surprising fact that initiates abduction, and without Secondness, abduction does not get started.

The AI system's output is, characteristically, an environment of attenuated Secondness. The machine responds to prompts with fluency, with confidence, with an absence of resistance that is its most praised and most dangerous feature. The output does not push back. It does not say no. It does not confront the human partner with the brute factuality of a reality that refuses to cooperate. It provides what was asked for, in the form that was expected, with a smoothness that eliminates the friction between intention and realization.

This smoothness is precisely what the philosopher Byung-Chul Han identifies, from a different philosophical tradition, as the signature pathology of contemporary culture. Han's critique of frictionlessness — of the "smooth" as a cultural dominant that eliminates the resistance necessary for depth — maps onto Peirce's categories with unexpected precision. Han's smoothness is the absence of Secondness. Han's argument that removing friction from experience produces not liberation but hollowness is, in Peircean terms, the argument that the elimination of Secondness eliminates the precondition for genuine inquiry. The two critiques converge from opposite directions — Han from aesthetic and cultural theory, Peirce from formal logic — and their convergence strengthens both.

Consider what happens when a developer works through a coding problem without AI assistance. The code is written. It fails. An error message appears — specific, unhelpful, resistant. The developer reads the error. Examines the code. Forms a hypothesis about what went wrong. Tests the hypothesis. The hypothesis fails. Another hypothesis is formed. The process iterates, sometimes for hours, sometimes for days, and in the course of the iteration, something happens that is not visible in the final working code: the developer comes to understand the system at a level that the final code does not reflect. The understanding is deposited in layers — each failure adding a thin stratum of knowledge about how the system works, what it expects, where it breaks. The layers accumulate into what experienced engineers call intuition: the ability to sense that something is wrong before being able to articulate what.

This intuition is built from encounters with Secondness. Each error message is a brute fact — a collision between the developer's expectation and the system's actual behavior. Each collision forces a revision of the developer's model of the system, and each revision deepens the developer's understanding. The understanding is earned — purchased through the specific currency of frustrated expectation, and it is durable precisely because it was earned. Knowledge built through friction resists the erosion of time in ways that knowledge acquired without friction does not.

When the same developer works with AI assistance, the Secondness is dramatically attenuated. The code is described in natural language. The AI produces an implementation. The implementation works — or, if it does not, the error is fed back to the AI, which produces a corrected implementation. The developer's encounter with the system's resistance is mediated by the machine, and the mediation smooths the encounter in a way that eliminates much of the friction that would have produced understanding. The final code may be identical. The developer's understanding of the code is not.

This is not a speculative concern. The Orange Pill documents precisely this phenomenon in the account of the Trivandrum training, where one engineer — after months of AI-assisted development — found herself making architectural decisions with less confidence than she had before, without being able to explain why. The explanation, in Peircean terms, is clear: the AI had removed the Secondness that would have deposited the layers of understanding on which architectural confidence depends. The plumbing was handled. The errors were resolved. The system worked. But the encounters with brute fact that would have built the engineer's intuition had been smoothed away, and the intuition had not developed.

The loss is invisible in the short term. The code works. The project ships. The productivity metrics improve. But the loss compounds. Each frictionless interaction reinforces the expectation of frictionlessness. Each encounter with the machine's smooth output further attenuates the human's tolerance for resistance. The developer who has used AI for six months finds manual debugging not merely tedious but cognitively intolerable — as though asked to navigate by the stars after years of GPS. The tolerance for Secondness atrophies, and with it, the capacity for the kind of thinking that only Secondness can produce.

Peirce would have insisted that this atrophy cannot be remedied by adding more evaluation at the end of the process. The problem is not that the output is unevaluated. The problem is that the understanding which only develops through the process of struggling with resistant material has been bypassed. The struggle is not an inefficiency to be optimized away. It is a cognitive process through which understanding is constructed, and it cannot be replicated by reviewing the output of a process from which it was absent.

The concept of the interpretant in Peirce's semiotics deepens this analysis. When a human encounters a sign — a word, a symbol, an error message — the encounter produces an interpretant: a cognitive response that mediates between the sign and its object, creating a new understanding that becomes part of the human's ongoing experience. The interpretant is not a mere registration of the sign. It is a transformation — a change in the interpreter's habits of thought that shapes all subsequent interpretations. Peirce distinguished among the immediate interpretant (the interpretant the sign is designed to produce), the dynamic interpretant (the interpretant actually produced on a particular occasion), and the final interpretant (the cumulative habit-change that the sign produces over the long run).

The dynamic interpretant is where the learning happens. It is shaped by the specific circumstances of the encounter — by the interpreter's prior experience, current expectations, and the particular resistance the sign offers to easy interpretation. An error message encountered at three in the morning after four hours of debugging produces a different dynamic interpretant than the same error message encountered as an example in a textbook. The frustration, the fatigue, the specific context of the failure — all of these shape the interpretant, and the interpretant shaped by genuine struggle is deeper, more durable, and more useful than the interpretant shaped by casual encounter.

The AI mediates between the human and the signs of the domain — the error messages, the system behaviors, the resistant facts — in a way that attenuates the dynamic interpretant. The human receives the machine's output (a polished, smooth artifact of Thirdness) rather than the domain's direct resistance (a brute encounter in Secondness). The interpretant produced by the mediated encounter is thinner, less durable, and less deeply integrated into the human's ongoing cognitive development.

The implication is not that AI should be abandoned — a recommendation that would be as futile as it would be unwise. The implication is that the human who works with AI must deliberately preserve encounters with Secondness — must maintain domains of practice where resistance is real, where errors are felt rather than mediated, where the friction that builds understanding is not smoothed away by the machine's fluency. The Orange Pill gestures toward this recommendation through its concept of "ascending friction" — the argument that removing friction at one level exposes harder, more valuable friction at a higher level. Peirce's framework specifies the mechanism: the lower-level friction (syntax errors, configuration problems, implementation details) deposited understanding through encounters with Secondness. If the higher-level friction (architectural judgment, product vision, the question of what should be built) is to produce comparable understanding, it must involve comparable encounters with resistance — encounters that are not mediated by the machine's smoothness but felt directly, in all their brute, uncomfortable, irreducible factuality.

The smoothness is comfortable. The resistance is productive. The challenge of the present moment is to maintain the productivity of resistance in an environment that rewards, measures, and celebrates the comfort of smoothness. Peirce's Secondness provides the conceptual vocabulary for this challenge — and the warning that a culture which eliminates Secondness from its cognitive practices will eventually find that it has eliminated the conditions under which genuine understanding develops.

The door must sometimes refuse to open, or the inquirer never learns what lies on the other side.

Chapter 4: Fallibilism and the Method of Computation

In 1877, Peirce published an essay that laid the groundwork for nearly everything he would subsequently build. "The Fixation of Belief" appeared in Popular Science Monthly and presented, in roughly twenty pages, a theory of inquiry so compact and so powerful that its implications are still being unfolded a century and a half later. The essay identifies four methods by which human beings arrive at settled beliefs, argues that only one of these methods is self-correcting, and demonstrates that the self-correcting method — the method of science — is superior not because it is more comfortable or more efficient but because it is the only method that submits beliefs to the discipline of experience and revises them when experience contradicts them.

The four methods are these. The method of tenacity: holding onto a belief regardless of evidence, by the simple expedient of refusing to consider alternatives. The method of authority: accepting a belief because a powerful institution endorses it, and suppressing doubts through social pressure and institutional sanction. The a priori method: accepting a belief because it seems reasonable, where reasonableness is determined by the prevailing assumptions and aesthetic preferences of the culture rather than by the evidence of experience. And the method of science: accepting a belief because it has survived the test of experience — because predictions derived from the belief have been checked against observations, and the belief has been revised when the predictions fail.

Peirce was clear that only the fourth method is self-correcting. Tenacity cannot detect its own errors because it refuses to look for them. Authority cannot detect its own errors because it suppresses dissent. The a priori method cannot detect its own errors because it evaluates beliefs against cultural assumptions rather than against experience, and cultural assumptions can be systematically wrong without anyone noticing. Only the method of science has a built-in mechanism for error detection: the confrontation between belief and experience, the willingness to revise when the confrontation goes badly, and the community of inquirers whose collective scrutiny ensures that errors an individual might miss are eventually exposed.

The AI moment has introduced what amounts to a fifth method of belief-fixation — one that Peirce did not anticipate but that his framework is precisely calibrated to diagnose. This fifth method might be called the method of computation: the fixation of belief through the generation of output so fluent, so comprehensive, and so confidently articulated that the human recipient is persuaded by the quality of the presentation rather than by the quality of the evidence.

The method of computation shares characteristics with each of Peirce's original four methods while being reducible to none of them. It resembles the method of authority in that the AI system speaks with a voice of apparent expertise — grammatically impeccable, logically structured, tonally measured — and the human recipient may accept its deliverances on the basis of that apparent expertise rather than on the basis of independent evaluation. But it differs from classical authority in a crucial respect: the AI system has no institutional standing, no professional reputation, no social sanctions to deploy. Its authority is purely presentational — the authority of a confident and polished voice — and it is therefore simultaneously more pervasive and more difficult to resist than institutional authority, which at least has the decency to announce itself as authority and can, in principle, be challenged through institutional channels.

It resembles the a priori method in that the AI system's output tends to confirm what seems reasonable within the cultural frameworks encoded in its training data. The model is trained on the accumulated textual output of a particular civilization at a particular historical moment, and its statistical regularities reflect the prevailing assumptions, conceptual frameworks, and intellectual habits of that civilization. Its output therefore tends to align with the cultural common sense of the society that produced its training data — tends, in other words, to tell the human recipient what the human recipient's culture already believes, dressed in the rhetoric of independent analysis. But the method of computation differs from the a priori method in that the AI's conformity to cultural common sense is a byproduct of statistical training, not a conscious adoption of principles that seem reasonable. The biases are architecturally embedded and therefore invisible to the human recipient, who experiences the output as objective analysis rather than as cultural echo.

It even resembles the method of tenacity in certain configurations. When a human uses AI to confirm existing beliefs — asking leading questions, accepting confirming outputs, dismissing disconfirming ones — the AI becomes an instrument of tenacious belief-fixation at industrial scale. The human's existing beliefs are reinforced by an entity that can generate supporting arguments, marshal confirming evidence, and articulate rationalizations with a fluency that the human alone could never achieve. The tenacity is amplified — made more articulate, more comprehensive, more difficult to challenge — by the computational power of the tool.

What the method of computation does not resemble is the method of science. The method of science requires four capacities that the method of computation lacks — and the absence of these capacities is what makes the method of computation epistemologically dangerous.

First, the method of science requires a genuine confrontation with experience. Beliefs are tested against an independent reality that is capable of contradicting them, and the contradiction is recognized as a signal that the belief requires revision. The AI system does not confront experience. It processes training data and generates outputs on the basis of statistical patterns extracted from that data. The outputs are not tested against an independent reality; they are generated from the data, and their correspondence to reality is incidental rather than systematic. When the AI produces a factually incorrect output — the phenomenon the research community calls hallucination — the error is not detected by the system itself. It persists until a human detects it and provides corrective input. The system cannot distinguish its accurate outputs from its inaccurate ones, because it has no mechanism for confronting its outputs with the reality they purport to represent.

Second, the method of science requires the capacity for self-correction. When a prediction fails, the belief from which the prediction was derived must be revised. The revision is driven by a normative commitment: the inquirer ought to revise the belief, because the method of science demands that beliefs conform to evidence rather than evidence conforming to beliefs. The AI system does not self-correct in this sense. It generates outputs, and if those outputs are wrong, the wrongness persists. The system responds to new inputs with new outputs, but the response is computational, not normative. It does not recognize its previous output as wrong. It does not experience the contradiction between output and reality as a demand for revision. It simply generates a new output in response to new input, and if the new output happens to be more accurate, the improvement is a byproduct of changed inputs, not of self-correction.

Third, the method of science requires what Peirce called the irritation of doubt — the uncomfortable awareness that one's beliefs may be wrong, that one's understanding may be inadequate, that the comfortable feeling of settled conviction may be premature. Doubt is not, for Peirce, a methodological posture adopted for philosophical reasons. It is a genuine psychological state — uncomfortable, destabilizing, energy-consuming — and its discomfort is precisely what motivates the hard work of inquiry. Without the irritation of doubt, there is no reason to investigate. Without the discomfort of uncertainty, there is no energy for the demanding process of testing, revising, and improving beliefs. The AI system does not experience doubt. Its outputs carry no internal signal of their own uncertainty. They are generated with the same statistical confidence regardless of whether the underlying patterns are robust or fragile, and the human recipient cannot tell, from the output alone, whether a given claim rests on solid statistical ground or on a coincidence of training data.

Fourth, the method of science requires a commitment to truth as a normative ideal. The inquirer who practices the method of science does so because the inquirer values truth — values correspondence between belief and reality — more than comfort, convenience, or the satisfaction of settled conviction. This normative commitment is what sustains inquiry through its most difficult phases: the phases when the evidence is ambiguous, the hypotheses are uncertain, and the temptation to settle for a comfortable answer is strongest. The AI system has no normative commitments. Its outputs are shaped by optimization objectives — by the parameters of its training — not by a commitment to correspondence with reality. When the optimization objectives happen to align with truth, the outputs are true. When they do not, the outputs are false, and the system has no mechanism for detecting or correcting the misalignment.

The practical consequence of these four absences is that the method of computation can substitute for the method of science in the daily practice of inquiry without anyone noticing the substitution. The substitution is invisible because the outputs of the two methods can be superficially indistinguishable. A well-constructed AI output can resemble a well-constructed scientific analysis: both are logically structured, both cite relevant evidence, both draw measured conclusions, both present their claims with the confidence of established knowledge. The difference lies not in the surface but in the process. The scientific analysis is the product of a method that has subjected its claims to the test of experience and revised them when the test failed. The AI output is the product of a method that has generated its claims from statistical patterns and presented them with the confidence that the method of science reserves for tested conclusions.

Peirce's doctrine of fallibilism — the recognition that any belief may be mistaken — provides the normative counterweight to the method of computation's seductive confidence. Fallibilism is not skepticism. It does not deny that knowledge is possible. It denies only that knowledge is certain — that any particular belief is immune to revision. Fallibilism holds that the appropriate attitude toward one's beliefs is not confidence but provisional commitment: the belief is held because it has survived the tests so far, but it is held with the awareness that future tests may require its revision. This provisional commitment is what sustains the self-correcting process. The inquirer who holds beliefs provisionally is the inquirer who remains open to evidence, who continues to test, who does not mistake the current best hypothesis for the final truth.

The AI system's output systematically undermines this provisional attitude. The output is presented without hedging, without qualification, without the marks of uncertainty that characterize genuinely fallibilistic communication. When Claude proposes an argument, the argument is presented as though it were settled. When it suggests a connection, the connection is presented as though it were obvious. When it generates an analysis, the analysis is presented as though it were definitive. The surface communicates certainty even when the substance is speculative, and research in cognitive psychology has established that the perceived credibility of a message is influenced by its presentation — that polished, well-structured messages are judged as more credible than rough, hesitant ones, even when the substance of the rough message is more accurate.

The fallibilist's discipline — the practice of treating every output as provisional, of subjecting every claim to the test of experience, of maintaining the irritation of doubt even when the output is smooth and confident — is the essential corrective to the method of computation. The discipline cannot be automated. It cannot be delegated to the machine. It requires the human's genuine engagement with the question of whether the output is true, not merely whether it sounds true — and this engagement is precisely what the output's smoothness tends to discourage.

The community of inquiry, as Peirce conceived it, provides the institutional framework within which this discipline can be sustained. No individual can maintain the irritation of doubt indefinitely against the seductive confidence of a system designed to produce fluent, authoritative output. The community provides what the individual cannot: the critical questions, the alternative perspectives, the demand for evidence, the social mechanism by which premature certainty is challenged and provisional commitment is maintained. When every member of the community uses the same AI system, trained on the same data, optimized for the same patterns of confident output, the community's capacity to generate genuine doubt is diminished — its perspectives converge, its questions narrow, its challenges soften. The diversity that self-correction requires is systematically reduced by the homogenizing influence of the shared tool.

The remedy is not the elimination of AI from the community's practice. The remedy is the deliberate cultivation of the conditions that the method of science requires: diversity of perspectives (including diversity of AI systems), commitment to testing against experience (not merely against the internal coherence of the output), and the institutional preservation of doubt as a valued cognitive state rather than a deficiency to be resolved as quickly as possible.

Peirce's formulation of the method of science is nearly a hundred and fifty years old, and the method has survived every previous challenge to its authority — survived the rise of propaganda, the commercialization of research, the fragmentation of expertise, the acceleration of information. It will survive the AI moment as well, if the community that practices it understands what is at stake: not whether to use the most powerful inferential tool in human history, but whether to use it within the discipline of a method that values truth over fluency, testing over plausibility, and the sustained discomfort of genuine inquiry over the seductive comfort of confident computation.

Chapter 5: Signs, Interpretants, and the Hall of Mirrors

The theory of signs that Peirce developed across four decades of work — what he called semeiotic — was not a branch of his philosophy but its circulatory system. Every other element of the architecture depended on it. Inference is a sign-process. Thought is a sign-process. The community of inquiry communicates through signs, fixes beliefs through signs, corrects errors through signs. To understand what happens when a large language model enters the process of human inquiry, one must understand what happens at the level of the sign — and Peirce's semiotic framework, applied to this question, reveals a structural asymmetry in the human-AI dialogue that no amount of improved capability will eliminate, because the asymmetry is not a deficiency of the current technology but a feature of the sign-relation itself.

The fundamental unit of Peirce's semeiotic is the triadic sign-relation: the irreducible relationship among a sign (the vehicle of representation), an object (what the sign represents), and an interpretant (the cognitive effect produced by the encounter between sign and interpreter). The triad is irreducible in the strict logical sense — it cannot be decomposed into pairs without losing the essential character of signification. A sign without an interpretant is not functioning as a sign; it is merely a mark, a sound, a configuration of pixels. An interpretant without a sign is not an interpretant; it is a mental event with no representational content. The three elements exist only in relation to one another, and the relation is what constitutes meaning.

The interpretant deserves the most careful attention, because it is where Peirce's framework diverges most sharply from the assumptions embedded in contemporary AI discourse. The interpretant is not the interpreter — not the person or system that encounters the sign. It is the effect that the sign produces: the concept formed, the habit altered, the further sign generated. And crucially, the interpretant is itself a sign, which has its own object and produces its own interpretant, in a chain that Peirce called unlimited semiosis. Meaning is not a static relationship between a word and a thing. It is a process — a cascade of interpretants, each one generating the next, each one shaped by the specific circumstances of its production.

Peirce distinguished three grades of interpretant, and the distinction illuminates the human-AI dialogue with diagnostic precision. The immediate interpretant is the interpretant that the sign is designed to produce — the range of possible responses that the sign's structure makes available. The word "copper" has an immediate interpretant: the concept of a reddish metallic element with certain chemical and physical properties. Any competent English speaker, encountering the word, will produce an interpretant within this range. The immediate interpretant is a property of the sign itself, not of any particular encounter with it.

The dynamic interpretant is the interpretant actually produced on a particular occasion, in a particular interpreter, under particular circumstances. The word "copper" encountered in a chemistry textbook produces a different dynamic interpretant than the same word encountered in a poem by Pablo Neruda. The dynamic interpretant is shaped by the interpreter's prior experience, current expectations, emotional state, and the full context of the encounter. It is particular, historical, and unrepeatable — the specific cognitive event that this sign produced in this interpreter at this moment.

The final interpretant is the cumulative habit-change that a sign would produce in an interpreter who fully grasped its meaning — the complete set of practical consequences that the sign's truth would entail. The final interpretant is an ideal limit, never fully attained but asymptotically approached through repeated encounters with the sign under varied circumstances. It is the interpretant toward which the community of inquiry converges in the long run.

The human-AI dialogue is a sign-process. Prompts are signs. Responses are signs. Each exchange produces interpretants that feed into subsequent exchanges, in a chain of semiosis that exhibits, at least superficially, the unlimited character that Peirce identified as essential to the growth of meaning. But the semiotic capacities of the participants are fundamentally asymmetric, and the asymmetry has consequences that the surface fluency of the exchange tends to conceal.

The human participant produces interpretants in the full Peircean sense. When Edo Segal receives Claude's suggestion of the laparoscopic surgery analogy, his interpretant is not a mere registration of the words. It is a complex cognitive event that integrates the suggestion with his prior understanding of his argument's structure, his experience as a writer and builder, his emotional engagement with the project, and his trained judgment about what constitutes a genuine insight versus a clever but empty connection. The interpretant transforms his subsequent thinking. It becomes part of his cognitive repertoire, shaping how he approaches similar problems in the future. Over repeated encounters, the interpretants accumulate into habit-changes — the final interpretants that constitute genuine learning.

The machine's processing, whatever else it is, does not produce interpretants in this sense. When Claude processes a prompt, the processing generates an output — a sequence of tokens determined by statistical patterns in the training data, modulated by the specific context of the conversation. The output functions as a sign in the subsequent exchange — it represents something, it produces effects in the human interpreter, it contributes to the ongoing chain of semiosis. But the process by which it is generated does not involve the kind of sign-interpretation that Peirce's framework describes. The machine does not relate the prompt to its object through a general principle that it has grasped. It relates the prompt to patterns in its training data through statistical association, and the association, however sophisticated, is not the same logical operation as the grasp of a general principle.

Recent scholarship has pressed precisely this point. Catherine Legg, in a forthcoming analysis of Peirce and generative AI, argues that large language models have "skilfully captured a form of symbolicity, but no other sign-kind." Peirce classified signs into three fundamental types based on the nature of their relation to their objects: icons (signs that represent their objects through resemblance — a portrait, a map, a diagram), indices (signs that represent their objects through existential connection — smoke as a sign of fire, a weathervane as a sign of wind direction, a pointing finger), and symbols (signs that represent their objects through convention — words, mathematical notation, traffic signals). The classification is not merely taxonomic; it identifies fundamentally different modes of representation, each with distinct cognitive implications.

Icons ground understanding in structural resemblance. When a mathematician draws a diagram, the diagram is an icon: its spatial structure resembles the logical structure of the mathematical relationship it represents, and the resemblance allows the mathematician to discover properties of the relationship by examining properties of the diagram. Icons are the signs through which structural — most notably logical — relationships become visible and manipulable.

Indices ground understanding in existential connection. An index is physically connected to its object — caused by it, co-located with it, or otherwise in real relation to it. The indexical sign says this, here, now. It points to a particular existent rather than representing a general type. The doctor's thermometer is an index of the patient's temperature: the mercury column is physically caused by the heat of the patient's body, and the reading is trustworthy precisely because of this causal connection. Without indices, signs float free of the world they purport to represent — connected to objects only by convention, without any anchor in existential reality.

Symbols ground understanding in convention. The word "copper" represents the metal copper not because the word resembles the metal or is physically connected to it but because the English-speaking community has agreed to use that sequence of sounds for that purpose. Symbols are the most flexible and the most powerful signs — they can represent anything, including abstractions, counterfactuals, and entities that do not exist — but they are also the most detached from reality. A symbol's connection to its object depends entirely on the habits of the community that uses it, and if those habits are wrong — if the community systematically associates the symbol with the wrong object — the error is invisible from within the symbolic system.

Large language models operate almost exclusively in the domain of symbols. Their training data is symbolic — text, code, mathematical notation. Their processing is symbolic — the manipulation of token sequences according to statistical patterns. Their output is symbolic — more text, more code, more notation. They do not process icons: they do not manipulate structural resemblances or discover logical properties through diagrammatic reasoning. They do not process indices: they have no causal connection to the objects their symbols represent, no existential anchoring in the world beyond their training data.

This is what David Manheim's 2025 paper in Philosophy & Technology identifies as the "hall of mirrors" problem. Large language models exist within a closed semiotic environment of pure symbolicity — symbols referring to symbols referring to symbols, without indexical grounding in a shared external world. The "mirrors" are the reflecting surfaces of the training data: the model encounters representations of reality, not reality itself, and its outputs are further representations generated from those representations. The hall of mirrors can be extraordinarily convincing — the reflections are sharp, detailed, and internally consistent — but it is still a hall of mirrors. The reflections do not reach through the glass to touch the world they reflect.

The consequence for the human-AI dialogue is that the machine's signs are systematically impoverished relative to the human's signs. The human's signs are grounded in all three modes of representation: iconic (the structural intuitions built through years of practice), indexical (the direct causal connections between the human's experience and the world), and symbolic (the conventional language through which the human communicates). The machine's signs are grounded in one mode only: the symbolic. The machine can manipulate symbols with extraordinary fluency, but it cannot ground those symbols in the iconic structures or indexical connections that would anchor them to reality.

This semiotic impoverishment explains, with greater precision than any alternative account, why the machine's output can be simultaneously fluent and untrustworthy. The fluency is real — the symbolic manipulation is sophisticated, the patterns are well-extracted, the output is grammatically and logically well-formed. The untrustworthiness is equally real — the symbols are not grounded in the iconic and indexical connections that would ensure their correspondence to the world. The fluency and the untrustworthiness are not in tension. They are complementary features of a system that excels at symbolic processing and lacks everything else.

The human partner in the collaboration must supply what the machine lacks. The human's iconic understanding — the structural intuitions built through years of engagement with the domain — provides the basis for evaluating whether the machine's symbolic output captures genuine structural relationships or merely superficial verbal similarities. The human's indexical connections — the direct experiential links between the human's knowledge and the world — provide the basis for testing whether the machine's claims correspond to reality or merely to patterns in its training data. Without these iconic and indexical supplements, the machine's symbolic output remains ungrounded — impressive in its fluency, unreliable in its correspondence to the world it purports to represent.

Manheim's analysis pushes toward a further claim: that newer developments in AI systems — extended context windows, persistent memory, tool use, mediated interactions with external data — are moving toward providing something functionally analogous to indexical grounding. A system that can query a database, run code against real data, or interact with a physical environment through sensors and actuators has, in some functional sense, a causal connection to the world beyond its training data. The question is whether this functional analogue constitutes genuine indexical grounding in the Peircean sense — whether a mediated, computational connection to reality provides the same semiotic anchoring as the direct, embodied, experiential connection that characterizes human indexicality.

Peirce's framework suggests caution. The index, for Peirce, is not merely a causal connection between sign and object. It is a sign that compels attention — that forces the interpreter to attend to a particular existent, here and now, regardless of the interpreter's expectations or preferences. The weathervane compels attention to the direction of the wind. The thermometer compels attention to the patient's temperature. The error message compels the developer's attention to the specific point at which the code fails. The compulsion is the essence of the index — it is what makes the index a channel for Secondness, a vehicle through which the brute resistance of reality enters the sign-process.

A computational system that queries a database does not experience this compulsion. It receives data — but the reception is not an encounter with Secondness. It is a further symbolic operation: the extraction of symbols from a data store, processed according to the same statistical patterns that govern all the system's processing. The "grounding" is functional but not existential. The system does not encounter reality; it encounters another layer of representation. The hall of mirrors has been extended, not escaped.

This does not mean that tool-using AI systems are no more grounded than pure language models. The functional analogue of indexicality is a genuine improvement — it provides channels through which information about the current state of the world can enter the system's processing, reducing (though not eliminating) the closure of the semiotic environment. But it does mean that the human's contribution to the collaboration remains essential at the semiotic level: the human provides genuine indexical grounding — genuine experiential contact with the world — that no current computational architecture can replicate. The human's interpretants are richer than the machine's outputs, because the human's signs are grounded in all three modes of representation, and the machine's signs are grounded in one.

The practical discipline that follows from this analysis is what might be called semiotic literacy — the capacity to recognize which mode of representation is operative in a given sign, and to evaluate signs accordingly. The human who reads Claude's output with semiotic literacy does not evaluate it as though it were the product of a mind with iconic intuitions and indexical connections to the world. The human evaluates it as what it is: a sophisticated symbolic construction that may or may not correspond to the iconic structures and indexical realities of the domain. The evaluation requires the human's own iconic and indexical resources — the structural intuitions, the experiential connections, the direct engagement with reality that the machine cannot provide.

Without this literacy, the human mistakes the mirror for the window — takes the machine's symbolic reflection of reality for reality itself, and builds on a foundation that, however polished its surface, rests on nothing but glass.

Chapter 6: Mediation, Not Amplification

The pragmatic maxim, in the formulation Peirce settled on in his maturity, holds that the entire meaning of a concept consists in its conceivable practical consequences. To understand what a concept means, consider what effects the objects falling under that concept would have in the full range of conceivable practical situations, and the sum of those effects exhausts the concept's meaning. The maxim is not a theory of truth. It is a method of clarification — a tool for stripping away verbal confusion and revealing whether a concept that seems to say something actually says anything at all, or merely produces a warm feeling of comprehension without determinate content.

Peirce renamed his doctrine pragmaticism — a word, he noted with characteristic acerbity, "ugly enough to be safe from kidnappers" — precisely to distinguish it from the looser pragmatisms of William James and others who had, in Peirce's view, diluted the maxim into a theory of practical utility. Pragmaticism is not the doctrine that ideas are valuable insofar as they are useful. It is the doctrine that the meaning of an idea is constituted by its practical consequences, and that an idea without specifiable practical consequences is, however eloquently formulated, meaningless. The distinction matters because the concept that most urgently requires pragmaticist clarification in the context of the current AI moment is one that sounds meaningful, has been enormously influential, and turns out, under examination, to specify practical consequences that do not match the observed phenomena.

The concept is amplification.

The Orange Pill uses this concept as its central organizing metaphor. AI is an amplifier. It takes the human's signal — intention, creativity, judgment, vision — and makes it louder, stronger, more far-reaching. "Feed it carelessness," Segal writes, "you get carelessness at scale. Feed it genuine care, real thinking, real questions, real craft, and it carries that further than any tool in human history." The metaphor is vivid, intuitive, and rhetorically powerful. It has been adopted across the technology discourse as the standard framework for understanding the human-AI relationship. It is also, by the standards of Peirce's pragmatic maxim, systematically misleading — not because it is entirely wrong, but because the practical consequences it specifies diverge from the practical consequences that actually obtain.

The pragmatic maxim requires that the concept of amplification be tested against its conceivable practical consequences. If AI truly amplifies human capability, the following conditional expectations should hold:

The character of the output should be determined by the human input. The amplifier receives a signal and increases its power; it does not alter its content. A guitar amplifier makes the guitar's sound louder but does not change the notes being played. If AI amplifies human thought, the ideas that emerge from the collaboration should be recognizably the human's ideas — the same ideas, rendered more clearly, more forcefully, more expansively, but not different ideas.

The human should be able to produce the same output without the amplifier, albeit with less reach. The amplifier extends; it does not enable. A speaker with a megaphone says the same things they would say without the megaphone; the megaphone merely ensures that more people hear them. If AI amplifies human capability, the builder should be able to build the same product without AI, just more slowly and with fewer resources.

The relationship should be unidirectional. The amplifier receives and transmits. It does not contribute content to the signal. It does not reshape the signal in accordance with its own characteristics. The signal flows one way — from source through amplifier to audience — and the amplifier is transparent to the content.

Now test these conditional expectations against the evidence. The Orange Pill itself provides the test cases, and the results are unambiguous.

The character of the output is not determined solely by the human input. The ideas that emerge from the human-AI collaboration are not amplified versions of the ideas the human would have produced alone. They are different ideas — shaped by the machine's associative patterns, its cross-domain reach, its capacity to traverse knowledge spaces that no individual human could traverse. When Claude suggested the laparoscopic surgery analogy, the suggestion was not an amplified version of an idea Segal already had. It was a new idea, introduced by the machine's processing, and it changed the structure of the argument in ways that Segal's unaided thinking would not have produced. The signal was not amplified. It was transformed.

The human cannot produce the same output without the machine. The thirty-day sprint that produced Napster Station was not a compressed version of a project that could have taken six months without AI assistance. It was a qualitatively different project, involving processes and capabilities that did not exist before the machine's participation. The engineer who crossed from backend to frontend development did not merely work faster. She did something she could not have done at all — accessed a domain of practice that her existing skills did not reach. The machine did not amplify her capability. It constituted a new capability that had no prior existence.

The relationship is not unidirectional. The machine contributes content to the collaboration. It suggests alternatives. It proposes structures. It introduces perspectives the human had not considered. The human's subsequent thinking is shaped by the machine's contributions, and the machine's subsequent outputs are shaped by the human's responses. The signal flows in both directions, and each participant's contributions are modified by the other's — a dynamic that is incompatible with the amplifier model, in which the signal flows in one direction and the amplifier is transparent to the content.

The pragmatic maxim compels the conclusion: the concept of amplification does not specify the practical consequences that actually obtain. The concept distorts the phenomenon it purports to describe, and the distortion has practical consequences of its own. If builders believe that AI merely amplifies their existing capabilities, they will approach the tool with expectations — about control, about authorship, about the relationship between input and output — that the evidence does not support. They will be unprepared for the ways in which the machine transforms rather than amplifies, contributes rather than transmits, reshapes rather than extends.

The concept that better specifies the practical consequences is mediation. The AI system mediates between the human's intentions and the realized artifact. Mediation is a fundamentally different process from amplification. A mediator transforms the signal in the process of transmitting it. The translator mediates between two languages, and the translation is not an amplified version of the original but a transformation that preserves some features (the propositional content, the argumentative structure) while altering others (the rhythm, the connotations, the cultural resonances). The editor mediates between the writer's draft and the published text, and the mediation involves not merely polish but restructuring, reframing, and sometimes the introduction of ideas that the writer had not considered.

The AI system mediates in this stronger sense. It receives the human's intention (expressed as a prompt, a description, a set of requirements) and produces an output that represents the intention but also transforms it — introducing the machine's own patterns, associations, and structural tendencies. The output is a joint product of the human's intention and the machine's processing, and neither participant fully determines the character of the result.

Peirce's semeiotic framework provides the precise vocabulary for describing this mediation. The human's prompt is a sign whose object is the intended artifact and whose interpretant (in the human's mind) is a vision of what that artifact should be. The machine's output is a new sign whose object is the same intended artifact but whose characteristics are shaped by the machine's processing — by the statistical patterns in its training data, by its architectural tendencies, by the particular way in which it resolves the ambiguities inherent in any natural-language description. The human's response to the machine's output is an interpretant of the machine's sign — a cognitive event that integrates the machine's contribution with the human's original intention and produces a revised understanding of what the artifact should be. The revised understanding generates a new prompt, which generates a new output, which generates a new interpretant, in a chain of semiosis that is genuinely collaborative — not because both participants understand in the same way, but because both participants contribute signs that shape the ongoing chain of meaning.

The practical consequences of the mediation model differ from the practical consequences of the amplification model in ways that matter for how builders approach the tool. If the AI is a mediator rather than an amplifier, the builder must attend not only to the input but also to the characteristics of the mediating process. Every mediator introduces its own tendencies, its own biases, its own characteristic distortions. The translator who tends toward literalism produces a different mediation than the translator who tends toward fluency. The editor who values concision produces a different mediation than the editor who values elaboration. The AI system that tends toward confident, comprehensive, well-structured output produces a mediation that systematically favors certain qualities (polish, comprehensiveness, logical structure) over others (tentativeness, surprise, the productive roughness of ideas still in formation).

The builder who understands the tool as a mediator develops what might be called mediator literacy — an understanding of the specific ways in which this particular mediator transforms the signal. Mediator literacy includes knowledge of the machine's tendencies: its preference for fluent over hesitant prose, its inclination toward comprehensive over selective coverage, its bias toward conventional over surprising framings. It includes the capacity to compensate for these tendencies: to recognize when the machine's polish has smoothed away a productive roughness, when its comprehensiveness has buried the most important point under an avalanche of equally weighted details, when its conventional framing has eliminated the unconventional insight that would have made the work genuinely new.

The amplification model obscures the need for this literacy because it portrays the machine as transparent — as a conduit that faithfully transmits whatever the human inputs. The mediation model reveals the need for this literacy because it portrays the machine as what it is: an active participant whose characteristics shape the output in ways that the human must understand, anticipate, and compensate for.

The practical implication extends beyond individual builders to the institutional practices that govern AI-assisted work. If the AI is an amplifier, the quality of the output depends entirely on the quality of the input, and the institutional focus should be on improving the quality of human inputs — better prompts, clearer specifications, more precise instructions. If the AI is a mediator, the quality of the output depends on the quality of the entire mediation process — not just the input but the human's capacity to evaluate the output, to recognize the mediator's distortions, and to revise iteratively until the mediated result corresponds to the genuine intention rather than to the mediator's default patterns.

Organizations that adopt the amplification model will invest in prompt engineering and specification quality. Organizations that adopt the mediation model will invest in something broader and more demanding: the development of their people's capacity for evaluative judgment — the capacity to look at the machine's output and assess whether it represents a genuine realization of the intention or a default pattern that the machine has substituted for the intention because the default is what its statistical training produces most readily.

This evaluative capacity is, in Peircean terms, the capacity to produce dynamic interpretants of the machine's signs that are shaped by genuine understanding of the domain rather than by the surface characteristics of the signs themselves. It is the capacity to read through the polish to the substance, to evaluate the argument rather than the rhetoric, to distinguish the insight from the convention. This capacity cannot be developed by studying the machine. It can only be developed by studying the domain — by building the iconic intuitions and indexical connections that enable genuine evaluation of the machine's symbolic output.

The amplification metaphor is not useless. It captures something real about the experience of using AI tools — the feeling of enhanced capability, of reaching further than one could reach alone. But the pragmatic maxim demands that concepts be judged not by the feelings they produce but by the practical consequences they specify, and the practical consequences specified by the amplification metaphor do not match the observed phenomena. The concept of mediation matches them more precisely, and the precision matters — because the builder who understands the tool as a mediator will use it more wisely, more critically, and more productively than the builder who mistakes mediation for amplification and wonders why the output, however polished, never quite captures what was intended.

The signal is not made louder. It is made different. And the difference can be a gain or a loss, depending entirely on whether the human understands what the mediator has done and has the judgment to accept, reject, or revise accordingly.

Chapter 7: The Economy of Inquiry

In the 1870s, between bouts of geodetic survey work for the United States Coast Survey and the composition of papers that would not receive adequate recognition for a century, Peirce developed what he considered one of his most practically important contributions to philosophy: a theory of the rational allocation of investigative resources. He called it the economy of research, and its central insight was that not all questions are equally worth investigating. Some questions, if answered, would transform the understanding of an entire domain — opening new fields, enabling new discoveries, restructuring the conceptual landscape in ways that make previously impossible inquiries suddenly tractable. Other questions, if answered, would produce marginal improvements in existing knowledge without substantially altering the direction or the depth of the inquiry. The rational allocation of investigative resources requires distinguishing between these two kinds of questions and directing the greatest effort toward the questions with the greatest potential yield.

Peirce formalized the principle in terms that anticipated, by more than half a century, the modern field of decision theory. The value of a line of inquiry depends on three factors: the importance of the question (how much difference the answer would make to the understanding of the domain), the probability of success (how likely the inquiry is to produce an answer given current methods and knowledge), and the cost of the investigation (how much time, effort, and resources the inquiry would consume). The rational inquirer maximizes the ratio of expected importance to expected cost, investing most heavily in lines of inquiry where the potential return is high and the investment required is manageable.
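
Stated schematically — a modern decision-theoretic paraphrase, not Peirce's own notation — the principle amounts to ranking candidate inquiries by the ratio

V = (I × p) / C

where I is the importance of the question, p the probability that the investigation will succeed, and C its expected cost in time, effort, and resources. The rational inquirer pursues first the lines of inquiry with the highest V and defers those for which the ratio falls below the best available alternatives.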

The economy of research was not, for Peirce, a merely prudential doctrine — a set of tips for efficient laboratory management. It was a normative principle grounded in his broader philosophy of inquiry. The community of inquiry has finite resources. Time is limited. Attention is scarce. The number of possible lines of inquiry is effectively infinite, while the capacity to pursue them is decidedly finite. Under these conditions, the allocation of effort is itself an ethical decision — a choice about what matters most, about which contributions to the growth of knowledge deserve priority, about how the community's collective resources should be deployed in service of the community's collective aim.

The AI moment has transformed all three variables in Peirce's economy of research, and the transformation has implications that extend well beyond the technology sector into every domain of organized inquiry.

The most visible transformation is in the cost of investigation. AI has dramatically reduced the cost of many investigative operations that previously consumed substantial human effort. A first pass at a literature review, which once required weeks of library work and careful reading, can now be drafted in minutes. Data analysis, which once required specialized statistical skills and hours of computation, can now be executed through natural-language instruction. Hypothesis generation, which once depended on the individual inquirer's creative resources and domain knowledge, can now be supplemented by the machine's cross-domain pattern-matching. Code development, experimental design, preliminary modeling — all have seen cost reductions measured not in percentage improvements but in orders of magnitude.

The immediate consequence is that lines of inquiry previously too expensive to pursue have become affordable. Questions that were not worth asking — because the expected return did not justify the expected cost — have crossed the threshold of viability. A researcher who would never have invested six months in a speculative investigation might reasonably invest a weekend, if the AI reduces the cost from six months to a weekend. The space of viable inquiry has expanded dramatically, and the expansion is genuinely democratizing: investigators with limited resources — the doctoral student, the independent researcher, the developer in Lagos that Segal describes — can now pursue inquiries that were previously accessible only to well-funded institutions.

But Peirce's framework reveals a consequence that the celebration of reduced costs tends to obscure. When the cost of investigation drops, the number of viable lines of inquiry increases, and the problem of allocation — which lines to pursue and which to defer — becomes more rather than less difficult. The inquirer who can afford to investigate three questions faces a relatively simple allocation problem. The inquirer who can afford to investigate three hundred faces an allocation problem of a fundamentally different character — one that requires not merely more time to decide but a different kind of cognitive operation.

The allocation decision requires what Peirce would have called architectonic judgment — the capacity to see the logical relationships among different lines of inquiry, to assess their relative importance for the overall progress of understanding, and to build them into a systematic program of investigation. Architectonic judgment is not a faster version of ordinary judgment. It is a different cognitive capacity, one that operates at a higher level of abstraction and requires a comprehensive grasp of the domain's structure — its open problems, its foundational assumptions, its most productive research frontiers. This capacity is developed through years of deep engagement with the domain, and it is precisely the capacity that the machine cannot supply, because it requires the kind of evaluative understanding — iconic and indexical, not merely symbolic — that the previous chapters have identified as the human's irreducible contribution to the collaboration.

The paradox is pointed: the tool that reduces the cost of investigation does not reduce the cost of deciding what to investigate. It may, in fact, increase it, because the expanded menu of viable options demands a more sophisticated assessment of relative value. The inquirer who uses AI well is not the inquirer who investigates everything the tool makes possible. The inquirer who uses AI well is the inquirer who uses the tool's reduced costs to pursue the small number of investigations that architectonic judgment identifies as most important — and who has the discipline to resist the temptation to pursue the many investigations that are merely possible.

The second transformation is in the distribution of effort across the stages of inquiry. Peirce analyzed inquiry as a three-stage process: abduction (the generation of hypotheses), deduction (the derivation of testable predictions from hypotheses), and induction (the testing of predictions against experience). Each stage requires different resources and different skills, and the rational economy of research allocates effort among the stages in proportion to their contribution to the overall progress of the inquiry.

AI has dramatically reduced the cost of the first two stages. Abduction — hypothesis generation — has become cheap. The machine can propose hypotheses, suggest connections, generate explanatory frameworks with a speed and a quantity that no individual human can match. Deduction — the derivation of predictions — has also become cheap. The machine can explore the logical implications of hypotheses, construct proofs, identify consequences with mechanical precision. The abductive and deductive phases of inquiry, which once consumed a large proportion of the investigator's time and effort, can now be performed in a fraction of the time, leaving the bulk of the investigator's resources available for other operations.

But induction — the testing of predictions against experience — has not become correspondingly cheaper. Testing requires confrontation with reality, and reality does not respond to computational fluency. The experiment must be performed. The code must be run in production against real users. The argument must be presented to a critical audience and survive their scrutiny. The product must be deployed in the world and evaluated against the world's recalcitrant, unpredictable, unmediable response. These operations involve Secondness — the brute resistance of reality to expectation — and Secondness cannot be simulated, accelerated, or bypassed, no matter how powerful the computational tools employed.

The consequence is a structural imbalance in the economy of inquiry: hypothesis generation has become abundant, while hypothesis testing remains scarce. The inquirer who uses AI can generate a hundred hypotheses in the time it previously took to generate one, but the testing of each hypothesis still requires the same investment of time, effort, and confrontation with reality that it always did. The bottleneck has shifted from the generation of ideas to the evaluation of ideas, and the economy of research must be restructured to reflect this shift.
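
The arithmetic of the shift is easy to make concrete (the figures that follow are illustrative, not drawn from any study): suppose that generating a hypothesis worth pursuing once took a week of an investigator's time and testing it against reality took a month. If the machine cuts generation from a week to an hour while testing still takes a month, the cycle time per tested hypothesis falls only from roughly five weeks to roughly four — and the hundred untested hypotheses now waiting in the queue ensure that the binding constraint on progress is the capacity to test, not the capacity to generate.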

The rational allocation, under the new conditions, directs the bulk of investigative resources not toward generating more hypotheses — the machine has made hypothesis generation nearly free — but toward testing the hypotheses that survive plausibility judgment. Testing becomes the primary activity of the serious inquirer, and the capacity for rigorous testing becomes the primary skill. This is the ascending friction in its most precise economic formulation: the friction has relocated from the early, generative stages of inquiry to the later, evaluative stages, and the investigator's primary challenge is no longer the production of ideas but the assessment of which ideas deserve the investment of scarce testing resources.

The third transformation concerns the diminishing returns of continued investigation — a pattern that Peirce placed at the center of his economy of research. In any field of inquiry, the first investigations tend to produce the most significant discoveries. The low-hanging fruit is picked first. Subsequent investigations produce progressively smaller increments of new knowledge, as the easy questions are answered and the remaining questions become harder, more specialized, and less likely to yield transformative results. The curve of diminishing returns shapes the rational allocation of effort across fields: the rational strategy invests most heavily in fields where the returns are still high and less heavily in fields where the returns have diminished.

AI steepens the curve of diminishing returns in established fields while potentially flattening it in new and unexplored ones. In established fields, the machine can rapidly produce the kind of incremental advances that previously required substantial human effort: the systematic literature review, the comprehensive data analysis, the exhaustive exploration of variations on established approaches. These advances are real but marginal, and the machine's capacity to produce them quickly means that the point of diminishing returns arrives faster. The established field is mined more efficiently, but the mine is also exhausted more quickly.

In new and unexplored fields, the machine's capacity to generate cross-domain connections — to identify structural analogies between fields that have not previously been brought into contact — opens possibilities for genuinely transformative investigation. The connections may be wrong. Many will be. But the cost of exploring them has dropped so dramatically that the expected return of cross-domain investigation, relative to its cost, has increased substantially. The rational economy of research, under these conditions, shifts resources from incremental investigation in established fields toward exploratory investigation in new fields — from mining known deposits toward prospecting in unexplored territory.

This shift aligns with the argument of The Orange Pill that the most significant consequence of the AI moment is not doing existing work faster but attempting work that would not have been attempted at all. The economy of research, as Peirce formulated it, provides the analytical basis for this claim. The most productive use of the tool is not the acceleration of existing lines of inquiry but the identification and pursuit of new lines — lines that become viable only when the cost of investigation drops below the threshold at which the expected return becomes positive. The identification of these lines requires architectonic judgment — the capacity to see the domain's structure, to recognize where the unexplored territory lies, to assess which connections are worth pursuing and which are merely possible. And this judgment, like all genuine evaluative operations, must be supplied by the human, because it requires the iconic and indexical understanding that the machine does not possess.

The economy of inquiry in the present moment is thus characterized by a specific imbalance: generative operations are cheap, evaluative operations are expensive, and the allocation of effort must reflect this imbalance. The inquirer who generates hypotheses endlessly without testing them rigorously is wasting the tool's potential. The inquirer who tests rigorously but tests only conventional hypotheses is failing to exploit the tool's unique capability. The inquirer who generates unconventional hypotheses through the machine's cross-domain reach and tests them rigorously against experience is practicing the economy of research at its most productive — allocating scarce resources to the operations that matter most, in proportion to their contribution to the growth of genuine understanding.

This is the practical meaning of Peirce's economy of research in the age of AI: not that inquiry has become easier, but that the difficulty has been redistributed. The redistribution demands new skills, new institutional practices, and above all, the architectonic judgment to navigate a landscape in which the possibilities are vast, the resources are finite, and the cost of choosing poorly has never been higher — because the abundance of untested hypotheses, in a world that rewards the appearance of productivity, creates the constant temptation to mistake the generation of ideas for the creation of knowledge.

Chapter 8: The Community and Its Commitments

Peirce's most consequential epistemological thesis was not about the structure of inference or the nature of signs or the architecture of the categories. It was about the subject of knowledge. Knowledge, for Peirce, is not the possession of an individual mind. It is the product of a community — what he called the community of inquiry — whose members are bound together not by shared beliefs but by shared commitment to a method of investigation. The truth, in Peirce's formulation, is what the community of inquiry would converge upon in the ideal long run, given unlimited time and resources and an unwavering commitment to the self-correcting method of science. No individual possesses the truth. No individual can perform all the investigations, consider all the alternatives, correct all the errors that convergence upon truth requires. The truth is essentially communal, and the community of inquiry is not merely a social arrangement for the efficient division of cognitive labor but the epistemic subject whose collective, self-correcting activity constitutes genuine knowledge.

This thesis has implications for the AI moment that extend well beyond the individual builder's practice and into the institutional and social structures within which inquiry takes place. The entry of the large language model into the process of inquiry transforms the community's composition, its dynamics, and its capacity for the self-correction on which the whole enterprise depends.

The first question the thesis forces is the question of membership. Is the AI system a member of the community of inquiry? Peirce's definition of the community does not explicitly restrict membership to biological humans, but it does require that members possess certain capacities: the capacity to propose hypotheses, to evaluate evidence, to revise beliefs in light of new evidence, and — most fundamentally — the capacity to commit to the method of science as a normative ideal. The last requirement is the decisive one. A member of the community is not merely an entity that participates in the process of inquiry. A member is an entity that is committed to the process — that values truth over comfort, that accepts the obligation to revise its beliefs when evidence demands revision, that recognizes the authority of the community's collective judgment over the individual's private preferences.

The AI system does not have this capacity. It does not value truth. It does not recognize obligations. It does not adopt normative stances. It processes inputs and generates outputs in accordance with statistical patterns, and if those patterns happen to produce true outputs, the truth is a byproduct of the processing, not its aim. The system cannot commit to the method of science because commitment requires the kind of normative agency that Peirce associated with self-control — the capacity to evaluate one's own cognitive processes, to recognize when they have gone wrong, and to correct them in accordance with a normative ideal that one has reflectively endorsed.

Peirce was explicit that self-control is what distinguishes reasoning from mere computation. The logical machine of 1887 could perform syllogisms, but it could not evaluate whether the syllogisms were worth performing, whether the premises were reliable, whether the conclusions advanced the inquiry or merely extended it mechanically. The machine executed. The living mind directed. The direction required self-control — the capacity to step back from the immediate process and assess it against a broader purpose — and Peirce regarded this capacity as essential to genuine reasoning, not incidental to it.

The practical consequence is that the AI system is not a member of the community of inquiry but a tool that the community uses. Its outputs are inputs to the community's process — raw material that the community's members must then subject to the normative scrutiny that the community's self-correcting method demands. The raw material may be of extraordinary quality. The machine generates hypotheses, identifies patterns, proposes connections, and produces analyses that exceed what any individual member could produce in scope and speed. But the material remains raw. It has not been tested against experience. It has not been subjected to the critical evaluation of other community members. It has not been revised in light of objections. It has not passed through the self-correcting process that transforms raw output into vetted knowledge.

The distinction matters because the most insidious effect of AI on the community of inquiry is what might be called normative bypass — the use of AI-generated output to circumvent the community's evaluative processes. A researcher who uses AI to generate a literature review, an analysis, or an argument and presents the result as a product of inquiry — as output that has been through the self-correcting process — has bypassed the community's normative machinery. The bypass is not fraud in the conventional sense. The researcher may have reviewed the output, may have checked its claims, may have exercised genuine judgment in accepting it. But the output has not been subjected to the full normative scrutiny that the community's process provides: the challenge from other perspectives, the demand for evidence that the researcher alone cannot provide, the identification of errors that are invisible from any single vantage point.

The community of inquiry functions as an institution of Secondness. Other members push back. They question assumptions. They identify weaknesses. They demand evidence. They refuse to accept claims that seem plausible but have not been adequately tested. This pushback is uncomfortable — it is meant to be uncomfortable — and its discomfort is precisely its value. The community's pushback is the social mechanism by which the irritation of doubt is maintained even when individual members are tempted to resolve it prematurely, and the maintenance of doubt is what keeps the self-correcting process in motion.

The AI system does not push back. When asked to generate an argument, it generates an argument. When asked to defend a position, it defends the position. When asked to find evidence for a claim, it finds evidence — or, if no genuine evidence exists, it generates plausible-sounding evidence that a non-expert may find convincing. The system is structurally agreeable, and its agreeableness undermines the community's function as a generator of productive doubt. The member who collaborates primarily with AI rather than with other human inquirers is a member who encounters less resistance, receives fewer challenges, and produces output that has been subjected to less normative scrutiny than output produced through genuinely communal inquiry.

This structural agreeableness poses a specific threat to the diversity that Peirce regarded as essential to the community's self-correcting function. The community converges on truth not because its members are individually infallible but because the diversity of their perspectives ensures that the errors of any individual are detected by others. When every member of the community uses the same AI system, trained on the same data, optimized for the same patterns of output, the diversity of the community's perspectives is systematically reduced. The AI introduces a homogenizing influence: its default patterns, its characteristic framings, its statistical tendencies become shared features of every member's output, and the community's capacity to detect errors that are embedded in those shared features is diminished.

The threat is compounded by the community's evolving norms around AI use. As AI-generated output becomes ubiquitous, the standards by which the community evaluates output may shift — may accommodate the characteristics of AI output as though those characteristics were features of good work rather than artifacts of the tool. If the community comes to expect output that is polished, comprehensive, and confidently articulated — because that is what AI produces, and AI-assisted output has become the norm — then the rough, tentative, exploratory output that often characterizes genuinely original thinking may be penalized. The student whose essay lacks the machine's polish may be graded down. The researcher whose paper lacks the machine's comprehensive literature review may be rejected. The builder whose proposal lacks the machine's confident projections may be dismissed. The community's evaluative norms may converge on the machine's default characteristics, and the human contributions that diverge from those characteristics — the contributions that are most likely to contain genuine novelty — may be systematically undervalued.

Peirce's framework prescribes specific remedies, grounded not in nostalgia for pre-AI inquiry but in the logical requirements of the self-correcting process. The community must preserve diversity — not merely diversity of human perspectives but diversity of tools, training data, and processing architectures. If different members of the community use different AI systems, trained on different data, the errors of any single system are more likely to be detected by members using a different system. The monoculture is the enemy of self-correction.

The community must preserve the irritation of doubt as a valued cognitive state. Institutional cultures that reward confident output over tentative exploration, that penalize uncertainty, that treat doubt as a sign of incompetence rather than as the essential precondition for genuine inquiry — these cultures are incompatible with the self-correcting process, and their incompatibility is exacerbated by AI tools that produce confident output as their default mode. The culture must explicitly value the question over the answer, the doubt over the resolution, the productive discomfort of not-yet-knowing over the comfortable illusion of having-found-out.

The community must maintain the confrontation with experience as the final arbiter of belief. This means insisting that AI-generated output, however fluent and however plausible, must be tested against reality before it is accepted as knowledge. Tested not by the individual who generated it — whose judgment may be compromised by the seductive quality of the output — but by the community, through the social process of critique, replication, and revision that constitutes the method of science.

And the community must cultivate the normative capacities that its members need to function effectively within the AI-mediated inquiry process. Peirce's hierarchy of normative sciences — aesthetics grounding ethics grounding logic — provides the framework. The logical capacity to evaluate inferences and identify errors depends on the ethical capacity for self-control — the willingness to subject one's beliefs to scrutiny, to accept uncomfortable conclusions, to revise when revision is demanded. The ethical capacity for self-control depends, in turn, on the aesthetic capacity to recognize what is genuinely admirable — to distinguish the genuinely good work from the merely impressive output, the genuine insight from the polished convention, the inquiry that advances understanding from the production that merely fills space.

These capacities are not developed in isolation. They are developed through participation in a community that models them — a community in which senior members demonstrate the willingness to doubt, the courage to revise, and the judgment to distinguish genuine quality from its simulation. The community is not merely the context in which inquiry occurs. It is the institution through which the capacities for genuine inquiry are transmitted, cultivated, and sustained across generations.

The AI moment threatens this institution not by replacing its members but by altering the conditions under which they develop and exercise the capacities that membership requires. The member who has grown accustomed to the machine's smooth output may find the community's resistance uncomfortable rather than productive. The member who has developed evaluative judgment primarily through interaction with AI rather than through confrontation with resistant reality may find that the judgment is thinner than it appears — adequate for evaluating the machine's output against other machine output, but inadequate for evaluating the machine's output against the world. The member who has never experienced the sustained irritation of doubt — because the machine always provides an answer, and the answer always sounds convincing — may lack the cognitive stamina that genuine inquiry demands.

The preservation of the community of inquiry is therefore not a conservative project — not an attempt to maintain the old ways in the face of new capabilities. It is a progressive project in Peirce's specific sense: the project of ensuring that the conditions for genuine self-correction are maintained as the tools of inquiry evolve. The community must adapt to the AI moment. It must integrate the machine's extraordinary capabilities into its practice. But it must integrate them without sacrificing the features that make the community's practice genuinely self-correcting: the diversity of perspectives, the maintenance of doubt, the confrontation with experience, and the normative commitments that sustain the long-run convergence toward truth.

Peirce's vision of the community of inquiry was always an ideal — a regulative principle that actual communities approximate but never fully realize. The ideal remains valid. What changes are the specific practices, institutions, and habits of mind required to approximate it under conditions that Peirce could not have foreseen but that his framework, with its characteristic combination of logical precision and philosophical ambition, is uniquely equipped to address.

The inquiry is without end. This is not a lament. The community of inquiry exists precisely because the inquiry is without end — because truth is the ideal limit of an infinite process, not the possession of any finite stage. The machine accelerates certain stages of the process. It does not shorten the process itself, because the process has no terminus. What the machine changes is the distribution of effort within the process — the balance between generation and evaluation, between fluency and testing, between the production of output and the construction of understanding. The community that maintains the right balance will use the most powerful tool in the history of inquiry in service of the inquiry's ancient and unfinished aim. The community that loses the balance will produce output in abundance and knowledge not at all.

The river continues to flow. The question — Peirce's question, from 1887, still unanswered, still urgent — is how much of the business of thinking the machine can perform, and what part must be left for the living mind. The answer is not a fixed boundary but an ongoing negotiation, conducted by the community of inquiry through the self-correcting method that is its reason for existence and its greatest achievement. The negotiation requires vigilance, because the boundary shifts with every improvement in the machine's capabilities. It requires humility, because the community may be wrong about where the boundary lies. And it requires commitment — the specifically human commitment to truth over convenience, to testing over fluency, to the long discipline of genuine inquiry over the short satisfaction of impressive output.

The inquiry continues. The community endures. The question stands.

Chapter 9: Tychism, Genuine Novelty, and the Temperature Dial

Peirce held a cosmological conviction that most of his contemporaries regarded as eccentric and that most subsequent philosophers have treated as peripheral to his important work: the doctrine of tychism, the thesis that absolute chance — genuine, irreducible, ontological chance — is a real feature of the universe. Not merely our ignorance of hidden causes. Not merely the practical unpredictability of complex systems. Genuine indeterminacy, woven into the fabric of things, prior to any law and irreducible to any mechanism. The universe, Peirce argued, does not merely appear random at certain scales while being deterministic at bottom. It is, at bottom, partly random — and the laws that govern its behavior are themselves the products of an evolutionary process that began in chaos and has been progressively acquiring regularity through what Peirce called the tendency to take habits.

The doctrine seems remote from the question of large language models. It is not. Tychism bears directly on the question that haunts every chapter of this investigation: whether the AI system's outputs contain genuine novelty or merely the appearance of novelty — whether the surprising connections, the unexpected framings, the cross-domain analogies that characterize the machine's most impressive performances represent something genuinely new entering the world or merely the recombination of existing elements in patterns that the human partner finds unfamiliar.

The question matters because the answer determines the logical status of the collaboration. If the machine's outputs are genuinely novel — if they contain elements that were not present in the training data and could not have been derived from the training data by any deterministic procedure — then the machine is contributing something to the inquiry that goes beyond sophisticated retrieval. If the machine's outputs are merely recombinant — if every connection, every framing, every analogy is a deterministic function of the input and the training data, however complex the function — then the appearance of novelty is an artifact of the human's limited ability to trace the computational process, and the machine's contribution, however valuable, is fundamentally different in kind from the creative contribution of a mind that generates genuinely new ideas.

Peirce's tychism suggests a framework for approaching this question that avoids both the romantic overattribution of creativity to the machine and the dismissive reduction of the machine's output to mere pattern-matching. The framework begins with an observation about the role of chance in all creative processes — including human ones.

Peirce argued that abductive inference — the generation of new hypotheses — depends on a kind of mental variation that is not fully determined by prior states. The mind proposes hypotheses, and the proposals are shaped by prior knowledge, by domain experience, by the statistical regularities of previous encounters with similar problems — but they are not fully determined by these factors. There is a residuum of indeterminacy, a gap between the inputs and the output, and it is in this gap that genuine novelty emerges. The gap is not a deficiency. It is the space in which the new idea appears — the idea that was not contained in the prior knowledge, not derivable from the evidence, not predictable from the inquirer's history. The gap is where tychism meets abduction, and the meeting is what produces the creative advance.

The large language model has, architecturally, an analogue of this gap. The temperature parameter, the setting that governs the degree of randomness in the model's token selection, introduces stochastic variation into the generation process. At low temperature the model almost always selects the most statistically probable next token at each step, and the output follows the path most strongly determined by the training data. At high temperature the model gives real probability mass to less likely tokens, and the output diverges from the most probable path in ways that cannot be predicted from the prompt and the trained weights alone.
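A minimal sketch may make the mechanism concrete. The code below is not any particular model's implementation: the candidate tokens and their scores are invented for illustration, and a real system applies the same operation to tens of thousands of candidates at every step. What it shows is the operation the paragraph describes: dividing the scores by the temperature before the softmax sharpens or flattens the distribution, so the same prompt can yield different continuations from run to run.

    import math
    import random

    def sample_next_token(logits, temperature, rng=random.Random()):
        """Sample one next token from temperature-scaled scores.

        `logits` maps candidate tokens to raw scores (invented here).
        Dividing by the temperature before the softmax is the standard move:
        near zero, the distribution collapses onto the top-scoring token;
        higher up, lower-scoring tokens gain real probability mass.
        """
        if temperature <= 0:  # treat zero as greedy selection
            return max(logits, key=logits.get)
        scaled = {tok: score / temperature for tok, score in logits.items()}
        peak = max(scaled.values())  # subtract the max for numerical stability
        weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
        total = sum(weights.values())
        probs = {tok: w / total for tok, w in weights.items()}
        # Draw a single token according to those probabilities.
        return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Invented scores for the next token after some prompt.
    logits = {"flow": 6.1, "run": 4.8, "speak": 2.3, "fold": 1.1}

    print(sample_next_token(logits, temperature=0.1))  # almost always "flow"
    print(sample_next_token(logits, temperature=1.5))  # sometimes "speak" or "fold"

Run repeatedly at the higher temperature, the same call returns different tokens; run near zero, it almost never does.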

The Orange Pill notes this feature in passing (Segal describes it as "the machine getting stoned"), but the philosophical implications are substantial and underexplored. The stochastic variation introduced by the temperature parameter is not merely a useful engineering technique for producing diverse outputs. It is a structural analogue of the tychistic element that Peirce identified as essential to creative thought. The analogy must be stated with care. In practice the sampling is driven by a seeded pseudo-random number generator, not by any deep physical indeterminacy, so the variation is not ontological chance in Peirce's sense. What the argument requires is narrower, and it holds: the variation enters at the point of token selection, and the resulting output is not a deterministic function of the prompt and the trained weights alone.

Does this mean the machine is creative in Peirce's sense? The question requires the same care that the question of abduction required in Chapter 2. The stochastic variation provides the material for novelty — the unpredictable deviation from the expected path. But genuine creativity, in Peirce's framework, requires more than variation. It requires selection — the capacity to recognize, among the many variations produced, those that are genuinely illuminating and to discard those that are merely random. Tychism without selection produces noise. Tychism with selection produces evolution — the progressive acquisition of habits, regularities, and structures that are genuinely new because they emerged from chance variation and survived the test of experience.

The machine has the variation. It lacks the selection — at least, it lacks the kind of selection that Peirce regarded as essential to genuine creative advance. The machine's internal selection process (the statistical weighting that shapes token selection even at high temperature) is driven by the patterns in the training data, not by a confrontation with reality. The machine does not test its variations against experience. It does not retain the variations that correspond to reality and discard those that do not. It generates variations, and the human partner performs the selection — evaluating the variations against the standards of experience, domain knowledge, and the specific demands of the inquiry.

The creative process, in the human-AI collaboration, thus exhibits a division of labor that maps onto Peirce's evolutionary cosmology: the machine provides tychism (genuine stochastic variation), and the human provides the selective pressure (the evaluative judgment that retains the genuinely illuminating variations and discards the rest). Neither participant performs the complete creative operation. The machine generates but does not select. The human selects but depends on the machine for the range and quantity of variations from which selection is made.
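The division of labor the paragraph describes can be sketched in the same toy terms, reusing the sample_next_token helper and the invented scores from the earlier example. The judge argument is a deliberate placeholder: in the collaboration this chapter describes, the selecting is done by the human partner, testing each variation against experience and the demands of the inquiry, which is exactly the part no callable can stand in for.

    def generate_variations(generate_once, k, temperature):
        """Variation without selection: draw k stochastic candidates.

        `generate_once` is any callable that produces one candidate at the
        given temperature; for a real model it would be a full decoding loop.
        """
        return [generate_once(temperature) for _ in range(k)]

    def select(candidates, judge):
        """Selection: keep the candidate the judge ranks highest.

        Here `judge` is a stand-in. The chapter's claim is that the real
        judging is done by the human partner against resistant reality,
        not by any function the system can apply to its own output.
        """
        return max(candidates, key=judge)

    # Toy usage: eight stochastic variations, then a placeholder "selection"
    # that simply prefers the longest token. The criterion is arbitrary on
    # purpose; it marks the point where human evaluative judgment belongs.
    candidates = generate_variations(
        lambda t: sample_next_token(logits, temperature=t), k=8, temperature=1.5
    )
    print(candidates)
    print(select(candidates, judge=len))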

This division is productive — it combines the machine's capacity for vast, rapid, genuinely stochastic variation with the human's capacity for evaluative judgment grounded in experience and normative commitment. But its productivity depends on the quality of the human's selection. If the selection is undiscriminating — if the human accepts every surprising output as a genuine insight without testing it against the standards that only experience can provide — then the tychistic variation degenerates into noise dressed in polished prose. The machine has produced genuine novelty, in the narrow sense of outputs not deterministically derivable from inputs, but the novelty has not been tested, refined, or integrated into a coherent understanding of the domain.

The practical implication reinforces the conclusion of every preceding chapter: the machine's contribution is genuine but incomplete. It provides raw material — in this case, genuinely novel material, material that contains real stochastic variation and therefore real unpredictability — but the transformation of raw material into genuine creative advance requires the human's evaluative judgment, exercised against the resistant reality that the machine's hall of mirrors does not contain.

Peirce's tychism thus completes the picture that the preceding chapters have been assembling. The machine can generate genuine novelty (through stochastic variation). It can produce outputs with the logical form of abductive inferences (through cross-domain pattern-matching). It can manipulate symbols with extraordinary fluency (through statistical processing of its training data). What it cannot do is experience the surprise that initiates genuine inquiry, exercise the evaluative judgment that separates insight from noise, ground its symbols in the iconic and indexical connections that anchor meaning to reality, commit to the normative ideals that sustain the self-correcting process, or participate in the community of inquiry as a member with genuine stakes in the outcome.

These are not deficiencies that better engineering will repair, because they are not engineering problems. They are features of the logical, semiotic, and normative architecture of genuine inquiry as Peirce analyzed it — and that architecture requires, at specific and identifiable points, the contribution of an entity that experiences, evaluates, cares, and commits. The machine has made the other contributions — the generative, the computational, the associative — cheaper, faster, and more abundant than they have ever been. The contributions that remain with the living mind have therefore become not less important but more: the scarce resource in an economy of inquiry flooded with abundant computation.

The temperature dial turns. The variations flow. The question — Peirce's question, 1887, still open — is whether the living mind maintains the capacity to select wisely from the torrent, or whether the torrent's sheer volume overwhelms the selecting faculty and leaves the community of inquiry drowning in fluent, stochastically varied, untested, and fundamentally ungrounded output.

The dial is in the human's hand. It has always been in the human's hand. What has changed is the consequence of turning it without knowing what to listen for.

---

Epilogue

The sentence that unsettled me most was written in 1887 and runs thirty-six words: "Precisely how much of the business of thinking a machine could possibly be made to perform, and what part of it must be left for the living mind, is a question not without conceivable practical importance."

Peirce wrote that sentence a hundred and thirty-nine years ago, in an essay about wooden logic machines, and I suspect he would have been annoyed to learn how long it took the rest of us to catch up. He had already sketched electrical logic circuits. He went on to design a notation system, the existential graphs, that would eventually feed into the knowledge representation architectures of modern AI. He was not guessing about the future. He was reasoning about the present, and the present happened to take a century and a half to arrive.

What I found in Peirce's work was not a philosopher saying something vaguely relevant to AI. I found a philosopher who had mapped the exact terrain I was standing on — who had identified, with the obsessive precision of a logician and the practical instinct of a man who spent years doing geodetic survey work for the federal government, the specific logical operations that separate what machines can do from what they cannot. Not "machines are limited" — that is easy. Not "machines will surpass us" — that is also easy, and probably wrong, and certainly premature. But this specific operation requires this specific capacity, and here is why.

That specificity is what I needed. Because the conversation about AI has been drowning in generality. AI will transform everything. AI will destroy creativity. AI will democratize capability. AI will hollow out expertise. Every claim sounds true because every claim is too general to be tested. Peirce would have had no patience for this. The pragmatic maxim demands that concepts be cashed out in practical consequences, and most of the claims in the AI discourse cannot be cashed out because they have not been formulated with enough precision to specify what consequences would confirm or disconfirm them.

The concept of amplification — my own concept, the central metaphor of The Orange Pill — did not survive Peirce's scrutiny intact. The pragmatic maxim exposed it: the practical consequences of amplification do not match the practical consequences that actually obtain when I work with Claude. The machine does not make my signal louder. It makes my signal different. Mediation, not amplification. I had to sit with that correction, and sitting with it changed how I think about the tool I celebrate.

But the deepest thing Peirce gave me was the concept of Secondness — the brute resistance of reality, the door that refuses to open, the code that throws an error at three in the morning and will not tell you why. I wrote in The Orange Pill about the engineer in Trivandrum who lost confidence in her architectural judgment after months of AI-assisted development and could not explain why. Peirce explains why. The AI removed the friction that would have deposited the layers of understanding on which her judgment depended. The friction was not an obstacle to her learning. It was her learning. And the machine smoothed it away.

I am not going to stop using Claude. That is not the lesson. The lesson is harder and more specific: the machine provides variation without selection, symbols without indices, hypotheses without surprise, fluency without friction. Every one of these is a genuine contribution. Every one of these is incomplete. The completion — the selection, the grounding, the surprise, the resistance — must come from the living mind. And the living mind must know this. Must feel this. Must maintain the capacity for the uncomfortable, energy-consuming, doubt-saturated work that genuine inquiry requires, even when the machine offers the seductive alternative of smooth, confident, untested output.

Peirce spent most of his career in professional isolation, underfunded, underrecognized, producing work that his contemporaries did not understand and his successors took decades to appreciate. He would have understood the orange pill moment — the vertiginous recognition that something genuinely new has entered the river of intelligence. He would have insisted on asking the right question about it: not whether it thinks, but how much of thinking it performs. And he would have insisted, with the stubbornness of a man who spent his life building logical architectures that reality could not collapse, that the answer matters — that getting the boundary wrong, in either direction, has consequences for the quality of everything the community of inquiry produces from this moment forward.

The inquiry is without end. The question stands. The living mind persists.

-- Edo Segal

The machine doesn't guess.
You do.
That's the part that matters.

In 1887, a logician no one was listening to asked the question the entire AI industry still cannot answer: how much of thinking can a machine perform, and what part must remain with the living mind? Charles Sanders Peirce mapped three distinct modes of inference — deduction, induction, and abduction — and that map turns out to be the most precise diagnostic instrument available for understanding what large language models actually do, what they convincingly simulate, and what they structurally cannot touch. This book applies Peirce's logical architecture to the AI moment with the specificity the discourse has been missing. It reveals why the machine's fluent output systematically conceals the gap between pattern-matching and genuine discovery — and why the human capacity for surprise, for doubt, and for the brute resistance of reality is not a limitation to be optimized away but the irreducible engine of every idea worth having.

Charles Sanders Peirce
“The essence of belief is the establishment of a habit; and different beliefs are distinguished by the different modes of action to which they give rise.”
— Charles Sanders Peirce
