Abduction is Peirce's name for the inference that generates new hypotheses. Unlike deduction (which extracts what premises already contain) or induction (which extends observed patterns to unobserved cases), abduction proposes something that has not been observed — a new pattern, a new structure, a new explanation. The logical form is deceptively simple: the surprising fact C is observed; but if A were true, C would be a matter of course; hence, there is reason to suspect A is true. Peirce regarded abduction as the most philosophically neglected and most important of the three inferential modes, because it is where new ideas actually come from. The AI moment has made abduction the central diagnostic question: can machines perform it, or only simulate it?
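The schema stated above can be set out as an explicit rule of inference, following Peirce's own wording in the 1903 Harvard lectures (CP 5.189). Note that the conditional is subjunctive ("would be a matter of course"), not the material conditional of deductive logic:

```latex
% Peirce's abductive schema (CP 5.189), set as a three-line rule.
\[
\begin{array}{l}
\text{The surprising fact } C \text{ is observed;} \\
\text{but if } A \text{ were true, } C \text{ would be a matter of course;} \\
\hline
\text{hence, there is reason to suspect that } A \text{ is true.}
\end{array}
\]
```

The conclusion is not that A is true, only that A has become a candidate worth testing — which is why abduction licenses inquiry rather than belief.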
There is a parallel reading of abduction that begins not with the logic of discovery but with the economics of creative labor. What Peirce called the lumen naturale — the capacity for right guessing — has historically been the irreducible human contribution in fields from scientific research to design. It was the thing you couldn't routinize, the reason certain roles commanded premium compensation. The abductive moment was where human judgment was genuinely necessary.
The question is not whether AI systems perform "genuine" abduction in some philosophical sense, but whether they perform it well enough that institutions stop paying humans to do it. When a language model generates a plausible hypothesis that resolves a structural problem, the relevant fact is not the metaphysical status of its inference but the economic consequence: one fewer person needed in the room. The distributed model — surprise in the human, hypothesis from the machine, judgment back to the human — sounds like augmentation, but the actual trajectory is elimination. First you need three humans and one machine. Then one human and one machine. Then periodic human auditing of machine-generated hypotheses. Then algorithmic filtering of the hypotheses most likely to be wrong. The logic of discovery becomes another site of labor arbitrage, and Peirce's careful distinctions become a way of not seeing the structural shift.
The logical form of abduction conceals a profound difficulty: where does the hypothesis come from? It is not derived from the evidence. It is not a deductive consequence of any premise. It is not an inductive generalization. It arrives — from the inquirer's imagination, from what Peirce called the lumen naturale, the natural light of reason. The capacity to generate the right hypothesis, or at least one close enough to right that testing can refine it, is a brute fact about human cognition that logic can describe but cannot fully explain.
Peirce was candid about the mystery: "You cannot say that it happened by chance, because the possible theories, if not strictly innumerable, at any rate exceed a trillion — and therefore the chances are too overwhelmingly against the single true theory having been the first to occur to any man." The human mind guesses correctly more often than pure chance would predict, and the capacity for right guessing is the foundation of inquiry.
Contemporary AI systems produce outputs that have, from the human user's perspective, the phenomenological characteristics of abductive inferences. When Claude suggests an analogy that resolves a structural problem, the suggestion has the logical form of abduction: surprising fact, hypothesis, plausibility. But the abductive elements are distributed asymmetrically across the collaboration — surprise in the human, hypothesis-generation in the machine, plausibility-judgment back in the human.
Erik Larson, drawing explicitly on Peirce, argued that abductive inference constitutes an impassable barrier for current AI. The claim may be too strong, but the underlying insight is sound: the three modes of inference are distinct logical operations, and the capacity to perform one does not entail the capacity to perform another.
Peirce developed the tripartite classification of inference across the 1860s and 1870s, refining the distinction between hypothesis (later renamed abduction) and induction through successive papers. His mature treatment, in lectures and unpublished manuscripts from the 1900s, gave abduction its fullest articulation as the logic of discovery.
The concept has been rediscovered repeatedly — by philosophers of science studying theory formation, by cognitive scientists studying creative problem-solving, and most recently by AI researchers asking whether machines can perform genuinely novel inference.
Not derived from evidence. The hypothesis goes beyond the observation in a way categorically different from induction — inventing a pattern rather than extending one.
The only inference that produces novelty. Deduction clarifies; induction generalizes; only abduction proposes what has not been seen.
Three required elements. Genuine surprise, candidate hypothesis, judgment of plausibility — all three must be present and connected.
Distributed in human-AI work. Surprise belongs to the human, hypothesis-generation to the machine, plausibility-judgment back to the human.
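The three-way division of labor just listed can be sketched as a toy workflow. This is purely illustrative — every name here (`Hypothesis`, `generate_hypotheses`, `distributed_abduction`, the `judge` callable) is hypothetical, standing in for the roles the text assigns, not for any real system:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    claim: str

def generate_hypotheses(surprising_fact: str) -> list[Hypothesis]:
    # The machine's step: propose candidate explanations A such that,
    # if A were true, the surprising fact C would be a matter of course.
    # (A stub; a real system would call a language model here.)
    return [Hypothesis(f"candidate {i}: explains {surprising_fact!r}")
            for i in range(3)]

def distributed_abduction(surprising_fact: str, judge) -> list[Hypothesis]:
    # Surprise (noticing the fact) and plausibility-judgment (the `judge`
    # callable) remain with the human; only hypothesis-generation is
    # delegated to the machine.
    return [h for h in generate_hypotheses(surprising_fact) if judge(h)]

# Usage: the human judge keeps only the first candidate.
kept = distributed_abduction("anomalous reading",
                             judge=lambda h: h.claim.startswith("candidate 0"))
```

The point of the sketch is structural: no single party performs the whole inference, which is exactly why calling the machine's contribution "abduction" simpliciter is contested.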
Whether large language models perform genuine abduction or only its surface simulation is the central unresolved question of AI epistemology. Larson's strict reading says no; functionalist readings say the distinction is hard to maintain once the machine's outputs are indistinguishable from human abductions. Peirce's framework sharpens rather than resolves the debate by specifying exactly which elements are present and which are missing.
The philosophical question and the economic question operate at different layers, and both weightings matter. On whether current AI performs Peircean abduction in the strict sense: Larson is mostly right (75%). The lumen naturale problem is real — the machine doesn't experience surprise, doesn't judge plausibility from lived understanding, generates hypotheses through statistical pattern-matching rather than imaginative leap. The three elements are present but transformed.
On whether this matters institutionally: the contrarian view dominates (85%). What gets purchased is abductive function, not abductive essence. If the machine produces hypotheses that survive human vetting at a rate comparable to that of junior researchers, the metaphysical distinction becomes economically irrelevant. The distributed model is real but unstable — it describes a transitional arrangement, not an equilibrium. The question is not whether humans retain some role but how many humans, and in what configuration.
The reframing the topic needs: abduction is both an inferential structure and a labor category. Peirce gives us the logical anatomy; political economy gives us the distribution of returns. The right view is that Edo's framework correctly names what's novel about the inference (the machine contributing the hypothesis-generation step) while the contrarian reading correctly names what's structural about the shift (the long-run pressure on roles organized around creative guessing). Both are true simultaneously. The philosophical precision matters for understanding what's happening; the economic reading matters for predicting who bears the cost.