The fallacy occurs whenever a useful abstraction — matter, market, intelligence, consciousness — is elevated to the status of a fully determinate entity and the concrete processual reality it was abstracted from is forgotten or suppressed. Whitehead diagnosed the error in classical physics (which treated 'matter' as a simple substance rather than a complex theoretical construct) and in ordinary language (whose subject-predicate grammar smuggles substance commitments into every declarative sentence). Applied to artificial intelligence, the fallacy structures the entire debate: enthusiasts and skeptics agree that intelligence is a fixed property a system either possesses or lacks, and dispute only which is the case. Whitehead's correction dissolves the debate rather than settling it.
The fallacy was introduced in Science and the Modern World (1925) as part of Whitehead's historical critique of seventeenth-century scientific metaphysics. He argued that figures like Descartes and Newton, though they were brilliant mathematicians and physicists, had unwittingly bequeathed to Western thought a picture of nature as composed of inert material substances bearing properties — a picture adequate for certain engineering purposes but disastrously inadequate as a general metaphysics. The fallacy was their error, and it had compounded across three centuries.
Whitehead's critique was not anti-scientific. He insisted that abstractions are indispensable; science cannot proceed without them. The fallacy occurs not in the use of abstractions but in their reification — when the map is treated as though it were the territory, when the statistical model is treated as though it were the phenomenon it models. Contemporary MIT Press scholarship has explicitly applied this warning to computational cognitive science, arguing that literal interpretation of computational models recapitulates Whitehead's error.
The AI discourse exhibits the fallacy with diagnostic clarity. When commentators ask 'Is AI intelligent?' or 'Is AI conscious?' they assume that intelligence and consciousness are determinate substances susceptible to attribution or withholding. The assumption is rarely examined. It structures the debate so completely that both sides — enthusiasts who attribute and skeptics who withhold — agree on its form. Whitehead's framework shows that the form itself is the error. Intelligence is not a substance but a character of certain kinds of actual occasions, varying in degree and mode across the processual complexity of the universe.
The practical stakes are not academic. Policy documents, legal frameworks, and corporate governance structures that rely on the substance-framing produce categories that cannot capture what actually occurs when computational systems and biological systems interact. A regulation written to govern systems that 'are' or 'are not' intelligent will either over-regulate trivial processes or under-regulate genuinely novel ones, because the regulated phenomenon does not have the form the regulation assumes.
The phrase 'fallacy of misplaced concreteness' first appears in Chapter III of Science and the Modern World, the published version of Whitehead's 1925 Lowell Lectures, in the context of his critique of seventeenth-century scientific materialism.
The fallacy is closely related to, but distinct from, what Whitehead called the 'fallacy of simple location' — the error of assuming that a bit of matter has a simple, unambiguous position in space and time. Both errors share a common root: the refusal to acknowledge that what appears simple and concrete is actually the product of a complex process of abstraction.
Abstractions are necessary but not concrete. Science and ordinary thought depend on abstractions, but the abstractions are not the reality they describe.
The copula smuggles metaphysics. The word 'is' in 'AI is intelligent' attributes a substance that, on Whitehead's analysis, does not exist in the form assumed.
Both sides of the AI debate commit the fallacy. Enthusiasts who attribute intelligence and skeptics who withhold it agree that intelligence is a substance; they disagree only about attribution.
Dissolution rather than resolution. The fallacy cannot be resolved within its own framing; it must be dissolved by correcting the framing.
Processual reframing opens tractable questions. Once intelligence is understood as a character of process, questions about AI become empirically investigable rather than metaphysically stuck.
Some philosophers of mind argue that Whitehead overstates his case: some abstractions genuinely do pick out real kinds, and treating them as such is not a fallacy but good scientific practice. Whitehead's defenders respond that this objection conflates two senses of 'real kind', the pragmatic kind useful for prediction and the metaphysically fundamental kind, and that the fallacy concerns only the second.