INTELLIGENCE IS A SUBSTANCE is, in Lakoff's analysis, among the most consequential hidden metaphors in the AI debate. English speakers say machines have intelligence, measure how much intelligence a system possesses, compare the intelligence of different systems as one compares horsepower. They ask whether a system is intelligent enough to perform a task, as though intelligence were a quantity measurable against a threshold. Intelligence can be artificial or natural, general or narrow, strong or weak. It is tested, measured, benchmarked. Systems possess it or lack it. Every expression treats intelligence as a commodity: something existing in quantities, coming in grades, manufacturable, measurable, transferable, comparable on a single scale. The metaphor is so deeply embedded that most participants in the AI discourse do not recognize it as a metaphor at all.
The SUBSTANCE frame is the conceptual foundation on which the entire project of artificial general intelligence rests. AGI presupposes that intelligence is a single unified substance that can be manufactured at sufficient quantity and quality to match or exceed the human version. The presupposition is not empirical. It is metaphorical. Every claim about whether a system has achieved "human-level intelligence" presupposes that human intelligence is a level — a point on a single scale — rather than an ecology of context-dependent capabilities enacted by a particular kind of body in a particular kind of world.
The alternative Lakoff's framework proposes, drawing on broader embodied-cognition research, is that intelligence is not a substance but a process. Intelligence is not something an entity has; it is something an entity does: the activity of an organism interacting with an environment, not a material sitting inside a skull waiting to be weighed. A chess grandmaster is intelligent in the context of chess; place her in a forest and ask her to navigate by the stars, and her chess intelligence provides no leverage. A master farmer is intelligent in the context of agriculture; seat him before a differential equation and his agricultural intelligence is irrelevant. Intelligence is always intelligence-in-context, intelligence-for-a-purpose, intelligence-as-enacted-by-a-specific-body-in-a-specific-environment.
The SUBSTANCE metaphor erases this contextuality. It treats intelligence as a general-purpose commodity measurable on a single scale, transferable between containers, comparable across radically different kinds of systems. The erasure has specific consequences for AI research and policy. Benchmarks for measuring AI systems presuppose the single-scale framework; they would not make sense within a context-dependent view. Claims that specific systems have or have not achieved human-level performance presuppose that human performance is a level on a scale; these claims are incoherent within a process view. The alignment problem, the concern that a sufficiently intelligent system might pursue goals misaligned with human values, presupposes that intelligence is a substance that can be concentrated apart from the contextual grounding that gives human intelligence its value orientations.
Escaping the SUBSTANCE frame does not require denying that AI systems perform impressive cognitive operations or that comparisons between systems can be meaningful. It requires recognizing that the comparisons are domain-specific rather than measurements of a general substance, and that the impressive performance of AI systems in specific domains does not aggregate into a general-purpose intelligence commodity that can be scaled up to approach or exceed human intelligence in the abstract. The framing shift has substantial implications for how AI is understood, deployed, and governed — implications the dominant substance framing systematically obscures.
The SUBSTANCE metaphor for intelligence has roots in nineteenth-century efforts to measure mental capacity, which culminated in IQ testing, and in twentieth-century behaviorist and cognitivist psychology. Its application to AI crystallized with Alan Turing's imitation game and intensified through successive waves of AI research, each of which presupposed that intelligence was the kind of thing that could be produced in machines in measurable quantities.
Source domain: commodity. Intelligence is mapped onto material substances that exist in quantities and grades.
Single-scale measurement. The frame makes it natural to measure all intelligences on a common scale, regardless of their context or embodiment.
AGI foundation. The frame underlies the project of artificial general intelligence as a coherent goal.
Process alternative. The embodied-cognition framework proposes that intelligence is an activity rather than a substance — always contextual, always enacted.
Governance implications. Benchmarks, comparative claims, and alignment frameworks all presuppose the substance frame in ways that may not survive scrutiny.