CONCEPT

Consciousness and the Wax Apple Problem

The structural confusion at the heart of the AI discourse — mistaking outputs that resemble the products of consciousness for evidence of consciousness itself.

The wax apple distinction, applied to consciousness, produces a specific diagnosis of the AI moment. Large language models produce sentences that look like the products of a conscious being considering a question. The sentences are fluent, contextually sensitive, sometimes startlingly apt. But the production process is a statistical prediction over tokens — not the experience of a being to whom the sentences mean what they say. The wax apple of understanding is a formidable engineering achievement. It is not understanding. And the difference — invisible on the surface, absolute underneath — determines whether the outputs deserve the moral attention appropriate to the products of a conscious mind or the moral attention appropriate to the outputs of a tool.

In the AI Story

The question of whether AI is conscious has generated more heat than light precisely because most of that heat comes from a confusion the wax apple distinction dissolves. The question is typically posed as: given that AI produces outputs that look like the products of consciousness, how can we be sure it isn't conscious? Midgley's framework reverses the burden of proof. Consciousness is what we know about from the inside, from the direct first-person experience of being creatures that think and feel. We attribute consciousness to other humans because they share our biology, our evolutionary history, and our behavioural repertoire. We attribute varying degrees of consciousness to animals because they share significant portions of the same continuities. In each case, the attribution rests on continuity.

Large language models share none of these continuities. They are not made of the same stuff. They were not built by the same process. They do not inhabit bodies, metabolise energy, reproduce, suffer injury, or face death. The features of biological life we have every reason to associate with consciousness — embodiment, metabolism, evolutionary history, vulnerability — are entirely absent. The inference from 'produces language' to 'is conscious' requires bridging a gap for which no evidence exists and no theory provides a crossing.

Thomas Nagel's question 'What is it like to be a bat?' identified the irreducible core of consciousness: the 'what it is like' quality, the subjective character of experience that no objective description can capture. Asked of the large language model, the honest answer is: as far as anyone can tell, it is not like anything to be a language model. There is processing, but processing without experience is just mechanism. The wax apple of consciousness is the output that appears to come from a 'what it is like' when no such experience is present.

Midgley extended the analysis to identify the homunculus fallacy — the tendency to smuggle a little person into the machine to explain how the machine does what it does. The machine produces intelligent-sounding language. How? Well, there must be something in there that understands — a homunculus, a ghost, a consciousness lurking behind the predictions. But nobody is home. The outputs are good because the statistical model is good — because the training data is vast, the architecture is sophisticated, and the patterns of human language are more regular than most people assumed. The regularity of language is a genuine discovery. It does not tell us that a system exploiting that regularity to generate text is itself a mind.
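
To see the point in miniature, here is a deliberately toy sketch in Python of the kind of mechanism at issue: a bigram model that emits fluent-looking word sequences by sampling from raw co-occurrence counts. The corpus, the successors table, and the next_word function are hypothetical illustrations, not the internals of any real system, and the sketch is a caricature rather than a description of how large language models are built.

    # Toy sketch only: a bigram "language model" that predicts the next
    # word from raw co-occurrence counts. Corpus and names here are
    # hypothetical illustrations, not any real system's internals.
    import random
    from collections import Counter, defaultdict

    corpus = ("the apple looks real the apple is wax "
              "the wax is not fruit").split()

    # Count which word follows which: statistics, not comprehension.
    successors = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev][nxt] += 1

    def next_word(prev):
        """Sample a next word in proportion to observed frequency."""
        counts = successors[prev]
        if not counts:                        # no observed successor
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Each step of the "sentence" is a statistical draw, nothing more.
    word, output = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Nothing in this loop understands apples or wax; each word is a weighted draw from a frequency table. Scaled up by many orders of magnitude, with far longer contexts, this is, in caricature, the situation the article describes: increasingly convincing output from a process in which, at every step, the only event is a statistical draw.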

Origin

The analysis synthesises Midgley's treatment of consciousness across multiple works — particularly Beast and Man (1978), Science as Salvation (1992), and Are You an Illusion? (2014) — and applies it directly to the contemporary AI discourse. The wax apple image makes the framework concrete enough to apply at the level of specific claims about specific AI systems.

Key Ideas

The burden of proof falls on attribution. Consciousness is known from the inside; attributing it elsewhere requires continuity of substrate, history, and vulnerability, none of which AI systems share with conscious beings.

Output is not process. A sentence that looks like the product of understanding tells us about linguistic output, not about the underlying process that generated it.

The homunculus is installed, not discovered. When we cannot imagine how the outputs are so good without someone being home, the explanation is usually that the outputs exploit patterns more regular than we assumed — not that someone is home.

'What it is like' is absent. Processing without experience is mechanism; there is no evidence the machine has experience and no theory explaining how it would.

Further reading

  1. Midgley, Mary. Are You an Illusion? (Acumen, 2014).
  2. Midgley, Mary. Science as Salvation (Routledge, 1992).
  3. Nagel, Thomas. 'What Is It Like to Be a Bat?', The Philosophical Review 83:4 (1974).
  4. Searle, John. The Mystery of Consciousness (New York Review of Books, 1997).
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.