A computer simulation of a hurricane does not produce rain. A computer simulation of photosynthesis does not produce glucose. A computer simulation of combustion does not produce heat. In each case, the simulation may be perfect — capturing every relevant variable, modeling every interaction with mathematical precision. The simulation is useful; scientists study simulated hurricanes to predict real ones. But the simulation and the thing simulated are different kinds of phenomena, and no improvement in the simulation's accuracy will cause it to cross the ontological boundary between modeling a process and instantiating it. Searle's claim is that the same distinction applies to minds. A computational simulation of understanding — a system that captures the formal structure of how understanding manifests in behavior — does not constitute understanding, any more than a simulation of a hurricane constitutes a hurricane.
The distinction seems clear in the abstract. It becomes treacherous in practice, because the specific simulation under discussion — the simulation of linguistic understanding by large language models — produces outputs indistinguishable from those of genuine understanding for most practical purposes. The hurricane simulation does not produce rain, and this is immediately obvious. The understanding simulation produces coherent, contextually appropriate, often brilliant text, and this is not immediately distinguishable from the coherent, contextually appropriate, often brilliant text that genuine understanding produces. The indistinguishability is the trap.
When the simulation is good enough, the distinction between simulation and reality stops being perceptually available. The observer encounters the output and attributes the reality, not because the observer is foolish but because the attribution is the default response of a cognitive system that evolved in a world where the output was always produced by the reality. Coherent language meant understanding. Now it may not. But the perceptual system has not been updated.
Searle's famous analogy from the 1980 paper: "Nobody supposes that the computational model of rainstorms in London will leave us all wet." The rhetorical force of the analogy is that no one is confused about rainstorms. The absurdity is immediate. When the same logic is applied to minds — when simulating thinking is proposed as equivalent to thinking — the absurdity should be equally immediate, but somehow it is not. Why not? Because the outputs are linguistic, and the human cognitive system treats linguistic outputs as evidence of a mind producing them. The projection problem makes the simulation trap especially sticky in the case of language.
The distinction cuts against the computationalist assumption that dominated cognitive science from the 1950s onward — that mind is multiply realizable, that any system running the right program would constitute a mind, that the specific substrate is irrelevant. Searle's argument is not that simulation is useless; simulations are scientifically valuable, and they help us understand phenomena we cannot otherwise access. The argument is that simulation is not duplication. Modeling a process is not instantiating it. The model and the thing modeled share structure; they do not share substance.
Searle deployed the simulation-duplication distinction in the 1980 Chinese Room paper and developed it further in subsequent works, most systematically in Minds, Brains and Science (1984). The distinction generalizes the Chinese Room argument beyond the specific case of language to any claim that computational simulation constitutes real instantiation.
The argument was aimed at a specific target: the claim, common in 1970s AI research, that programs modeling cognitive processes would constitute those processes. Schank's story-understanding programs, Newell and Simon's General Problem Solver, the computational theory of mind defended by Fodor — all rested on the assumption that modeling and instantiating were not categorically distinct. Searle's distinction challenged the assumption at its root.
The rainstorm test. A simulation of rain does not make anyone wet. The test applies to any simulation: does the simulation produce the physical reality of the simulated phenomenon, or does it produce a model of that phenomenon?
Structure vs. substance. Simulation captures the structural relationships between variables. Duplication produces the thing itself. These are different achievements, requiring different conditions.
The indistinguishability trap. When simulations are good enough, the distinction between simulation and reality stops being perceptually available. The observer sees the outputs and attributes the reality. The cognitive system that makes this attribution evolved in a world where such attributions were reliable; they no longer are.
The language-specific problem. Simulations of physical processes do not produce the physical effects. Simulations of linguistic processes produce linguistic outputs. The outputs are the same whether produced by simulation or by real understanding. This is why the simulation trap bites harder for language than for any other case.
The ontological boundary. The boundary between simulation and duplication is not quantitative but categorical. No improvement in simulation quality crosses the boundary, because crossing would require a different kind of process, not a better version of the same process.