Copycat's task was deceptively simple: given that 'abc' changes to 'abd,' what does 'ijk' change to? Any child answers 'ijl' without effort. But solving the problem requires perceiving the relevant abstractions ('last letter,' 'successor,' 'replace') in the raw material, deciding fluidly which features matter, and constructing a mapping that adjusts itself to the specific problem. Copycat's architecture deployed hundreds of small independent agents — codelets — exploring the problem space in parallel, competing and cooperating, building and tearing down representations, gradually converging on mappings that satisfied an emergent sense of coherence. The program was stochastic and deeply parallel. Run it twice on the same problem and it might produce different answers, just as two humans might parse 'iijjkk' differently depending on which features caught their attention first.
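To make that control structure concrete, here is a minimal sketch of a codelet-and-coderack loop in Python. The names Codelet and Coderack and the temperature idea come from the published architecture; everything else (the urgency exponent, the workspace interface, the step budget) is an invented stand-in, a toy rather than Copycat's actual implementation.

```python
import random

# A minimal sketch of Copycat-style control flow: codelets on a coderack,
# chosen stochastically by urgency, with a temperature that sharpens the
# selection as the workspace becomes more coherent. The class names follow
# the published architecture; the details are invented for illustration.

class Codelet:
    """A small agent: does one bit of work, may post follow-up codelets."""
    def __init__(self, action, urgency):
        self.action = action    # callable: workspace -> iterable of new Codelets
        self.urgency = urgency  # positive weight for stochastic selection

class Coderack:
    """Pending codelets; parallelism is simulated by random interleaving."""
    def __init__(self):
        self.codelets = []

    def run_one(self, workspace, temperature):
        # High temperature -> selection is nearly uniform (more exploration);
        # low temperature -> selection sharply favors urgent codelets.
        sharpness = 1.0 + (100.0 - temperature) / 50.0
        weights = [c.urgency ** sharpness for c in self.codelets]
        chosen = random.choices(self.codelets, weights=weights, k=1)[0]
        self.codelets.remove(chosen)
        self.codelets.extend(chosen.action(workspace))

def solve(workspace, seed_codelets, steps=1000):
    rack = Coderack()
    rack.codelets.extend(seed_codelets)
    for _ in range(steps):
        if not rack.codelets:
            break
        # In Copycat, temperature falls as workspace structures cohere; here
        # we assume the workspace object reports its own coherence (a stand-in).
        rack.run_one(workspace, temperature=workspace.temperature())
    return workspace
```

Because every selection is a weighted coin flip, two calls to solve on the same problem can build different structures and settle on different answers, which is exactly the run-to-run variability described above.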
The critical feature that separated Copycat from every other AI system of its era — and from large language models of ours — was that its representations were not fixed. They reshaped themselves during the process of problem-solving. The concept of 'letter' could expand to include 'group of letters.' The concept of 'last' could shift from positional to structural. The concept of 'successor' could be reinterpreted from alphabetic to some other ordering principle. The reshaping was driven by the problem itself and produced representations that had not existed before the problem was encountered.
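That fluidity can be caricatured in the same sketch style. In the published model, concepts live in a network (the Slipnet), and under pressure one concept can 'slip' to a linked neighbor, so that 'letter' comes to cover 'group of letters.' The probability formula and numbers below are invented for illustration; only the qualitative idea follows the model: slippage is easier between closely linked concepts, at high temperature, and for shallower concepts.

```python
import random

# Toy illustration of conceptual slippage between linked concepts.
# The formula is invented; the qualitative behavior follows Copycat.

class Concept:
    def __init__(self, name, depth):
        self.name = name
        self.depth = depth   # conceptual depth: deeper concepts resist slippage
        self.links = {}      # neighbor Concept -> link length in [0, 100]

    def link_to(self, other, length):
        self.links[other] = length
        other.links[self] = length

def slippage_probability(src, dst, temperature):
    """Chance that src slips to dst; easier when the concepts are closely
    linked and the workspace is still unsettled (high temperature)."""
    length = src.links.get(dst)
    if length is None:
        return 0.0
    closeness = (100 - length) / 100
    resistance = src.depth / 100
    return closeness * (temperature / 100) * (1 - resistance)

letter = Concept("letter", depth=40)
group = Concept("group", depth=80)
letter.link_to(group, length=30)

# Early in a run (temperature 90) 'letter' may well slip to 'group';
# late in a run (temperature 20) the same slippage is far less likely.
for temp in (90, 20):
    p = slippage_probability(letter, group, temp)
    print(f"temperature={temp}: slippage probability {p:.2f}, "
          f"slipped={random.random() < p}")
```

The design point is that nothing here is a lookup in a fixed representation: whether 'letter' and 'group' count as the same concept is decided anew, probabilistically, inside each run.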
Copycat's research program was eclipsed commercially and culturally by systems that pursued exactly the opposite strategy: frozen representations, combinatorial recombination, statistical activation at superhuman scale. The systems that won the market prioritized performance over depth, breadth over fluidity, and produced outputs that convinced millions of users that understanding was present. They passed the market test, the user-satisfaction test, the practical-utility test. They failed only the test Hofstadter cared about most: whether the process that produced the outputs was the same kind of process that produced genuine understanding.
The irony is acute. The research program Hofstadter pursued for decades — prioritizing understanding over performance, depth over breadth, fluid concepts over frozen representations — produced outputs that were far less impressive than Claude's. But the process that produced them was, in Hofstadter's framework, closer to the process that produces genuine understanding in minds that actually understand what they are doing. Copycat got the dynamics right. Its concepts were alive. They changed under pressure.
Hofstadter began the project in 1983 in a cramped lab at the University of Michigan. His graduate student Melanie Mitchell joined shortly after and carried the work through her doctoral dissertation. The program took five years of painstaking work before it could reliably solve simple analogy problems. The architecture was elaborated in Mitchell's Analogy-Making as Perception (MIT Press, 1993) and in Hofstadter's own Fluid Concepts and Creative Analogies (1995).
Codelet architecture. Hundreds of small, independent agents exploring the problem space in parallel rather than a single deterministic pipeline.
Stochastic convergence. Different runs on the same problem can produce different answers, mirroring human variability.
Representational fluidity. Concepts reshape themselves during processing rather than being retrieved from a fixed space.
Narrow but deep. Copycat operated in a tiny microdomain but modeled the kind of cognition that generalizes beyond any one domain.
The road not taken. Copycat represents an alternative path AI research might have followed — cognitively deep, commercially small.