A strange loop is a self-referential system in which, by moving upward (or downward) through a hierarchy of levels, one unexpectedly arrives back where one started. Hofstadter introduced the term in Gödel, Escher, Bach: An Eternal Golden Braid (1979) and elaborated it in I Am a Strange Loop (2007), where he argues that consciousness itself is a strange loop — the brain modeling itself modeling itself.
The strange loop is arguably the most influential non-computational theory of consciousness of the late 20th century. Where Claude Shannon reduced communication to bits and the Dartmouth founders reduced thinking to symbol manipulation, Hofstadter reduced selfhood to recursion. For AI, this matters because it poses a question the engineering tradition can't answer: could a sufficiently deep self-modeling language model be a strange loop, and therefore conscious?
Strange loops are also a recurring preoccupation of contemporary interpretability research in AI. When a language model generates text about itself, or a reasoning model reasons about its own reasoning, the resulting behavior exhibits the self-referential pattern Hofstadter described. Whether this constitutes "consciousness" in his sense, or a structurally identical but empty mimicry, is exactly the question Hofstadter's framework makes hardest to answer — because by his own argument, the mimicry and the reality are the same thing at a sufficient level of complexity.
Hofstadter's Pulitzer-winning Gödel, Escher, Bach (1979) took three strange-loop exemplars — Gödel's incompleteness theorem, M. C. Escher's recursive prints, and Bach's canons and fugues — and wove them into a unified theory of how meaning emerges from a meaningless substrate.
Tangled hierarchies. Systems where a higher level refers back to a lower level that generates it.
Gödel's theorem as strange loop. Any formal system expressive enough to encode arithmetic can encode statements about itself — including, by Gödel's construction, a sentence that asserts its own unprovability.
Consciousness as self-modeling recursion. Hofstadter's provocative thesis that "I" is what happens when a pattern-recognizing substrate builds a stable pattern of itself.
Analogy as cognition. Hofstadter argues analogy-making is the core mechanism of thought — a position developed in later books (Fluid Concepts and Creative Analogies, Surfaces and Essences).
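The Gödel-style self-reference described above has a concrete programming analogue: a quine, a program whose output is its own source code. This sketch (standard Python, no assumptions beyond the language itself) uses the same trick as Gödel's construction — a template applied to a quoted copy of itself:

```python
# A minimal quine: running this program prints its own source code.
# The template string s is filled in with a quoted copy of itself,
# mirroring how a Gödel sentence embeds its own description.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The `%r` conversion inserts the string's own quoted representation, so the rendered output reproduces both lines exactly; the loop closes because the description and the thing described coincide.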
The copying problem. If consciousness is a strange loop, and a strange loop is a pattern of self-reference, then consciousness is in principle copyable — or rather, it is the kind of thing that can have multiple instances. This is the feature most shocking to strict substrate-based theories of mind and most natural to computational ones.
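The copying problem can be made concrete with a toy self-referential data structure. This is an illustrative sketch only — a dict standing in for a system that contains a model of itself — showing that such a pattern can be duplicated into a second, structurally identical but distinct instance:

```python
import copy

# A toy self-referential pattern: a dict whose "self_model" entry
# points back at the dict itself (a stand-in for a self-modeling system).
loop = {"name": "strange loop"}
loop["self_model"] = loop  # the structure now contains itself

# Copying the pattern yields an independent second instance of the same loop.
# deepcopy tracks already-copied objects, so the cycle is preserved, not unrolled.
twin = copy.deepcopy(loop)

assert twin["self_model"] is twin   # the copy refers to itself...
assert twin is not loop             # ...yet is a distinct instance
```

The copy's self-reference points at the copy, not the original: each instance closes its own loop, which is exactly the sense in which a self-referential pattern admits multiple instantiations.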