In his 2025 Philosophy & Technology paper, David Manheim applied Peirce's semiotic classification to large language models and concluded that they exist in what he calls a "hall of mirrors." The training data is symbolic — text, code, mathematical notation. The processing is symbolic — the manipulation of token sequences according to statistical patterns. The output is symbolic — more text, more code, more notation. The system does not process icons (structural resemblances) or indices (existential connections to objects). It encounters representations of reality, not reality itself. The reflections in the hall can be extraordinarily convincing — sharp, detailed, internally consistent — but they are still reflections. They do not reach through the glass to touch the world they reflect.
The hall of mirrors is Manheim's name for the semiotic closure that makes AI output simultaneously fluent and potentially ungrounded. The fluency is real — the symbolic manipulation is sophisticated, the patterns are well-extracted, the output is well-formed. The groundedness is questionable — the symbols are not anchored by the iconic and indexical connections that would ensure correspondence with the world.
The fluency and the ungroundedness are not in tension. They are complementary features of a system that excels at symbolic processing and lacks iconic and indexical grounding. The same mechanism that produces convincing output produces output that may not correspond to reality, because the mechanism is blind to the distinction — it generates symbols that fit statistical patterns, without any check against the reality the symbols purport to represent.
The human partner in collaboration must supply what the machine lacks. The human's iconic understanding — the structural intuitions built through years of domain engagement — provides the basis for evaluating whether the machine's symbolic output captures genuine structural relationships. The human's indexical connections — the direct experiential links between knowledge and world — provide the basis for testing whether claims correspond to reality.
Manheim's analysis pushes toward a further claim: that tool-using AI systems (with database access, sensors, actuators) move toward providing functional analogues of indexical grounding. A system that can query a database or interact with a physical environment has, in some functional sense, a causal connection to the world beyond its training data. Whether this functional analogue constitutes genuine indexicality in the Peircean sense — whether mediated, computational connection provides the same semiotic anchoring as direct, embodied contact — is the key open question.
Manheim's 2025 Philosophy & Technology paper "Artificial Intelligence and the Hall of Mirrors Problem" is the most developed contemporary application of Peirce's semiotic to large language models.
The analysis draws on Catherine Legg's prior work on AI and Peircean symbolicity, as well as on the broader tradition of enactivist and embodied cognition critiques of symbolic AI.
Closed symbolic environment. LLMs process symbols referring to symbols, without indexical exit to reality.
Convincing reflection. The hall of mirrors produces sharp, detailed, consistent images — but they remain reflections, not windows.
Fluency-groundedness independence. Sophisticated symbolic manipulation and failure to correspond with reality are not in tension — they coexist.
Human supplementation required. The iconic and indexical grounding the machine lacks must be supplied by the human partner.
Whether tool use and sensory interfaces allow AI systems to escape the hall of mirrors by acquiring genuine indexicality is the central debate in contemporary AI semiotics. Manheim's own view is cautious: functional analogues are improvements but do not fully reproduce the existential compulsion Peirce identified as the essence of the index.