Spatial cognition is the systematic study of how humans think through spatial structures — both internally, as mental models, and externally, as diagrams, maps, timelines, and sketches. Tversky's four-decade research program establishes that spatial thinking is not a specialized subdomain of cognition but its foundation: we understand time as space (timelines), abstract relationships as space (organizational charts), causation as space (flowcharts), and arguments as space (the linear march of written text). When we gesture while speaking, we are literally thinking with our hands, encoding spatial relationships that words alone cannot carry. The framework overturns the view of cognition as primarily symbolic or linguistic and reframes it as fundamentally embodied and spatial.
There is a parallel reading in which spatial cognition's primacy becomes AI's structural disadvantage rather than a design challenge to overcome. The issue is not that LLMs fail to preserve spatial structure during training—it's that spatial understanding fundamentally doesn't scale through the mechanisms that made AI economically viable.
Consider what Tversky's framework actually implies for knowledge transmission: if expert cognition is genuinely spatial—distributed across mental maps built through embodied interaction with systems over years—then the 'flattening' into sequential text isn't a lossy compression we can engineer around. It's a phase transition. The senior engineer's spatial feel for a codebase exists at a different ontological level from any artifact that can be efficiently transmitted. When organizations replace that embodied presence with AI-mediated documentation, they're not losing fidelity in translation—they're replacing one kind of knowledge (high-resolution, spatial, evolved through action) with another kind entirely (statistical, pattern-matched, divorced from the material substrate). The economic pressure isn't toward 'better spatial interfaces' but toward work that doesn't require spatial mastery in the first place. If spatial cognition can't be cheaply distributed, then the systems that depend on it get re-architected until they don't. The real AI transformation isn't teaching models to think spatially—it's reorganizing production around what models can already do, which means eliminating the roles where Tversky's kind of cognition mattered most.
Tversky's work stands in a tradition running from Kenneth Craik's mental models through Roger Shepard's mental rotation studies to contemporary embodied cognition. What distinguishes her contribution is the insistence that spatial thinking is not an alternative to abstract thought but the substrate from which abstract thought is built. The metaphors that structure our reasoning — higher and lower, closer and farther, before and after — are spatial at their root.
The AI relevance is direct. Large language models are trained on sequential text, which preserves the linguistic surface of thought but discards much of the spatial structure that the speaker's cognition encoded. When a builder describes a flow to Claude, the words carry spatial relationships implicitly — through prepositions, temporal connectives, and narrative structure — which the model must reconstruct from statistical patterns rather than receive directly. This is why prompt engineering is difficult: the user must encode spatial structure into a sequential medium and trust the model to decode it.
Tversky's framework also illuminates why embodied understanding resists articulation. The senior engineer who feels a codebase the way a physician feels a pulse possesses knowledge that is genuinely spatial: distributed across a mental map that no documentation can fully capture. When that knowledge is transmitted through AI summaries, the summaries inevitably flatten the spatial structure into sequential prose.
For designers of AI interfaces, the implication is that future tools must accept spatial input directly — diagrams, sketches, gestures — rather than requiring users to linearize their spatial thinking into text. Every such translation loses information, and the information lost is often precisely what mattered most.
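The information loss under linearization can be made concrete with a toy sketch. Everything here is illustrative, not from Tversky's work: a hypothetical node-link diagram serialized once with its layout and once as prose, showing that proximity (a grouping the diagram's author intended) survives only in the spatial form.

```python
# Illustrative sketch (assumed names and structure): a toy architecture
# diagram as its author drew it, with layout coordinates preserved.
diagram = {
    "nodes": {
        "auth": {"x": 0, "y": 0},  # drawn apart from the others
        "api":  {"x": 2, "y": 0},  # drawn as a cluster with "db"
        "db":   {"x": 2, "y": 2},
    },
    "edges": [("auth", "api"), ("api", "db")],
}

# A prose linearization keeps the edges (who calls whom) but silently
# drops the layout, and with it the author's visual grouping.
prose = "auth calls api, which calls db"

def nearby(a, b, nodes, threshold=2.5):
    """Euclidean proximity: recoverable from the spatial form only."""
    dx = nodes[a]["x"] - nodes[b]["x"]
    dy = nodes[a]["y"] - nodes[b]["y"]
    return (dx * dx + dy * dy) ** 0.5 <= threshold

print(nearby("api", "db", diagram["nodes"]))   # True: a deliberate cluster
print(nearby("auth", "db", diagram["nodes"]))  # False: deliberately apart
```

From `prose` alone, a reader or a model can reconstruct the call chain but not the clustering; the spatial relationship was never encoded in the sequential medium, which is the asymmetry the argument above turns on.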
Tversky earned her PhD at the University of Michigan in the early 1970s and spent much of her career at Stanford, where she developed the research program that would culminate in Mind in Motion (2019). Her early work on memory for spatial arrays and later work on diagrams, narratives, and gesture progressively established that spatial structure is the connective tissue of cognition.
Space as cognitive substrate. Spatial structures are not one domain of thinking among many but the organizing medium through which most thinking occurs.
Internal and external continuity. Mental spatial models and external spatial artifacts (diagrams, maps, sketches) form a single cognitive system — they are not separate but coupled.
Gesture as spatial thought. Hand movements during speech are not communication aids but components of the cognitive process itself, encoding spatial relationships the speaker may not consciously articulate.
Metaphorical extension. Abstract concepts — time, hierarchy, causation, argument — are systematically understood through spatial metaphors that reflect the primacy of spatial cognition.
The substantive question is which spatial understanding survives commodification and which gets selected out. Tversky is fully right (100%) that spatial cognition is foundational—the evidence is overwhelming. She's also right that current text-based AI loses spatial structure in transmission. But the contrarian reading dominates (75%) on the economic trajectory: organizations facing cost pressure don't primarily invest in better spatial interfaces. They restructure work to not require the spatial mastery those interfaces would preserve.
This breaks into layers. For routine spatial reasoning—flowcharts, timelines, organizational hierarchies—the Tversky-informed design agenda is correct and feasible (60/40 in favor of better tools). These are spatial structures that already externalize well and could transfer to AI systems through diagram-native interfaces. But for deep embodied spatial knowledge—the kind built through years of material interaction with a specific system—the contrarian view is stronger (70/30). That knowledge doesn't just resist linearization into text; it resists any form of cheap distribution. The engineer's spatial feel for the codebase is inseparable from the history of building it.
The synthesis isn't 'preserve all spatial knowledge' or 'accept all flattening'—it's recognizing that AI creates economic selection pressure favoring externalizable spatial structures while eliminating dependence on the non-externalizable kind. The question becomes: which spatial knowledge is worth the cost of human cultivation, and which systems do we redesign to not require it? Tversky names what we lose. The contrarian view names why we'll lose it anyway. Both are needed.