Expert mental representations are the cognitive structures that distinguish the expert from the merely experienced practitioner. They are not stored propositions but rich, flexible, deeply interconnected internal models of a domain that encode not just what things are but what they mean, imply, and demand. The chess master perceives a board not as an array of individual pieces but as a small number of meaningful chunks laden with strategic implication. The surgeon feels the difference between healthy and diseased tissue through proprioceptive signals invisible to the observer. These representations cannot be transferred directly; they must be constructed from the inside, through the specific friction of deliberate practice. AI's capacity to produce expert-level output without requiring that construction is the central concern when Ericsson's framework is applied to the present moment.
The research foundation for mental representations comes from de Groot's 1946 chess studies, extended in the 1970s by Chase and Simon's work on chunking. When chess masters and novices were shown a board from an actual game for five seconds and asked to reproduce it, the masters' performance was dramatically better. When the same masters were shown a random arrangement that could not arise from play, their advantage vanished. The superior memory was not general but domain-specific and structure-dependent — evidence that what the masters possessed was not raw mnemonic power but an elaborate library of meaningful patterns that made the board perceptually legible in ways the novice could not access.
Mental representations are both declarative and procedural, both semantic and embodied. The radiologist does not merely know that a mass with particular characteristics suggests malignancy; she perceives the mass as suggestive, the perception arriving pre-analyzed and carrying diagnostic weight. The senior engineer does not merely know that certain code patterns indicate architectural fragility; she feels the codebase's pulse, to borrow the phrase Edo Segal draws from his own engineering experience. This feeling is not mystical. It is the output of a cognitive architecture so elaborate that it operates below the level of conscious articulation, producing evaluations the expert experiences as intuition.
The critical property of these representations, for the AI transition, is their transferability. They are flexible cognitive structures that carry over to related problems, allowing the evaluation of novel positions by analogy to stored patterns and the handling of unexpected complications by drawing on a deep understanding of principles. This transferability is built through the systematic variation that characterizes deliberate practice: encountering the same deep principles in many different surface configurations, which forces the representations to become abstract enough to apply across novel situations. When AI removes both the struggle and the variation of implementation work, the transferable representations stop being built even as output quality is maintained.
The practical consequence, documented in the emerging empirical literature, is a new kind of cognitive deficit: practitioners whose current performance is high (because the tool is competent) and whose underlying capability is low (because the conditions under which capability develops have been removed). When the tool fails or the situation is novel, the practitioner discovers that the representations needed to handle it independently were never constructed. Hosanagar's 2025 report on endoscopists whose adenoma detection rates dropped from 28% to 22% when AI assistance was removed describes exactly this pattern observed in clinical practice.
The concept of mental representation has a long intellectual history in cognitive science, running from Kenneth Craik's 1943 proposal that the brain constructs 'small-scale models' of reality through Allen Newell and Herbert Simon's information-processing framework. Ericsson's specific contribution was to apply the framework to expert performance and to demonstrate empirically how the representations of experts differ in structure and depth from those of novices in the same domain.
The concept has been extended and refined through decades of cross-domain research, but the core finding has remained stable: what separates experts from others is not raw cognitive power but the domain-specific architecture they have constructed through years of effortful, feedback-rich engagement.
Pattern libraries, not raw memory. Experts' apparent memory superiority is entirely dependent on the structural meaningfulness of the material — evidence that what is stored is pattern, not data.
Below-conscious operation. Mature representations operate below the level of conscious articulation, producing evaluations experienced as intuition that are actually the output of elaborate pattern-matching.
Non-transferable from outside. Representations must be built from the inside through effort; they cannot be taught in any conventional sense because their essence is the internal model, not the observable performance.
Variation-dependent. Transferable representations are built through systematic variation — encountering deep principles in many surface configurations, which AI-mediated work systematically eliminates.
Invisibility of their absence. A practitioner can produce expert-level output through AI assistance without possessing the representations expert production normally implies, and the gap is invisible until a situation demands independent judgment.
Critics of strong mental-representation accounts, following the embodied and enactive cognition traditions (Andy Clark, Evan Thompson), argue that expertise is less about internal models and more about skillful coupling with the environment. Ericsson's framework was compatible with embodied cognition in practice (his account of surgical expertise is thoroughly proprioceptive), but it sometimes used representational vocabulary in ways that invited stronger internalist readings than the evidence required.