The most consequential distinction Miller's framework draws is between compression that was built through effortful recoding and compression that was received as a finished product. Both kinds of chunks occupy the same amount of space — one slot in working memory. Both enable the holder to function effectively under routine conditions. But they are not the same thing. An earned chunk preserves the structural knowledge of its contents. The person who built it can decompose it when necessary, reach inside, understand why it works, and reconstruct it for novel conditions. A borrowed chunk carries the surface representation without the internal structure. It is a label occupying a slot that looks, from the outside, exactly like the corresponding chunk in an experienced practitioner's mind. The difference is invisible under normal operations and catastrophic under abnormal ones. When a system built with genuine chunks encounters unexpected failure, the developer can decompose the relevant chunk, inspect its sub-components, identify the inconsistency, and repair it. This is what debugging actually is: selective decompression driven by failure signals. When a system built with labels encounters the same failure, the developer cannot decompress because there is nothing to decompress into.
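To make "selective decompression" concrete, here is a minimal sketch (the function names and the scenario are hypothetical, invented for illustration): the same computation written first as a single compressed chunk, then unfolded into the sub-components that a developer who owns the chunk can inspect when a failure signal arrives.

```python
# Compressed form: one chunk. When it fails, there is nothing to reach inside.
def normalize(xs):
    return [(x - min(xs)) / (max(xs) - min(xs)) for x in xs]

# Decompressed form: the same chunk unpacked into inspectable sub-components.
def normalize_unpacked(xs):
    lo = min(xs)     # sub-component: lower bound
    hi = max(xs)     # sub-component: upper bound
    span = hi - lo   # sub-component: the one that breaks on constant input
    assert span != 0, "constant input would divide by zero in the compressed form"
    return [(x - lo) / span for x in xs]
```

Under routine inputs the two versions behave identically; only when a failure signal arrives (here, a constant input like [5, 5, 5]) does ownership of the sub-components pay off, because each one can be inspected and repaired in isolation.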
The metaphor that captures this distinction most precisely is the difference between a map you drew yourself and a map someone handed you. Both get you from point A to point B. Both occupy roughly the same cognitive space. But the map you drew — tracing the roads yourself, noting landmarks, correcting wrong turns — gave you a spatial understanding that the borrowed map did not. When the road is closed, when the bridge is out, when you need an alternative route through unfamiliar terrain, the self-drawn map is a generative representation. You can improvise from it. The borrowed map is static. You can follow it but cannot riff on it. The distinction matters only when things go wrong. Things always eventually go wrong.
Every previous cognitive compression technology created some form of borrowed compression, but those compressions remained manageable because the tool was reliable, available, and transparent in its failure modes. When a calculator's battery dies, you know it. When GPS loses signal, you know it. The failure is visible, and the fallback — less efficient but functional — remains available because the older skill, though atrophied, has not been entirely lost. AI-mediated compression generates failure modes that are neither as visible nor as well supplied with fallbacks. When an AI coding assistant produces subtly incorrect code — code that compiles, passes basic tests, and appears functional, but contains a logical error that will manifest only under specific conditions — the failure is invisible to the developer who never built the underlying chunks through recoding.
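As a concrete illustration of how such a bug can hide (a hypothetical sketch; the function name `batch` and its tests are invented for illustration, not drawn from any real assistant's output): the helper below runs cleanly, passes the obvious tests, and silently loses data whenever the input length is not a multiple of the batch size.

```python
def batch(items, size):
    """Split items into consecutive batches of length `size`.

    Subtle bug: integer division truncates, so a trailing partial
    batch is silently dropped. Every 'basic' test below happens to
    use a length divisible by `size`, so the tests all pass.
    """
    return [items[i * size:(i + 1) * size] for i in range(len(items) // size)]

# Routine conditions: the bug is invisible.
assert batch([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]
assert batch(list(range(9)), 3) == [[0, 1, 2], [3, 4, 5], [6, 7, 8]]

# The specific condition that manifests the error: three records quietly vanish.
print(batch(list(range(7)), 4))  # [[0, 1, 2, 3]] -- [4, 5, 6] are lost
```

A developer who earned the underlying chunks recognizes the truncating division on sight; one holding only the label sees code that works until the day the input length changes.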
The specific risk is not that AI makes people less capable in some abstract sense. It is that AI creates a class of practitioners who are highly capable within the operational parameters of the tool and profoundly fragile outside them. Under routine conditions, the output of a borrowed-chunk holder is indistinguishable from that of an earned-chunk holder. The difference appears only when novelty arrives: a bug that resists simple diagnosis, a requirement that resists precise specification, a system behavior that falls outside the training distribution. At that point, the earned-chunk holder can draw on structural knowledge built through thousands of recoding episodes. The borrowed-chunk holder reaches inside her compression and finds something she recognizes by its surface but does not understand mechanistically. She is, in a precise cognitive sense, less than the sum of her outputs.
The word for this condition is dependency. Borrowed compressions work beautifully as long as the tool providing them continues to work and conditions remain within the tool's design range. They fail catastrophically when either condition is violated. The developer who has built her own chunks through recoding can function without the tool, adapt to novel conditions, and diagnose failures the tool cannot predict. The developer who has borrowed her chunks can do none of these things.
The distinction between earned and borrowed compression is a natural extension of Miller's framework, but it was not systematically developed in his own writings. The concept draws on subsequent research on expertise, particularly Ericsson's work on deliberate practice and the conditions under which genuine skill develops.
The specific framing — 'earned' versus 'borrowed' — emerged in discussions of AI-mediated learning and appears in both the Miller simulation presented in this book and in Edo Segal's The Orange Pill. It captures a concern that senior practitioners across many fields have articulated in less precise terms for several years.
Same slot, different contents. Earned and borrowed chunks both occupy one slot. The slot cannot distinguish between them. Only performance under novel conditions can.
Decomposability as the mark of ownership. An earned chunk can be unpacked into its components. A borrowed chunk cannot — there is nothing to unpack into because the structural knowledge was never built.
Invisibility of the difference. Under routine conditions, the two kinds of practitioners produce indistinguishable output. The difference becomes visible only when the tool fails or conditions become novel.
The dependency trap. Borrowed compressions create dependency on the tool that provided them. When the tool fails, the practitioner has no fallback because the older skill was never built.
Asymmetric fragility. Earned expertise degrades gracefully when conditions change. Borrowed expertise fails catastrophically at the boundary of its training distribution.
Whether AI-mediated practice can generate genuinely earned chunks at higher levels of abstraction is the central empirical question of the moment. One position holds that chunks built through AI collaboration — evaluating outputs, specifying requirements, orchestrating systems — are genuinely earned, just at a higher level of abstraction than before. Another holds that these higher-level chunks are structurally unstable because they rest on lower-level chunks the practitioner never built. Resolving the question requires longitudinal research that does not yet exist.