The most important question Miller's framework poses to the age of artificial intelligence is not whether AI will change what fits inside the seven slots. It already has, more dramatically than any technology in history. The question is whether the humans who benefit from that change will understand what their slots contain — whether the compression will be transparent or opaque, earned or borrowed, owned or owed. A single cognitive slot that in the age of machine code might have contained a few binary operations can now contain, through the mediation of an AI assistant, an entire functioning software system. A single slot that in the age of hand-drafted legal briefs might have contained a single precedent can now contain an entire landscape of case law. A single slot that in the age of manual scientific calculation might have contained a single variable can now contain an entire model. The compression is real. The cognitive liberation is real. The question is whether the density of what fits inside each slot is matched by the depth of understanding the slot-holder possesses.
Every previous compression in human history raised a version of this question. Writing externalized memory, and some critics worried that people would lose the ability to remember. The calculator compressed arithmetic, and some worried about the loss of number sense. GPS compressed navigation, and some worried about spatial skills. The concerns were not entirely wrong: each compression did produce measurable atrophy of the skill it displaced. But each previous tool was reliable, available, and transparent in its failure modes, and the atrophy was manageable because the fallback, less efficient but still functional, remained within reach.
The AI compression is different in ways Miller's framework makes explicit. Its failure modes are less visible: an AI coding assistant that generates subtly incorrect code produces a failure that a user without the underlying chunks cannot see. Its fallback is less available: a developer who has never manually implemented what the AI produces has no manual skill to fall back on, because that skill was never built. Its opacity is structural: the density of the compression makes the chunks hard to decompose without pre-existing structural knowledge.
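To make the first of these failure modes concrete, here is a minimal, hypothetical sketch: the function below is invented for illustration, not drawn from any real assistant's output, but it is the kind of code an AI tool plausibly generates. It runs without error and reads idiomatically, yet it is subtly wrong, and only a reader who has built the relevant statistical chunk will notice.

```python
import math

def sample_std(values: list[float]) -> float:
    """Return the sample standard deviation of `values`."""
    n = len(values)
    mean = sum(values) / n
    # Subtle bug: this is the *population* formula (divides by n).
    # The sample formula divides by n - 1 (Bessel's correction),
    # so this silently understates variability for small samples.
    return math.sqrt(sum((x - mean) ** 2 for x in values) / n)

# Runs cleanly and prints 2.0; the correct sample value is ~2.14.
print(sample_std([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```

The failure raises no exception and produces no warning; it simply reports a slightly wrong number. That is precisely the class of error that opaque compression hides from a holder who cannot decompose the chunk.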
The question 'what fits inside' is therefore both celebratory and diagnostic. Celebratory because the density of compression enables cognitive achievements that previous generations could not imagine. Diagnostic because the same density exposes the holder to risks that previous compressions did not generate at this scale. A developer who builds a system using AI tools holds in working memory a representation of that system that is either transparent (decomposable when needed) or opaque (functional but inscrutable). The difference determines whether the compression strengthens or undermines the holder's cognitive infrastructure.
Miller's framework does not resolve the question but does provide the vocabulary to ask it precisely. What fits inside your seven slots? Who put it there? And if the tool that packed it fails tomorrow — as tools always, eventually, fail — could you unpack it yourself? These are the questions the AI age poses to every knowledge worker, and they are the questions that determine whether the most powerful cognitive revolution in human history produces a generation of deeper thinkers or a generation of more productive borrowers.
The phrase 'what fits inside' frames the central argument of the Miller simulation presented in this book. It synthesizes Miller's quantitative finding (the seven-slot limit) with his qualitative concern (the quality and origin of the chunks that fill the slots).
The question's urgency in the AI age reflects the unique character of current compression technology: its density exceeds that of any previous tool, and its opacity creates failure modes that previous tools did not generate.
Density is not understanding. What fits inside a slot can be extraordinary even when the slot-holder's comprehension of the contents is shallow. The two dimensions vary independently.
Previous compressions were transparent. Writing, calculation, and navigation compressions produced atrophy but not opacity. The skill displaced was still latent, still recoverable, still understood.
AI compression risks opacity. The density of what AI can pack into a single slot can outstrip the holder's ability to decompose it without built structural knowledge.
The diagnostic questions. What fits inside? Who put it there? Can you unpack it alone? These questions identify the difference between resilient and fragile cognitive capital.
The civilizational stakes. Whether AI produces deeper thinkers or more productive borrowers depends on whether individuals and institutions answer these questions honestly rather than merely celebrating the density.