Recoding was the most important word in George Miller's vocabulary. Chunking described the structure of expertise; recoding described the process by which that structure is built. Miller defined recoding as the transformation of information from a detailed, explicit, cognitively expensive format into a compressed, abstract, cheap one. The process sounds mechanical but is anything but. Recoding is effortful, often painful, and error-driven. The medical student who has memorized two hundred diseases begins to see patterns: clusters of symptoms that co-occur so reliably that each cluster becomes a single chunk. The student did not decide to build these patterns. They emerged through hundreds of encounters with patients, textbooks, and mistakes. Each mistake was a recoding opportunity: a moment when the existing chunking structure failed to predict reality and had to be revised. The chess master's chunks were built across thousands of games. The experienced programmer's design-pattern vocabulary was assembled across thousands of debugging sessions. Recoding cannot be skipped. It cannot be downloaded. It requires exposure, repetition, failure, and correction.
There is a parallel reading that begins not with the cognitive architecture of expertise but with the political economy of error itself. Recoding, in Miller's formulation, depends on failures — but failures are not neutral events. They have costs: time lost to debugging, projects delayed, clients frustrated, careers derailed. The ten-year rule was never just about cognitive development; it was about ten years of someone else bearing the cost of your mistakes. The medical resident's errors happen on real patients. The junior developer's bugs affect real systems. The apprentice's failed attempts waste real materials.
What AI compression actually eliminates is not the possibility of learning but the economic justification for it. Why should an employer tolerate ten years of error-driven development when AI can produce correct outputs immediately? Why should patients endure residents' mistakes when AI diagnostics exist? The Berkeley study's finding that AI intensifies work without deepening expertise is not a bug but a feature from capital's perspective. The decoupling of output from learning creates a more efficient extraction of value: maximum productivity with minimum investment in human development. The senior engineer lamenting the loss of depth is witnessing not a cognitive tragedy but an economic realignment. Expertise was always expensive, requiring institutions to subsidize years of productive failure. AI offers a way to skip that subsidy. The question is not whether AI-mediated practice can generate its own recoding — it is whether anyone will pay for the time and errors that recoding requires when cheaper alternatives exist.
The error-driven nature of recoding is what makes it so consequential for the AI age. A failure is not merely an inconvenience; it is information. It signals that the learner's existing mental model is inadequate in a specific way and creates pressure to revise the model. Remove the failure, and you remove the information. Remove the information, and you remove the pressure to revise. The developer who uses AI to avoid bugs does not merely avoid frustration — she avoids the specific form of frustration that drives cognitive growth. The Berkeley study documented intensification of work without documenting what happens to the recoding process that work traditionally generated.
The temporal signature of recoding is the so-called ten-year rule, developed most fully by Herbert Simon and K. Anders Ericsson: approximately ten years of deliberate practice are required to build the fifty thousand chunks that constitute genuine expertise in a domain. The number is not arbitrary. It represents the amount of recoding necessary to build a chunking vocabulary comprehensive enough to handle the full range of situations a domain presents. The rule has held across chess, music, medicine, software, and mathematics.
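A back-of-the-envelope calculation shows why the timescale lands where it does. The numbers below are illustrative assumptions rather than figures from Simon or Ericsson, but any plausible values for practice hours and chunk-acquisition rate put the answer in the same band: years, not weeks.

```python
# Rough sketch of the ten-year rule's arithmetic.
# Every input is an illustrative assumption, not a figure from Simon or Ericsson.

TARGET_CHUNKS = 50_000           # commonly cited estimate of an expert's chunk vocabulary
HOURS_PER_DAY = 4                # assumed sustainable deliberate practice per day
PRACTICE_DAYS_PER_YEAR = 300     # assumed practice days per year
CHUNKS_PER_HOUR = 4              # assumed acquisition rate: each chunk needs an error,
                                 # a revision, and enough repetition to consolidate

hours_per_year = HOURS_PER_DAY * PRACTICE_DAYS_PER_YEAR        # 1,200 hours
years_needed = TARGET_CHUNKS / (CHUNKS_PER_HOUR * hours_per_year)
print(f"roughly {years_needed:.1f} years")                     # roughly 10.4 years
```

Double the assumed acquisition rate and the estimate drops to about five years; halve it and it rises to about twenty. The order of magnitude does not move.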
The distinction between output bandwidth and learning bandwidth is one of the most consequential implications of Miller's recoding theory applied to AI. The tools change the output bandwidth of human cognition — the amount of implemented reality a person can produce per unit of time. They do not change the learning bandwidth — the rate at which recoding occurs, which depends on the frequency and quality of the errors the learner encounters. If anything, by reducing error frequency, the tools may decrease learning bandwidth even as they increase output bandwidth.
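A toy model makes the decoupling concrete. It assumes, purely for illustration, that output scales with the leverage a tool provides while chunk formation scales with the errors the learner personally encounters and resolves; neither the functional forms nor the constants come from the Berkeley study or any measured data.

```python
# Toy model of the output/learning decoupling. All forms and constants are
# illustrative assumptions, not measurements.

def simulate_year(tasks_attempted: int, error_rate: float, tool_leverage: float):
    """Return (output, new_chunks) for one year of practice under the toy model."""
    output = tasks_attempted * tool_leverage            # output bandwidth: amplified by the tool
    errors_encountered = tasks_attempted * error_rate   # errors the human actually hits
    new_chunks = errors_encountered * 0.5               # assume half of those errors yield a revised chunk
    return output, new_chunks

# Unassisted practice: fewer tasks, many personally encountered errors.
print(simulate_year(tasks_attempted=200, error_rate=0.40, tool_leverage=1.0))
# -> (200.0, 40.0)

# AI-assisted practice: more tasks completed, far fewer errors reach the human.
print(simulate_year(tasks_attempted=600, error_rate=0.05, tool_leverage=3.0))
# -> (1800.0, 15.0)
```

Under these assumed numbers the same person becomes nine times more productive while forming new chunks at less than half the previous rate, which is exactly the decoupling described above.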
The senior engineer who feels that depth is losing its market value is sensing the decoupling of output from recoding. In her own career, the two were tightly coupled — you could not produce working code without encountering failures, and the failures were the mechanism through which expertise was built. She now observes a generation of developers who produce working code while encountering far fewer failures, and she intuits, correctly, that the resulting expertise will be structurally different from her own: effective at directing AI tools to produce desired outputs, but less equipped with the deep chunking vocabularies that allow an expert to understand why a system behaves as it does.
Miller introduced recoding as a technical concept in the 1956 paper and developed it more fully in subsequent work on language and memory. The term was meant to emphasize that the transformation from unfamiliar to familiar was not passive absorption but active reconstruction — a re-coding of the material into a format that working memory could handle.
The concept gained its deepest theoretical elaboration in Miller's collaboration with Eugene Galanter and Karl Pribram on Plans and the Structure of Behavior (1960), which located recoding within the broader hierarchical architecture of goal-directed action. Every plan, in the TOTE (Test-Operate-Test-Exit) framework, is itself a candidate for recoding: sequences that have been executed many times collapse into single units, freeing working memory for higher-level planning.
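The TOTE unit has a natural procedural reading, and the sketch below is a loose, hypothetical rendering rather than anything from the book itself: a plan tests for incongruity, operates until the test passes, and exits, and a well-rehearsed sequence can then be handed to a higher-level plan as a single operation, which is the collapse into a chunk.

```python
# Loose, hypothetical sketch of a TOTE unit and of hierarchical collapse.
# This illustrates the idea; it is not code from Miller, Galanter, and Pribram.

from typing import Callable

def tote(test: Callable[[], bool], operate: Callable[[], None]) -> None:
    """Test; while incongruity remains, Operate and Test again; Exit when the test passes."""
    while not test():
        operate()

# The book's canonical illustration is hammering a nail until it sits flush.
nail = {"height": 3}

def is_flush() -> bool:
    return nail["height"] == 0

def strike() -> None:
    nail["height"] -= 1

tote(is_flush, strike)      # strikes three times, then exits

# A novice's plan is an explicit sequence of such units, each occupying
# working memory while it runs...
def novice_plan(steps: list) -> None:
    for test, operate in steps:
        tote(test, operate)

# ...whereas a rehearsed sequence can be passed to a higher-level plan as a
# single "operate": the sequence now behaves as one chunk.
def chunked_plan(goal_met: Callable[[], bool], rehearsed_sequence: Callable[[], None]) -> None:
    tote(goal_met, rehearsed_sequence)
```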
Effortful transformation. Recoding is not passive absorption but active reconstruction. The learner must engage with the material in ways that produce the patterns, not merely read or observe them.
Error-driven. Failures are the signals that drive recoding. When a chunk fails to predict reality, the failure forces revision. Without failures, chunks remain static regardless of how much new information accumulates around them.
Cumulative and irreversible. Chunks build on previous chunks. The expert's vocabulary is constructed layer by layer, with each level of compression depending on the availability of lower-level chunks as raw material. And once a chunk is consolidated it is retrieved automatically: the master cannot choose to see the board, the scan, or the stack trace as raw detail again.
The substrate of deep expertise. The fifty thousand chunks that constitute mastery in a domain are built through approximately ten years of deliberate practice. The number is a floor, not a target.
Threatened by compression that eliminates errors. When a tool produces correct outputs without requiring the learner to encounter and resolve errors, the recoding process loses its engine. Output grows. Learning stalls.
Whether AI-mediated practice generates its own form of recoding — building chunks appropriate to the new division of labor between human and machine — is the central empirical question of the moment. Defenders of AI-assisted learning argue that evaluating AI outputs, specifying requirements precisely, and iterating on designs all involve their own error signals and their own recoding process. Critics argue that these error signals operate at a higher level of abstraction that presupposes chunks already built at lower levels — that a developer who has never manually debugged a database query cannot meaningfully evaluate an AI-generated query for subtle performance pathologies. The resolution will emerge as the first AI-native generation of professionals encounters conditions that fall outside their tools' competence.
The cognitive architecture Miller described remains fully accurate, as Edo argues (100%): recoding through error-driven chunking is indeed how human expertise develops, and this process cannot be downloaded or compressed. The contrarian's economic analysis is equally valid (100%): the material conditions that historically supported ten years of subsidized error are rapidly eroding. These are not competing truths but descriptions of different layers of the same phenomenon.
When we ask about individual cognitive development, Edo's framework dominates (80%): AI tools that eliminate errors do impede the recoding process, creating a generation with high output capacity but shallow chunking vocabularies. But when we ask who will actually develop expertise under these conditions, the economic reading becomes primary (70%): only those with unusual access to resources — time, mentorship, permission to fail — will complete the ten-year journey. The market will stratify into a small class who can afford deep expertise and a large class operating through AI-mediated shallow competence.
The synthesis suggests expertise will not disappear but become a luxury good. Just as handmade objects persist alongside mass production, human expertise will persist alongside AI compression, but as a marker of privilege rather than a general expectation. The ten-year rule will still apply, but only for the few whose economic position allows them to spend a decade failing productively. The rest will operate in a permanently compressed state, producing outputs through borrowed chunks they never earned. The question is not whether recoding will survive but whether societies will treat deep expertise as a public good worth subsidizing or a private advantage available only to those who can afford its true cost.