The error-driven nature of recoding is what makes it so consequential for the AI age. A failure is not merely an inconvenience; it is information. It signals that the learner's existing mental model is inadequate in a specific way and creates pressure to revise the model. Remove the failure, and you remove the information. Remove the information, and you remove the pressure to revise. The developer who uses AI to avoid bugs does not merely avoid frustration — she avoids the specific form of frustration that drives cognitive growth. The Berkeley study documented intensification of work without documenting what happens to the recoding process that work traditionally generated.
The temporal signature of recoding is the so-called ten-year rule, developed most fully by Herbert Simon and K. Anders Ericsson: approximately ten years of deliberate practice are required to build the fifty thousand chunks that constitute genuine expertise in a domain. The number is not arbitrary. It represents the amount of recoding necessary to build a chunking vocabulary comprehensive enough to handle the full range of situations a domain presents. The rule has held across chess, music, medicine, software, and mathematics.
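The arithmetic behind the rule can be made explicit. A back-of-envelope sketch using the figures above, with an assumed practice load of roughly 1,000 hours per year (the hours figure is an illustrative assumption, not a measured quantity):

```python
# Back-of-envelope: the chunk-acquisition rate implied by the ten-year rule.
# The 50,000-chunk and ten-year figures come from the text; the practice
# load per year is an assumption for illustration only.
chunks = 50_000          # chunks constituting expertise in a domain
years = 10               # deliberate practice required
hours_per_year = 1_000   # assumed: ~4 hours/day, ~250 days/year

total_hours = years * hours_per_year       # 10,000 hours of practice
chunks_per_hour = chunks / total_hours     # 5.0
minutes_per_chunk = 60 / chunks_per_hour   # 12.0

print(f"{chunks_per_hour:.1f} chunks/hour, "
      f"one new chunk every {minutes_per_chunk:.0f} minutes")
```

Under these assumptions, expertise accretes at roughly one new chunk every twelve minutes of practice, sustained for a decade, which is why the number is better read as a quantity of recoding than as a quantity of time.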
The distinction between output bandwidth and learning bandwidth is one of the most consequential implications of Miller's recoding theory applied to AI. The tools change the output bandwidth of human cognition — the amount of implemented reality a person can produce per unit of time. They do not change the learning bandwidth — the rate at which recoding occurs, which depends on the frequency and quality of the errors the learner encounters. If anything, by reducing error frequency, the tools may decrease learning bandwidth even as they increase output bandwidth.
The senior engineer who feels that depth is losing its market value is sensing the decoupling of output from recoding. In her own career, the two were tightly coupled — you could not produce working code without encountering failures, and the failures were the mechanism through which expertise was built. She now observes a generation of developers who produce working code while encountering far fewer failures, and she intuits, correctly, that the resulting expertise will be structurally different from her own: effective at directing AI tools to produce desired outputs, but less equipped with the deep chunking vocabularies that allow an expert to understand why a system behaves as it does.
Miller introduced recoding as a technical concept in the 1956 paper and developed it more fully in subsequent work on language and memory. The term was meant to emphasize that the transformation from unfamiliar to familiar was not passive absorption but active reconstruction — a re-coding of the material into a format that working memory could handle.
The concept gained its deepest theoretical elaboration in Miller's collaboration with Eugene Galanter and Karl Pribram on Plans and the Structure of Behavior (1960), which located recoding within the broader hierarchical architecture of goal-directed action. Every plan, in the TOTE framework, is itself a candidate for recoding: sequences that have been executed many times collapse into single units, freeing working memory for higher-level planning.
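The TOTE (Test-Operate-Test-Exit) unit can be sketched in code. A minimal toy, not an implementation from the book: a plan tests a condition, operates until the test passes, and exits, and the resulting loop is itself a single callable unit that higher-level plans can invoke without tracking its internal steps. The nail-hammering example is the book's own canonical illustration:

```python
# Minimal sketch of a TOTE (Test-Operate-Test-Exit) unit: an illustrative
# toy, not code from Plans and the Structure of Behavior.
def tote(test, operate):
    """Build a plan: run `operate` until `test` passes, then exit."""
    def plan(state):
        while not test(state):   # Test
            operate(state)       # Operate (then loop back to Test)
        return state             # Exit
    return plan

# The book's example: hammer the nail until it is flush.
def nail_flush(state):
    return state["nail_height"] <= 0

def strike(state):
    state["nail_height"] -= 1

hammer = tote(nail_flush, strike)

# Recoding in miniature: the whole test-operate loop has collapsed into one
# unit, freeing the "planner" from its internal steps.
state = hammer({"nail_height": 3})
print(state["nail_height"])  # 0
```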
Effortful transformation. Recoding is not passive absorption but active reconstruction. The learner must engage with the material in ways that generate the patterns firsthand, not merely read about or observe them.
Error-driven. Failures are the signals that drive recoding. When a chunk fails to predict reality, the failure forces revision. Without failures, chunks remain static regardless of how much new information accumulates around them.
Cumulative and irreversible. Chunks build on previous chunks. The expert's vocabulary is constructed layer by layer, with each level of compression depending on the availability of lower-level chunks as raw material.
The substrate of deep expertise. The fifty thousand chunks that constitute mastery in a domain are built through approximately ten years of deliberate practice. The number is a floor, not a target.
Threatened by compression that eliminates errors. When a tool produces correct outputs without requiring the learner to encounter and resolve errors, the recoding process loses its engine. Output grows. Learning stalls.
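The error-driven property above can be put in toy form. A hypothetical sketch, with all names and mechanics invented for illustration (this is not a cognitive model): a chunk is a stored prediction that revises only on failure, so suppressing failures freezes it no matter how many confirming encounters accumulate:

```python
# Toy illustration of error-driven recoding. Everything here is invented
# for illustration; it is not a model of memory.
class Chunk:
    def __init__(self, prediction):
        self.prediction = prediction
        self.revisions = 0

    def encounter(self, reality):
        if reality != self.prediction:   # failure: the error signal
            self.prediction = reality    # pressure to revise the model
            self.revisions += 1
        # A confirming encounter carries no revision pressure.

chunk = Chunk(prediction="query uses the index")

# Ten confirming encounters: output accumulates, the chunk never changes.
for _ in range(10):
    chunk.encounter("query uses the index")
print(chunk.revisions)  # 0

# One failure: only now is the chunk forced to update.
chunk.encounter("query falls back to a full scan")
print(chunk.revisions)  # 1
```

Remove the failing encounter, as an error-suppressing tool does, and the revision count stays at zero regardless of how much output the loop produces.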
Whether AI-mediated practice generates its own form of recoding — building chunks appropriate to the new division of labor between human and machine — is the central empirical question of the moment. Defenders of AI-assisted learning argue that evaluating AI outputs, specifying requirements precisely, and iterating on designs all involve their own error signals and their own recoding process. Critics argue that these error signals operate at a higher level of abstraction that presupposes chunks already built at lower levels — that a developer who has never manually debugged a database query cannot meaningfully evaluate an AI-generated query for subtle performance pathologies. The resolution will emerge as the first AI-native generation of professionals encounters conditions that fall outside their tools' competence.