CONCEPT

Recoding (Miller)

The deliberate, error-driven transformation of unfamiliar information into familiar chunks — the process Miller identified as the engine of expertise, and the process that may or may not survive the AI compression of implementation work.
Recoding was the most important word in George Miller's vocabulary. Chunking described the structure of expertise; recoding described the process by which that structure is built. Miller defined recoding as the deliberate transformation of information from a detailed, explicit, cognitively expensive format into a compressed, abstract, cheap one. The process sounds mechanical but is anything but. Recoding is effortful, often painful, and error-driven.

The medical student who has memorized two hundred diseases begins to see patterns — clusters of symptoms that co-occur so reliably that each cluster becomes a single chunk. The student did not decide to build these patterns. They emerged through hundreds of encounters with patients, textbooks, and mistakes. Each mistake was a recoding opportunity: a moment when the existing chunking structure failed to predict reality and had to be revised. The chess master's chunks were built across thousands of games. The experienced programmer's design-pattern vocabulary was assembled across thousands of debugging sessions. Recoding cannot be skipped. It cannot be downloaded. It requires exposure, repetition, failure, and correction.
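
The transformation is easiest to see in the binary-to-octal illustration from the 1956 paper: a string of binary digits, far too many to hold as individual items, is recoded so that every three bits collapse into one familiar octal chunk. The sketch below (Python, written for this entry rather than drawn from Miller) makes the compression concrete.

    # Miller's 1956 illustration of recoding: a binary string that exceeds
    # working-memory span becomes manageable once every three bits are
    # recoded into a single octal digit, one chunk instead of three items.

    def recode_binary_to_octal(bits: str) -> str:
        """Recode a binary string into octal chunks, three bits per chunk."""
        # Pad on the left so the length is a multiple of three.
        padded = bits.zfill((len(bits) + 2) // 3 * 3)
        return "".join(str(int(padded[i:i + 3], 2))
                       for i in range(0, len(padded), 3))

    sequence = "101000100111001110"              # 18 items to hold as raw bits
    chunked = recode_binary_to_octal(sequence)   # 6 items once recoded
    print(len(sequence), "bits ->", len(chunked), "octal chunks:", chunked)
    # 18 bits -> 6 octal chunks: 504716

The compression is real only for someone who already holds the octal code as familiar chunks; for everyone else the recoded string is just six more unfamiliar symbols, which is the point of the entry.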

In The You On AI Encyclopedia

The error-driven nature of recoding is what makes it so consequential for the AI age. A failure is not merely an inconvenience; it is information. It signals that the learner's existing mental model is inadequate in a specific way and creates pressure to revise the model. Remove the failure, and you remove the information. Remove the information, and you remove the pressure to revise. The developer who uses AI to avoid bugs does not merely avoid frustration — she avoids the specific form of frustration that drives cognitive growth. The Berkeley study documented intensification of work without documenting what happens to the recoding process that work traditionally generated.

The temporal signature of recoding is the so-called ten-year rule, developed most fully by Herbert Simon and K. Anders Ericsson: approximately ten years of deliberate practice are required to build the fifty thousand chunks that constitute genuine expertise in a domain. The number is not arbitrary. It represents the amount of recoding necessary to build a chunking vocabulary comprehensive enough to handle the full range of situations a domain presents. The rule has held across chess, music, medicine, software, and mathematics.
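
The arithmetic behind the rule is worth spelling out. The sketch below is a back-of-the-envelope calculation, not a figure from Simon or Ericsson, and the practice-hours assumption is invented for illustration.

    # Back-of-the-envelope arithmetic for the ten-year rule (illustrative only;
    # the practice-hours figure is an assumption, not a number from this entry).
    chunks_needed = 50_000        # estimated size of an expert's chunk vocabulary
    years = 10                    # the ten-year rule

    chunks_per_day = chunks_needed / (years * 365)
    print(f"{chunks_per_day:.1f} new chunks per day")        # ~13.7, every day, for a decade

    practice_hours = years * 1_000    # assume roughly 1,000 hours of deliberate practice per year
    print(f"{chunks_needed / practice_hours:.1f} chunks per practice hour")   # ~5.0

Roughly a dozen durable new chunks per day, sustained for ten years, is the scale of recoding the rule implies.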

The distinction between output bandwidth and learning bandwidth is one of the most consequential implications of Miller's recoding theory applied to AI. The tools change the output bandwidth of human cognition — the amount of implemented reality a person can produce per unit of time. They do not change the learning bandwidth — the rate at which recoding occurs, which depends on the frequency and quality of the errors the learner encounters. If anything, by reducing error frequency, the tools may decrease learning bandwidth even as they increase output bandwidth.
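
A toy calculation can make the decoupling concrete. Every parameter in the sketch below is invented for illustration, not data from the entry or from any study; the point is only the shape of the relationship. If recoding events scale with errors encountered rather than with output produced, a tool that multiplies output while suppressing errors raises one number and lowers the other.

    # Toy model of the output/learning decoupling (illustrative parameters only).
    def simulate(days: int, tasks_per_day: int, error_rate: float):
        """Output scales with tasks completed; learning scales with errors met."""
        output = days * tasks_per_day
        recoding_events = output * error_rate
        return output, recoding_events

    unassisted = simulate(days=250, tasks_per_day=2, error_rate=0.40)
    ai_assisted = simulate(days=250, tasks_per_day=6, error_rate=0.05)
    print("unassisted:  output =", unassisted[0], " recoding events =", unassisted[1])
    print("ai-assisted: output =", ai_assisted[0], " recoding events =", ai_assisted[1])
    # Output triples (500 -> 1500) while recoding events fall by more than half (200 -> 75).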

The senior engineer who feels that depth is losing its market value is sensing the decoupling of output from recoding. In her own career, the two were tightly coupled — you could not produce working code without encountering failures, and the failures were the mechanism through which expertise was built. She now observes a generation of developers who produce working code while encountering far fewer failures, and she intuits, correctly, that the resulting expertise will be structurally different from her own: effective at directing AI tools to produce desired outputs, but less equipped with the deep chunking vocabularies that allow an expert to understand why a system behaves as it does.

Origin

Miller introduced recoding as a technical concept in the 1956 paper, "The Magical Number Seven, Plus or Minus Two," and developed it more fully in subsequent work on language and memory. The term was meant to emphasize that the transformation from unfamiliar to familiar was not passive absorption but active reconstruction — a re-coding of the material into a format that working memory could handle.

The concept gained its deepest theoretical elaboration in Miller's collaboration with Eugene Galanter and Karl Pribram on Plans and the Structure of Behavior (1960), which located recoding within the broader hierarchical architecture of goal-directed action. Every plan, in the TOTE (Test-Operate-Test-Exit) framework, is itself a candidate for recoding: sequences that have been executed many times collapse into single units, freeing working memory for higher-level planning.
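
For readers unfamiliar with the framework, a TOTE unit can be sketched as a simple control loop: test whether the goal state holds, operate if it does not, test again, exit when it does. The sketch below is a simplification written for this entry, not Miller, Galanter, and Pribram's own notation; the nail-hammering example is the one commonly used to introduce the idea.

    from typing import Callable

    def tote(test: Callable[[], bool], operate: Callable[[], None],
             max_steps: int = 1000) -> None:
        """Test-Operate-Test-Exit: operate until the test passes, then exit."""
        for _ in range(max_steps):
            if test():           # Test: is the goal condition satisfied?
                return           # Exit: the plan is complete
            operate()            # Operate: act on the world, then test again
        raise RuntimeError("plan did not reach its goal")

    # Hammer until the nail is flush. The operate phase could itself be a smaller
    # TOTE unit (lift hammer, strike), which is what makes the architecture
    # hierarchical and each well-practiced sequence a candidate chunk.
    nail = {"depth_mm": 0}

    def hammer() -> None:
        nail["depth_mm"] += 10   # one stroke drives the nail 10 mm deeper

    tote(test=lambda: nail["depth_mm"] >= 40, operate=hammer)
    print(nail["depth_mm"])      # 40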

Key Ideas

Effortful transformation. Recoding is not passive absorption but active reconstruction. The learner must engage with the material in ways that produce the patterns, not merely read or observe them.

Error-driven. Failures are the signals that drive recoding. When a chunk fails to predict reality, the failure forces revision. Without failures, chunks remain static regardless of how much new information accumulates around them.

Cumulative and irreversible. Chunks build on previous chunks. The expert's vocabulary is constructed layer by layer, with each level of compression depending on the availability of lower-level chunks as raw material.

The substrate of deep expertise. The fifty thousand chunks that constitute mastery in a domain are built through approximately ten years of deliberate practice. The number is a floor, not a target.

Threatened by compression that eliminates errors. When a tool produces correct outputs without requiring the learner to encounter and resolve errors, the recoding process loses its engine. Output grows. Learning stalls.
