Herbert Simon's research on chess masters and physics experts suggested that expertise rests on roughly fifty thousand chunks, built over approximately ten years of deliberate practice — the ten-year rule. The number is not arbitrary; it represents the amount of recoding necessary to build a chunking vocabulary comprehensive enough to handle the full range of situations a domain presents. The rule has held across domains as diverse as music composition, medical diagnosis, and software architecture. The question the present moment forces is what happens to the ten-year rule when AI compresses the activities through which recoding occurs. If a significant portion of those fifty thousand chunks was built through the struggle of implementation — through debugging, manual optimization, the slow, painful process of turning intention into working artifact — then eliminating that struggle may also eliminate much of the recoding experience that builds expertise. This is the recoding crisis: not that AI prevents learning, but that AI changes the kind of learning that occurs.
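As a rough sense of scale, the rule implies a daily rate of chunk acquisition. This is a back-of-the-envelope division, not a figure from Simon's own papers:

\[
\frac{50{,}000\ \text{chunks}}{10\ \text{years} \times 365\ \text{days}} \approx 14\ \text{chunks per day}
\]

Roughly fourteen new chunks every day for a decade, a pace that only sustained, effortful recoding can plausibly supply.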
The developer who works with an AI assistant is learning constantly — learning to evaluate generated code, learning to specify requirements precisely, learning to recognize when the AI's solution is elegant and when it is merely functional, learning to orchestrate complex systems through conversational interaction. These are legitimate skills involving genuine recoding. The developer is building chunks for a new kind of practice. The question is whether these chunks are sufficient — whether the expertise built through AI-mediated practice is robust enough to handle situations where the AI fails, where pre-chunked solutions break down, where the developer must reach inside the compression and understand what it contains.
The evidence, still early but accumulating, suggests that the difference between manual and AI-mediated learning is significant. Studies of programmers working with AI assistance consistently show dramatic short-term productivity gains but also a measurable decline in code comprehension — the ability to read, understand, and mentally model the behavior of the code produced. Developers who generate code through AI conversation understand that code less deeply than developers who write it manually. This is exactly what Miller's framework predicts: the manual developer is engaged in recoding, building chunks through active manipulation; the AI-assisted developer is engaged in evaluation rather than construction.
The implications cascade through every knowledge profession. Medical students using AI diagnostic tools arrive at correct diagnoses faster than their predecessors but demonstrate weaker differential diagnostic reasoning when the AI's suggestion is wrong. Law students using AI research assistants find relevant precedents more quickly but construct less rigorous legal arguments. Financial analysts using AI-generated models produce more sophisticated quantitative analyses but understand less about the assumptions embedded in those models.
The crisis is not inevitable. It is a design problem. An AI tool that presents pre-chunked solutions as finished artifacts — here is your code, here is your diagnosis, here is your legal brief — is a tool that minimizes recoding. A tool designed differently — one that explains its reasoning, exposes its components, invites the user to modify and reassemble and struggle with the generated artifact — preserves recoding while providing compression. The tool that serves human cognitive development is not the tool that does the most work. It is the tool that does the right work, leaving the user the effortful engagement that builds genuine chunks.
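To make the distinction concrete, here is a minimal sketch in Python of the two response shapes, assuming a tool's output can be modeled as a plain data structure. Every name in it (FinishedArtifact, RecodingResponse, and the rest) is a hypothetical illustration, not any real assistant's API.

```python
# A sketch of the design distinction, not any real tool's interface.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class FinishedArtifact:
    """Recoding-minimizing: a sealed result the user can only accept or reject."""
    code: str

@dataclass
class RecodingResponse:
    """Recoding-preserving: the same code, plus handles for effortful engagement."""
    code: str
    rationale: list[str]             # the reasoning behind each choice, exposed
    components: dict[str, str]       # named sub-pieces the user can inspect and swap
    modification_prompts: list[str]  # invitations to struggle with the artifact

# The same generated function, delivered the recoding-preserving way.
response = RecodingResponse(
    code="def dedupe(xs): return list(dict.fromkeys(xs))",
    rationale=[
        "dict.fromkeys keeps first occurrences in insertion order (Python 3.7+)",
        "wrapping in list() restores the expected return type",
    ],
    components={
        "ordering guarantee": "dict.fromkeys(xs)",
        "type restoration": "list(...)",
    },
    modification_prompts=[
        "Rewrite dedupe with an explicit loop and a seen-set; compare the two.",
        "What happens if xs contains unhashable items? Make the function handle it.",
    ],
)
```

The point of the second shape is not more output but a different contract: the rationale and components give the user something to take apart, and the prompts hand back the effortful work that builds chunks.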
The recoding crisis framing emerges from applying Miller's framework to contemporary AI deployment patterns. The core insight — that error-driven learning requires errors, and that removing errors eliminates the learning mechanism — is implicit in Miller's and Ericsson's work but becomes urgent only under conditions where a tool can reliably produce correct outputs on behalf of the learner.
Empirical documentation of the phenomenon is in its infancy but growing rapidly. Studies from MIT, Microsoft Research, and Stanford have begun tracking the gap between AI-assisted productivity and AI-assisted comprehension.
Fifty thousand chunks in ten years. The empirical regularity that deep expertise requires roughly a decade of deliberate practice to build a chunking vocabulary comprehensive enough for the domain.
Learning changes in kind, not merely in amount. AI-mediated practice produces its own kind of expertise, but that kind differs structurally from the expertise built through manual struggle.
The comprehension gap. Developers who generate code through AI produce working software while understanding that software less deeply than developers who wrote it manually.
A design problem, not a technological inevitability. Tools can be designed to preserve recoding (by exposing reasoning and inviting modification) or to eliminate it (by presenting finished artifacts for evaluation).
Cascading across professions. The same pattern appears in medicine, law, finance, and every knowledge domain where AI tools have been deployed at scale.
Whether the comprehension gap matters economically is contested. One position holds that deep understanding will remain differentially valuable in edge cases where tools fail. Another holds that edge cases will become rare enough to be handled by a small number of specialists, with the rest of the profession operating fluently within tool capabilities. The framework presented here takes neither position but insists that the question be asked with Miller's precision rather than in the vague terms that currently dominate the discourse.