For the entire history of professional work, production and development were coupled. The scribe who copied manuscripts developed handwriting and visual memory. The programmer who wrote code developed computational understanding. The carpenter who built furniture developed material intuition. You could not have the output without undergoing the process that produced it — and that process, by its nature, built the cognitive architecture that constituted expertise. Artificial intelligence has broken this coupling. A developer can now produce working code without undergoing the debugging cycles that build computational understanding. A lawyer can produce competent briefs without reading cases with the attention that struggle demands. A medical student can produce correct diagnoses without developing the pattern-recognition architecture that independent diagnosis requires. The output is available without the development. This is the decoupling, and it is what makes the present moment unprecedented in the history of professional expertise.
The decoupling has a specific mechanism that Ericsson's framework identifies with precision: mental representations are built only through effortful engagement with domain problems, and AI handles the effortful engagement on the practitioner's behalf. Output that once could be produced only with those representations can now be generated without them. The representations do not form, because the conditions for their formation — effort, boundary-testing, specific feedback, iterative refinement — have been removed by the tool's default mode of operation.
Because organizations measure output and not development — because clients pay for briefs, not for the lawyer's understanding; because users care whether code works, not whether the developer knows why — the decoupling creates an incentive structure that systematically favors production over development. Each successful instance of AI-assisted production reinforces the practitioner's reliance on the tool and reduces the probability that she will engage with the same difficulty independently. This cycle of dependence is the decoupling's self-reinforcing mechanism.
The decoupling also produces systematic miscalibration of self-assessment. The output reflects the tool's understanding, not the practitioner's; but the practitioner, seeing her name on competent output, infers her own competence. A 2026 paper in Frontiers in Medicine documents this in clinical education: trainees who rely most heavily on AI diagnostic tools show the highest accuracy when the tools are available and the lowest when they are not — not because the tools degraded existing expertise but because they prevented new expertise from forming. The trainees' clinical experience was extensive; their representations were thin.
The consequences are deferred. They emerge only under specific conditions: when the AI system fails, when the situation falls outside the training distribution, when the practitioner must rely on independent judgment rather than tool-assisted production. These conditions may be infrequent in routine work. But they are the conditions under which careers, projects, and, in some domains, human lives depend on the depth of the practitioner's understanding rather than the quality of the tool's output.
The concept crystallized in the mid-2020s as empirical evidence accumulated from clinical education, software engineering, and legal practice, documenting a pattern without historical precedent: practitioners who produced excellent AI-assisted output yet performed poorly when the tool was unavailable. The Berkeley study of AI in organizational workflows provided one early empirical anchor; MIT Sloan Management Review analyses proposed institutional responses, including structured practice periods that preserve the conditions for development within AI-augmented work.
Historical rupture. Every previous technology that enabled production also required the engagement that produced development; AI is the first to break the coupling.
Output reflects the tool. Competent AI-assisted output provides false evidence of the practitioner's own understanding.
Miscalibration compounds. The practitioner cannot detect the growing gap between her production capacity and her representational depth, because the evidence of her production is genuine.
Two classes of practitioners. The AI-amplified expert and the tool-dependent novice produce indistinguishable output and possess radically different understanding — invisible until failure.
Deferred cost. The decoupling's consequences appear only in the high-stakes, non-routine moments where the tool cannot handle the situation and the representational architecture that should have been there is not.
Defenders of AI-assisted workflows argue that the decoupling is exaggerated — that practitioners who work extensively with AI develop new forms of expertise in evaluating and directing the tool, and that these forms are no less valuable than the implementation-level expertise they supersede. The framework's response: the new expertise is real but structurally different, and the conditions under which it develops are not automatically provided by AI use. Whether directing AI builds genuine representational depth depends on whether the directing satisfies the four conditions of deliberate practice. Most AI use does not.