In the team-based system, cognitive labor distribution was simultaneously a mechanism for skill development. The junior developer learned implementation by implementing — writing code, encountering errors, debugging, and gradually internalizing patterns that distinguish robust code from fragile code. The junior designer learned visual communication by designing — creating mockups, receiving critique, revising, and gradually developing trained perception that registers imbalance before conscious analysis can articulate it. Each act of cognitive labor was also an act of learning, and the progression from novice to expert was marked by gradual internalization of capacities initially supported by external structures. The AI's absorption of implementation labor disrupts this internalization process. If the AI handles implementation, the junior developer does not learn implementation through the iterative practice that builds expertise. The cognitive labor redistributed to the AI was also the cognitive labor through which the next generation of practitioners developed the capacities they would need as the human component of the system.
The parallel to the introduction of electronic navigation systems aboard naval vessels is precise. When the Navy automated the manual computations that navigation teams had previously performed, the immediate effect was increased efficiency. The longer-term effect was a decline in navigational expertise among officers who had trained exclusively on automated systems. Officers who had learned to navigate with manual charts, dividers, and parallel rulers possessed a form of spatial understanding that officers trained on electronic displays lacked. The electronic system computed correctly, but officers who relied on it had not developed the deep spatial intuition that manual computation built — intuition that proved essential when electronic systems failed, as electronic systems inevitably do.
The gap is not merely a training problem to be solved by adding instructional modules to the AI-augmented workflow. It is a structural property of the new cognitive architecture — a consequence of redistributing cognitive labor that was also the cognitive labor through which competence was built. Addressing it requires rethinking how expertise develops when the practice that previously built expertise has been absorbed by a machine.
The implications connect to Ericsson's deliberate practice research: expertise requires effortful, targeted engagement at the boundary of capability, guided by specific feedback and sustained over thousands of hours. If the AI performs the effortful, targeted engagement, the practitioner does not undergo the practice that builds expertise. Edo Segal's ascending friction thesis in The Orange Pill argues that AI relocates difficulty to higher cognitive floors rather than eliminating it. Hutchins's framework adds a complication: the relocated difficulty is developmental only if practitioners actually engage with it, and the conditions of AI-augmented work often allow evaluation without the depth of engagement that develops evaluative capacity.
New forms of deliberate practice must be designed — forms that develop judgment, evaluative capacity, and deep domain understanding the AI-augmented system requires of its human component, using methods suited to the new architecture rather than attempting to replicate the old.
The concept draws on Hutchins's long-standing interest in learning as the internalization of external processes. The gap becomes visible in his framework as a specific structural consequence of redistributing cognitive labor across the human-AI boundary: the redistribution that absorbs the labor also absorbs the developmental opportunity the labor provided.
Labor as developmental practice. In the old system, performing cognitive labor was simultaneously the mechanism of acquiring the capacity to perform it.
The electronic navigation precedent. Officers trained on automated systems developed operational proficiency without the deep understanding manual practice built — a documented pattern with direct AI implications.
Evaluation without engagement. AI-augmented workflows permit approval of outputs without the depth of engagement that would develop evaluative capacity over time.
New forms of deliberate practice. Expertise in the AI age requires deliberately designed developmental engagements — not merely working with the tool, but working in ways that build domain understanding independent of tool availability.
Generational consequences. The gap compounds across generations: practitioners trained exclusively in AI-augmented workflows may lack the depth of understanding to direct the tools they depend on.