Mastery is not a state but a process. In Gee's framework, which draws on learning science across cognitive psychology, linguistics, and skill acquisition, it is a four-stage cycle that repeats across the career of any practitioner in any complex domain. The practitioner performs (attempts the task). The performance fails (necessarily, because perfect performance would have nothing to teach). Feedback arrives (specific information about the shape of the failure). Reflection integrates the feedback into a revised model. The cycle repeats with the improved model, which produces different failures, which generate different feedback, which prompts further reflection. Thousands of iterations produce the geological deposit that constitutes mastery — the thin layers that are individually invisible but cumulatively form the bedrock on which expert judgment stands.
Each stage of the cycle depends on the one before it. Performance without the possibility of failure is mere demonstration — it teaches nothing because it reveals nothing about the gap between current ability and task demand. Failure without feedback is only frustration — the practitioner knows something went wrong but cannot extract the specific information needed to improve. Feedback without reflection is data lost — the information arrives but is not integrated into the practitioner's model of the domain. And reflection without return to performance is philosophy, not practice — the revised model is never tested against reality and so never refined further.
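The sequential dependence of the stages can be pictured as a toy loop. This is a minimal sketch in Python, with every function name invented for illustration (none comes from Gee or any library): each stage consumes the output of the one before, and a success short-circuits the cycle before any new layer is deposited.

```python
# Toy model of the four-stage mastery cycle. Every name here is
# invented for illustration; nothing is drawn from a real framework.
# The "task" is estimating a hidden target; the "model" is the
# practitioner's current estimate.

def perform(model):
    """Stage 1: performance. The practitioner attempts the task."""
    return model

def fail(attempt, target):
    """Stage 2: failure. The gap between attempt and task demand."""
    return target - attempt  # zero means success: nothing new to learn

def give_feedback(gap):
    """Stage 3: feedback. Specific information about the failure's shape."""
    return gap * 0.5  # a partial correction, not the complete answer

def reflect(model, correction):
    """Stage 4: reflection. Integrate the feedback into a revised model."""
    return model + correction

def mastery_cycle(model, target, iterations):
    for _ in range(iterations):
        attempt = perform(model)
        gap = fail(attempt, target)
        if gap == 0:
            break  # success confirms the model but deposits no layer
        model = reflect(model, give_feedback(gap))
    return model

# Each pass closes half the remaining gap: layers individually thin,
# cumulatively decisive.
print(round(mastery_cycle(0.0, 100.0, 20), 3))  # → 100.0
```

The point of the sketch is structural, not numerical: delete any one function and every later stage is starved of its input, which is the sequential dependence the paragraph describes.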
AI interrupts the cycle specifically at the failure stage, and the interruption propagates through every subsequent stage. When Claude writes the function and it works, there is no failure. Without failure, there is no failure-specific feedback. Without feedback, there is no reflection on what the failure revealed. The cycle does not produce zero learning — the practitioner may learn about direction, about evaluation, about how to describe the problem clearly. But the learning that occurs is learning about using the tool, not learning about the domain the tool is operating within.
Segal's geological metaphor in The Orange Pill is precise: every hour spent debugging deposits a thin layer of understanding; the layers accumulate over months and years into something solid. The metaphor captures why the loss of the cycle is not immediately visible. Remove a single layer and nothing changes. Remove a year's worth of layers and the surface still looks the same. The bedrock appears solid until the ground is tested — until a novel problem, an unusual failure, a situation requiring deep judgment reveals that the foundation is thinner than it appeared.
The analogy to well-designed video games illuminates why the failure stage specifically is irreplaceable. In a good game, the failure state is where the game communicates its underlying logic to the player. The player learns how gravity works by falling. She learns how enemies behave by being defeated. She learns how puzzle components interact by assembling them incorrectly. Each failure is a lesson delivered in the most effective format possible: experiential, immediate, specific, and embedded in a context that makes the lesson meaningful. Remove the failure state and what remains is not a game. It is a movie — events the player watches but does not participate in.
The four-stage cycle is not original to Gee — similar frameworks appear in Kolb's experiential learning theory (1984), Schön's reflection-in-action (1983), and Ericsson's work on deliberate practice. Gee's contribution was to show how well-designed video games operationalize all four stages with a precision that formal education rarely achieves, and to articulate why the cycle cannot be compressed or skipped without thinning the learning it produces.
Sequential dependence. Each stage requires the stages before it; skipping any stage degrades every subsequent round.
Failure carries the information. Success confirms the existing model; failure specifies the gap between model and reality.
Geological deposit. Mastery accumulates as thin layers across thousands of iterations; no single iteration is visibly formative.
AI interrupts at failure. The tool is optimized to produce success, which eliminates the stage where the cycle's information content is highest.
Performance differs from observation. The cognitive work of performing with failure is categorically different from the cognitive work of directing a tool that performs on the practitioner's behalf.
A practical question is whether AI can be deliberately designed to support rather than interrupt the mastery cycle — functioning as a tutor that offers hints without handing over solutions, that scaffolds the practitioner's passage through difficulty without eliminating the difficulty itself. Some educational AI tools attempt this explicitly. Whether such designs can scale against the structural temptation to provide the complete solution (which users prefer, which managers reward, which markets favor) is the governance question that determines whether the cycle is preserved at civilizational scale or eroded by default.
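One way to make the tutor design concrete is a bounded hint ladder: hints escalate only with repeated failure and never reach the full solution. A toy sketch, with the hint text and every name hypothetical:

```python
# Toy sketch of a bounded hint-ladder policy: hints escalate with
# repeated failure but never reach the complete solution.
# All names and hint strings are hypothetical.

HINTS = [
    "Re-read the error message: which line does it point to?",
    "Check the boundary case: what happens when the input is empty?",
    "The bug is in the loop condition, not the loop body.",
]

def next_hint(failed_attempts):
    """Return the strongest hint earned so far, never the solution."""
    if failed_attempts == 0:
        return None  # no failure yet: nothing to scaffold
    level = min(failed_attempts, len(HINTS)) - 1
    return HINTS[level]
```

The design choice worth noting is the bound: unlimited failures earn the strongest hint, never the answer, so the failure stage and the learning it carries stay with the practitioner.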