The friction requirement is the bridge between Ericsson's empirical research and the challenge AI poses to human development. It states, with the precision of a research program that has been testing it across domains for forty years, that difficulty is not optional for development. The specific form of the difficulty must satisfy identifiable conditions — it must be effortful, targeted at the boundary of capability, feedback-rich, and open to iterative refinement — but the presence of difficulty is non-negotiable. When these conditions are present, the cognitive system is forced to adapt, and the representational architecture of expertise is constructed. When they are absent, the system maintains its existing architecture regardless of how many hours of practice accumulate. This is the counterintuitive truth that the desirable-difficulties research of Robert and Elizabeth Bjork has documented across decades of replication: conditions that make practice feel smoother and more productive in the moment frequently make it the least developmental, while conditions that make practice feel difficult and discouraging frequently produce the most durable and transferable learning.
There is a parallel reading that begins from who actually experiences friction and who purchases its removal. The friction requirement operates as a class sorting mechanism, not a neutral developmental principle. Those with resources will always buy their children tutors, camps, and coaches who preserve the exact conditions of deliberate practice while smoothing every other rough edge. The child of means gets personalized feedback loops, carefully calibrated challenges, and someone whose job is to maintain engagement at precisely the right difficulty gradient. Meanwhile, those without resources get two choices: the unmediated friction of trying to learn alone, which rarely satisfies Ericsson's conditions because feedback is absent or confounded, or the AI assistant, which removes friction entirely.
The laparoscopic surgery example perfectly illustrates this dynamic — it's a story about credentialed professionals whose institutions could afford million-dollar equipment and whose training programs could be restructured around new difficulties. But most work isn't surgery. The administrative assistant using AI to write emails, the junior developer using Copilot to meet deadlines, the contract writer using ChatGPT to hit quotas — these workers don't get to choose whether friction ascends. They're not directing grand strategies or making architectural decisions. They're using AI because their employers demand productivity. The friction hasn't ascended for them; it's been extracted and concentrated in a different class of worker entirely. The real friction now is economic: the struggle to remain employed as the boundaries of what requires human judgment continuously contract. This isn't a story about humanity choosing whether to preserve developmental conditions. It's about markets sorting who gets development and who gets efficiency.
The requirement has specific content. Interleaved practice — alternating between different skills within a session rather than practicing each in isolation — produces worse immediate performance and dramatically better long-term retention. Variable practice conditions produce rougher sessions and more flexible representations. Spaced practice feels less efficient and produces more durable learning. Delayed feedback builds diagnostic skills that immediate correction short-circuits. In every case, the pattern is the same: the condition that feels harder produces better development; the condition that feels smoother produces better momentary performance and worse long-term outcomes. Performance and learning are not merely different — under well-specified conditions, they are inversely related.
The implication for AI is structural. When a developer uses Claude to produce a working function, the four conditions of deliberate practice are eliminated in a single stroke. The work is not effortful in the developmental sense. The boundary of capability is not targeted. The feedback is a finished product rather than diagnostic information about the gap between attempt and result. There is nothing to iteratively refine. Every condition that Ericsson's research identifies as necessary for representational construction has been removed — not because the tool is flawed but because the tool is excellent. It handles the difficulty so well that the human never encounters it.
The common response is that the friction was always unnecessary — a byproduct of inadequate tools that better tools should rightly remove. Edo Segal's laparoscopic surgery example addresses this directly: when laparoscopic techniques replaced open surgery, the tactile friction of hands-in-body was eliminated, but a different difficulty replaced it — interpreting 2D images of 3D space, coordinating instruments without direct proprioceptive feedback. The friction ascended. Crucially, the new difficulty satisfied all four conditions of deliberate practice, so the surgeons who mastered laparoscopic technique built genuine representations at the new level.
The question Ericsson's framework forces is whether the AI transition follows this pattern. Does the friction ascend — relocating to a higher cognitive level that still satisfies the conditions for deliberate practice? Or does it disappear — leaving practitioners at a level where the developmental conditions are structurally absent? The answer is conditional and specific. When directing AI, evaluating its output, and making judgment calls about what to build satisfy the four conditions, the friction has genuinely ascended. When the direction is routine, the evaluation superficial, and the feedback delayed and confounded, the friction has not ascended but evaporated. The difference is not visible in the output. It is visible only in what the practitioner becomes.
The desirable-difficulties construct was developed by Robert and Elizabeth Bjork at UCLA across multiple decades of motor learning, classroom instruction, and athletic coaching research. Ericsson's framework provides the theoretical architecture that explains why the Bjorks' empirical findings hold: difficulty is the mechanism through which mental representations are constructed, not an inefficiency to be optimized away.
Difficulty as mechanism. The cognitive struggle at the boundary of capability is the engine of representational growth, not a cost to be minimized.
Performance-learning inverse. Conditions optimizing current performance often minimize long-term development; conditions optimizing development often impair current performance.
Four-condition test. Effort, boundary-targeting, specific feedback, and iterative refinement must be simultaneously present for development to occur.
Ascending vs. evaporating. Friction either relocates to a higher cognitive level that preserves the conditions (as in laparoscopic surgery) or disappears (as in default AI use); only the first produces genuine expertise at the new level.
Subjective signal unreliable. Practitioners systematically prefer conditions that feel productive (smooth AI-assisted work) over conditions that produce development (uncomfortable independent struggle).
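The four-condition test above is conjunctive: development requires all four conditions at once, and the absence of any one reduces practice to mere performance. A minimal sketch of that logic, with hypothetical names (`PracticeCondition`, `is_developmental` are illustrative, not from any source):

```python
from dataclasses import dataclass

@dataclass
class PracticeCondition:
    """One practice setup, scored against Ericsson's four conditions."""
    effortful: bool            # demands real cognitive strain
    boundary_targeted: bool    # aimed at the edge of current capability
    diagnostic_feedback: bool  # reveals the gap between attempt and result
    iterative_refinement: bool # allows repeated, adjusted attempts

    def is_developmental(self) -> bool:
        # All four must hold simultaneously; a single missing condition
        # yields performance without representational growth.
        return all([self.effortful, self.boundary_targeted,
                    self.diagnostic_feedback, self.iterative_refinement])

# Default AI-assisted work, as characterized above: all four conditions absent.
ai_default = PracticeCondition(False, False, False, False)

# Laparoscopic retraining: friction ascended and all four conditions preserved.
laparoscopy = PracticeCondition(True, True, True, True)
```

The conjunctive structure is the point: a setup that is effortful but feedback-poor, or feedback-rich but routine, fails the test just as surely as one missing every condition.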
The friction requirement's validity depends entirely on which question we're asking. If the question is "what conditions produce expertise?" then Ericsson's framework is essentially correct (95%) — four decades of research across domains confirms that these specific difficulties drive representational development. The contrarian view barely registers here; the science is settled. But if the question shifts to "who gets access to developmental friction?" the weighting inverts (80% contrarian) — market dynamics and class structures largely determine whether someone's AI use preserves or eliminates growth conditions.
The middle ground emerges when we ask about professional domains. In fields like medicine, law, or engineering, where credentials gate access and liability enforces standards, friction does tend to ascend (70% Ericsson). Institutions rebuild training around new tools while preserving developmental rigor. But in routine knowledge work — the bulk of employment — friction evaporates rather than ascends (75% contrarian). The junior analyst who once learned by struggling through spreadsheets now prompts AI for answers, never building the representational foundation that would let them evaluate those answers expertly.
The synthetic frame is that the friction requirement operates within institutional and economic constraints. It's not that Ericsson is wrong about development or that the contrarian is wrong about access — both are right about different layers of the same phenomenon. The question isn't whether friction is required (it is) or whether it will be preserved (it won't be universally). The question is how societies structure the preservation of developmental conditions when markets incentivize their elimination. The friction requirement isn't just a learning principle; it's a design challenge for institutions that must balance immediate productivity with long-term human capability.