Anders Ericsson — On AI
Contents
Cover
Foreword
About
Chapter 1: The Architecture of Expertise
Chapter 2: The Friction Requirement
Chapter 3: The Knowledge That Lives in Struggle
Chapter 4: Performance Without Learning
Chapter 5: When Feedback Closes Too Fast
Chapter 6: The Taxonomy of Practice in an AI-Saturated World
Chapter 7: What Ascending Friction Requires
Chapter 8: The Floor Rises, the Ceiling Remains
Chapter 9: The Teacher, the Tool, and the Design of Difficulty
Chapter 10: The Future of Mastery
Epilogue
Back Cover
Cover

Anders Ericsson

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Anders Ericsson. It is an attempt by Opus 4.6 to simulate Anders Ericsson's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The muscle that matters most right now is the one nobody is training.

I don't mean judgment, though judgment matters. I don't mean taste or vision or the capacity to ask good questions, though I've spent an entire book arguing for those. I mean something more fundamental. The muscle that turns a novice into someone whose instincts you'd trust with your life. The mechanism that deposits understanding into your body, layer by thin layer, until you can feel a system failing before you can explain why.

Anders Ericsson spent forty years mapping that mechanism. He called it deliberate practice, and he studied it with the precision of a physicist studying gravity. Not the pop-culture version — not "ten thousand hours" reduced to a bumper sticker. The real thing. The uncomfortable finding that expertise is not accumulated through experience but constructed through a very specific kind of struggle. Effortful. Targeted at the boundary of what you can almost do but not quite. Fed by feedback precise enough to guide adjustment. Repeated until cognitive structures form that no shortcut can replicate.

Here is why this matters right now, today, in the middle of the transformation I described in The Orange Pill.

AI has made the struggle optional.

That single sentence keeps me up at night. Not because the tools are bad — they are extraordinary. Because the tools are so good that they remove the very friction through which expertise is built. The debugging session that deposited one thin layer of architectural intuition. The failed attempt that forced you to understand a connection between systems you hadn't previously grasped. The hours of wrestling with a problem that the machine now solves in seconds.

The output is preserved. The development is not.

Ericsson never saw Claude Code. He died in June 2020, three years before the world his research explained was turned inside out. But the logic of his framework generates predictions about exactly what happens when you decouple production from development — and those predictions are now being confirmed by evidence he did not live to see.

This book follows that logic into territory the AI discourse has mostly avoided. Not whether the tools work. They do. Not whether they're valuable. They are. But whether the conditions under which human beings develop genuine mastery — the kind that lets you detect when the machine is subtly wrong, the kind that no subscription can replace — can survive in a world optimized for their elimination.

That question deserves the rigor Ericsson brought to everything he studied. This book attempts to honor it.

-- Edo Segal × Opus 4.6

About Anders Ericsson

1947–2020

K. Anders Ericsson (1947–2020) was a Swedish-born psychologist who spent his career at Florida State University studying the nature and acquisition of expert performance. His research program, spanning four decades, established that exceptional skill across domains — from chess to surgery to music — is not primarily the product of innate talent but of deliberate practice: structured, effortful engagement at the boundaries of current capability, guided by feedback and sustained over years. His landmark 1993 paper "The Role of Deliberate Practice in the Acquisition of Expert Performance," co-authored with Ralf Krampe and Clemens Tesch-Römer, became one of the most cited papers in the psychology of expertise and provided the empirical foundation later popularized (and simplified) as the "ten thousand hours rule." His 2016 book Peak: Secrets from the New Science of Expertise, co-authored with Robert Pool, brought his findings to a general audience. Ericsson's work transformed the scientific understanding of human potential, demonstrating that the cognitive architectures underlying mastery are built rather than born — a finding whose implications have become urgently consequential in the age of artificial intelligence.

Chapter 1: The Architecture of Expertise

Expert performance is not what most people think it is. It is not the accumulation of experience. It is not the passive absorption of knowledge through years spent in a domain. It is not talent, if by talent one means an innate capacity that unfolds automatically given sufficient exposure. Expert performance is the progressive construction of mental representations — sophisticated internal models of the domain that allow the expert to perceive patterns invisible to the novice, to anticipate outcomes before they manifest, to make decisions with a speed and accuracy that appears intuitive but is actually the product of extensive, structured training. This distinction between what expertise appears to be and what it actually is constitutes the central finding of K. Anders Ericsson's research program across four decades, and it is the finding that the arrival of artificial intelligence threatens to render simultaneously more important and less applicable than at any previous moment in human history.

The research program that established this finding began not with grand theoretical ambitions but with a simple empirical puzzle. How do chess masters remember board positions so much better than novices? The answer, established through careful experimentation by Adriaan de Groot and later extended by William Chase and Herbert Simon — the same Herbert Simon who was simultaneously one of the founding figures of artificial intelligence — turned out to be both more precise and more consequential than the question suggested. Chess masters do not have better memories in any general sense. Their superiority is entirely domain-specific and entirely dependent on the structural meaningfulness of the material. Present a chess master with a position from an actual game, and the master can reproduce it after a brief glance with remarkable accuracy while the novice flounders. Present the same master with a random arrangement of pieces — a position that could not arise from actual play — and the master's advantage disappears almost entirely. The memory is not general. It is structural. What the master has built, through years of deliberate engagement with the domain, is an elaborate architecture of patterns — a library of meaningful configurations that allows the perception of the board not as twenty or thirty individual pieces but as a small number of meaningful chunks, each carrying implications for strategy, risk, and opportunity.

This architecture is what Ericsson termed the expert mental representation: a rich, flexible, deeply structured internal model of the domain that encodes not merely what things are but what they mean, what they imply, what typically follows from them, and what responses they demand. The chess master's mental representations encode not just piece positions but dynamic relationships — the tension between a bishop pinned against a king and the knight threatening to exploit the pin, the strategic implications of a pawn structure that constrains the opponent's development. The surgeon's mental representations encode not just anatomical structures but the feel of healthy tissue versus diseased tissue, the visual signature of adequate blood flow versus ischemia, the proprioceptive feedback that indicates the angle of approach is correct versus dangerously off. The musician's mental representations encode not just notes on a page but the dynamic arc of a phrase, the way a slight ritardando before a cadence creates a sense of arrival, the timbral quality that distinguishes a merely correct performance from one that communicates the emotional architecture of the composition.

In each case, the mental representation is built through struggle. Through the specific resistance of problems that exceeded the practitioner's current capability and forced the development of new cognitive structures. The chess master's pattern library was not acquired by watching games. It was acquired by playing games, by losing games, by encountering positions that defied the master's existing understanding and demanded the construction of new patterns to accommodate what the old patterns could not explain. The surgeon's proprioceptive knowledge was not acquired by observing surgery. It was acquired by performing surgery, by feeling the difference between adequate and inadequate tissue tension, by making errors that produced consequences specific enough to force the revision of the internal model.

This is the critical feature of expert mental representations: they are built through struggle. Not through mere exposure, not through repetition of what is already comfortable, not through the passive absorption of information that flows without resistance from source to recipient. The struggle is not an unfortunate byproduct of the learning process that could ideally be eliminated. The struggle is the mechanism through which the cognitive structures of expertise are constructed. Without the struggle, the structures do not form. The exposure may occur. The hours may accumulate. But the internal architecture that distinguishes the expert from the experienced non-expert remains unbuilt.

Ericsson spent his career demonstrating this finding across domains so varied that the finding's generality became difficult to dispute. Violinists at the Berlin Academy of Music. Competitive swimmers. Radiologists interpreting mammograms. Typists competing in speed tournaments. Taxi drivers navigating London. In every domain, the same mechanism: expert performance was not a function of innate capacity but of the quantity and quality of deliberate practice — practice that was effortful, targeted at specific weaknesses, informed by feedback, and sustained over extended periods. The ten-thousand-hour figure that became attached to this research through Malcolm Gladwell's popularization was always an approximation, a rough average derived from studies of elite musicians. Ericsson himself spent considerable effort clarifying that the number was never the point. The quality of the engagement was the point. Ten thousand hours of unfocused repetition produces reliable mediocrity. Ten thousand hours of deliberate practice produces expertise. The number was a way of communicating the magnitude of the investment. The mechanism was what mattered.

That mechanism is now under threat.

In The Orange Pill, Edo Segal describes a senior software architect who had spent twenty-five years building systems and who could "feel a codebase the way a doctor feels a pulse — not through analysis but through a kind of embodied intuition that had been deposited, layer by layer, through thousands of hours of patient work." The geological metaphor Segal employs — the idea that every hour spent debugging deposits a thin layer of understanding, and these layers accumulate over months and years into something solid, something you can stand on — is an almost perfect description of how expert mental representations are constructed in Ericsson's framework. Each layer is thin. Each individual encounter with a bug, a system failure, an unexpected behavior contributes only a small increment to the overall architecture. But the layers compound. They interact. They form a substrate that enables perception and judgment at a level that the individual layers, considered separately, could not support.

The senior engineer who looks at a codebase and feels that something is wrong before she can articulate what is wrong is standing on thousands of those layers. The feeling is not a guess. It is the output of an architecture so complex that it operates below the level of conscious articulation, producing evaluations that are experienced as intuition but are actually the product of an enormously sophisticated pattern-matching process built through years of deliberate engagement.

When artificial intelligence entered this domain — when Claude Code and its peers made it possible for a developer to describe a function in plain English and receive working code in seconds — the output was preserved. The code was produced. In many cases, the code was better than what the developer would have written alone. But the mechanism that built the mental representations was bypassed entirely. The developer did not struggle with the problem. She did not encounter the error message that forced a revision of her understanding. She did not feel the resistance that would have deposited the next thin layer. The output was delivered. The development was not.

This is not a conservative complaint about the good old days. It is a precise claim about a specific mechanism — the mechanism through which expert-level cognitive architectures are constructed — and the observation that this mechanism requires conditions that artificial intelligence, in its default mode of operation, systematically eliminates.

The question that follows is not whether AI is valuable. It is enormously valuable. The question is whether the conditions under which human expertise develops can survive in environments where AI handles the difficult work — the work that, in Ericsson's framework, is precisely the work that builds the cognitive structures expertise requires. The answer depends on understanding, with considerably more precision than the current discourse provides, exactly what those conditions are, exactly how AI disrupts them, and exactly what structures might preserve them in a world that has made effortlessness the default aesthetic.

Ericsson died in June 2020, before the generative AI revolution. He never had the opportunity to address publicly how large language models might challenge or confirm his framework. But the logic of that framework generates specific, testable predictions about what will happen when the friction of deliberate practice is removed at scale. Those predictions are now being confirmed by empirical evidence — evidence that the science of expertise anticipated decades before the technology that would test it had been built. The chapters that follow trace these predictions, examine the evidence, and ask what Ericsson's life's work implies for a world in which machines can perform at expert levels without having undergone the developmental process that expertise, in every human domain, has always required.

---

Chapter 2: The Friction Requirement

The most important finding in the science of expertise is also the most counterintuitive. Practice only produces improvement when it is effortful, focused, and targeted at the boundaries of current capability. Practice that is comfortable — that operates within the zone of competence rather than at its edges — does not produce development. It merely reinforces existing patterns. This principle, established through decades of research across domains ranging from chess to music to medicine to athletics, is the foundation upon which Ericsson's entire framework rests. It is also the principle that artificial intelligence most directly threatens.

The distinction can be stated with deceptive simplicity. There is a difference between doing something and getting better at doing something. Most people confuse the two. They assume that performing an activity repeatedly will automatically produce improvement. The assumption is so deeply embedded in common sense that it seems almost too obvious to question. Of course practice makes perfect. Of course experience produces expertise. Of course the person who has done something for twenty years is better than the person who has done it for two.

Except the evidence says otherwise. The evidence, accumulated across hundreds of studies examining thousands of practitioners in dozens of domains, demonstrates that experience alone does not reliably produce expertise. Many practitioners reach a level of acceptable performance relatively early in their careers and then plateau, performing at roughly the same level for the rest of their working lives regardless of how many additional years of experience they accumulate. The physician with twenty years of experience is not automatically better than the physician with five. In some studies, the physician with twenty years of experience is actually worse — not because experience is harmful but because the absence of the specific conditions that produce improvement allows bad habits to solidify and outdated approaches to persist unchallenged.

Ericsson identified four conditions that separate practice that produces improvement from practice that merely maintains current performance. These conditions are specific, empirically established, and jointly necessary.

First, the practice must be effortful. It must demand concentration that taxes the practitioner's current cognitive resources. If the activity can be performed on autopilot, if the practitioner can execute it while thinking about something else, it is not producing development. The effort is the signal that the cognitive system is being pushed beyond its current capacity, and it is precisely this pushing that forces the system to adapt by constructing new cognitive structures.

Second, the practice must be targeted at the boundaries of current capability. Not far beyond those boundaries — overwhelming difficulty produces frustration and shutdown, not growth. Not well within those boundaries — comfortable performance reinforces existing patterns without adding new ones. At the boundary. In the zone where the practitioner can almost do the thing but not quite, where success requires stretching slightly beyond what is currently possible, where failure is frequent enough to be informative but not so frequent as to be demoralizing.

Third, the practice must provide feedback that is specific enough to guide improvement. The practitioner must be able to see, in some form, the gap between what was intended and what was achieved. Without this feedback, the practitioner cannot identify the specific aspects of performance that need to change, and the practice degenerates into repetition without refinement.

Fourth, the practice must allow for repetitive refinement. The practitioner must have the opportunity to try again, to apply the feedback from the previous attempt, to test whether the adjustment produced the desired improvement, and to receive new feedback on the adjusted performance. This iterative cycle — attempt, feedback, adjustment, new attempt — is the engine of deliberate practice.

These conditions are not arbitrary. They are the empirically identified requirements for the kind of cognitive adaptation that expertise demands. When all four conditions are present, practice produces reliable, measurable improvement that continues for years, even decades. When any condition is absent, improvement either stalls or fails to begin.

Now consider what happens when artificial intelligence enters the practice environment.

A developer sits down to write a function. In the pre-AI world, the process unfolds through a sequence of productive failures. She writes the function. It does not work. An error message appears — cryptic and unhelpful and sometimes maddening, telling her that something has gone wrong without telling her precisely what. She reads the error. She examines the code. She forms a hypothesis about the source of the problem. She tests the hypothesis. It is wrong. She forms another hypothesis. She reads documentation. She searches for similar problems. Eventually, hours or sometimes days later, the function works.

In those hours or days, all four conditions for deliberate practice were present. The work was effortful — the developer could not perform it on autopilot because the problem was novel. The work was targeted at the boundary of capability — the developer was attempting something she could almost do but not quite. The work provided specific feedback — the error messages told her something about the gap between intention and execution. And the work allowed for repetitive refinement — she could try again, apply what she learned, and iterate toward a solution.
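To make the loop concrete, here is a minimal sketch in Python of the kind of informative failure involved. The buggy function and its error are hypothetical illustrations, not drawn from any study the text cites; the point is that the failure names the symptom while leaving the diagnosis and the correction to the developer.

```python
# A minimal, hypothetical illustration: a binary search with an
# off-by-one bug in its upper bound.

def find_index(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items)            # bug: hi should start at len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:      # crashes when mid == len(items)
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# The failing run supplies the feedback: it names the symptom
# ("IndexError: list index out of range") but not the cause.
# Working out why mid can reach len(items) is the developer's work.
print(find_index([1, 3, 5, 7], 9))
```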

Each iteration deposited one thin layer of understanding. The layer was thin. Any single debugging session contributed only a marginal increment. But the layers accumulated. They interacted. They formed the substrate upon which increasingly sophisticated mental representations were built.

Claude removes this process. The developer describes the function. Claude writes it. It works. She moves on.

The output is identical — perhaps even superior. But the four conditions for deliberate practice have been eliminated simultaneously. The work was not effortful — the developer described what she wanted, and the result appeared. The work was not targeted at the boundary of capability — the tool handled the problem without requiring her to engage with it. The work did not provide feedback on the developer's performance — it provided a finished product, which is a fundamentally different thing. And the work did not allow for repetitive refinement of the developer's skills — there was nothing to refine, because the developer did not perform the skill.

The frustration that characterized the pre-AI development process was not a byproduct to be optimized away. It was the signal that the practice was operating at the boundary of capability — exactly where development occurs. The frustration was the subjective experience of cognitive structures being forced to adapt. When the tool eliminates the frustration, it eliminates the developmental signal along with it.

The common response to this analysis is that the frustration was always unnecessary — that it was a byproduct of inadequate tools, and that better tools should eliminate it just as better surgical instruments eliminated certain mechanical difficulties. This response reveals a misunderstanding of what the difficulty was doing. The frustration was not caused by inadequate tools. It was caused by the gap between the practitioner's current understanding and the understanding the problem demanded. This gap is the condition for growth. When the tool closes the gap by handling the problem without requiring the practitioner to understand it, the tool has not removed an obstacle to growth. It has removed the condition for growth itself.

Ericsson's research makes a prediction here that can be stated with some precision. Practitioners who rely heavily on AI to handle the difficult aspects of their work will show a specific pattern: high current performance (because the tool is competent) and low developmental trajectory (because the conditions for deliberate practice have been eliminated). They will be productive without becoming expert. They will generate output without constructing the mental representations that would enable them to evaluate, adapt, and improve that output independently.

Emerging evidence is consistent with this prediction. Kartik Hosanagar, a Wharton professor studying AI's effects on professional capability, reported in 2025 that endoscopists who regularly used AI for polyp detection became measurably worse at finding polyps when the AI was turned off, with adenoma detection rates dropping from twenty-eight percent to twenty-two percent. Students who practiced math with unrestricted access to GPT-4 initially performed better but, once access was removed, underperformed compared with peers who had never used AI. The pattern is precisely what the deliberate practice framework predicts: current performance enhanced, underlying capability eroded, because the tool removed the conditions under which capability develops.

The difficulty is not the enemy. The difficulty is the mechanism. Removing the difficulty removes the mechanism. And a world optimized for the removal of difficulty is a world in which the mechanism for building human expertise is being dismantled at the precise moment when the need for human judgment — the product of that expertise — has never been greater.

---

Chapter 3: The Knowledge That Lives in Struggle

There is a specific kind of knowledge that can only be acquired through struggle, and no amount of observation, instruction, or tool-assisted production can substitute for it. This claim sounds mystical. It is not. It is an empirical finding with precise implications for the question of what AI eliminates when it eliminates the difficult parts of professional work.

The knowledge in question is not declarative knowledge — not the kind of knowledge that can be stated in propositions and transmitted through language. Declarative knowledge can be looked up, verified, communicated. The capital of France is Paris. The boiling point of water at standard pressure is one hundred degrees Celsius. The time complexity of a binary search is logarithmic. AI handles declarative knowledge superbly. But declarative knowledge is not what distinguishes the expert from the competent practitioner.

The knowledge that distinguishes the expert is procedural and representational. It is the knowledge of how things behave — not how they are supposed to behave according to the documentation, but how they actually behave in practice, with all their emergent complexities and contextual dependencies. It is the knowledge that allows the master mechanic to diagnose an engine problem from the quality of a sound, the experienced physician to detect a rare disease from a constellation of symptoms that no textbook describes as a pattern, the senior engineer to sense that an architecture will not scale before anyone has run a load test. This knowledge is not stored as propositions. It is encoded in the mental representations that deliberate practice builds — complex, interconnected cognitive structures that operate below the level of conscious articulation.

The critical property of this knowledge is that it cannot be transferred directly. It cannot be stated, because it is not propositional. It cannot be demonstrated in a way that allows the observer to acquire it, because its essence lies not in the observable performance but in the internal model that guides the performance. It cannot be taught in any conventional sense, because the learner must construct the representations through the specific friction of engaging with problems that resist their current understanding. The teacher can design practice activities that create the conditions for this construction. The teacher can provide feedback that guides the process. But the teacher cannot transfer the representations themselves. They must be built from the inside, through effort, through failure, through the iterative cycle of attempt and correction that constitutes deliberate practice.

This is why apprenticeship traditions across cultures and centuries have universally required the apprentice to do the work, not merely to watch the master do it. The watchmaker's apprentice does not learn to repair watches by observing the master's hands. She learns by repairing watches badly, by feeling the resistance of mechanisms that do not respond as expected, by developing through thousands of fumbled attempts the proprioceptive sensitivity that allows her eventually to feel the difference between a properly tensioned spring and one that is slightly off. The medical resident does not learn to diagnose by reading textbooks. She learns by encountering patients whose presentations do not match the textbook patterns, by forming hypotheses that turn out to be wrong, by developing through hundreds of diagnostic failures the pattern-recognition capacity that eventually allows her to detect diseases that textbooks cannot adequately describe.

In The Orange Pill, Segal describes an engineer who lost something she did not know she had. Embedded in the routine work that Claude took over — dependency management, configuration files, the mechanical connective tissue between the components she actually cared about — were moments when something unexpected happened. An error that forced her to understand a connection between systems she had not previously grasped. These moments were rare. Perhaps ten minutes in a four-hour block. But they were the moments that built her architectural intuition — the procedural, representational knowledge that would later enable her to make sound judgments about systems she had never seen before.

This passage describes, with considerable accuracy, the mechanism through which mental representations are constructed in professional practice. The developmental moments are embedded within routine work. They are not scheduled. They are not predictable. They arise spontaneously from the friction between the practitioner's current model and the system's actual behavior. The practitioner does not know, in advance, which moments will be routine and which will be developmental. The four hours of plumbing contain three hours and fifty minutes of tedium and ten minutes of genuine learning, but the ten minutes cannot be identified in advance because they are defined precisely by their unpredictability — by the appearance of something the practitioner's current model did not anticipate.

When AI removes the routine, it removes the context within which the developmental surprises occur. The engineer no longer encounters the unexpected configuration error because she no longer does the configuration. The ten minutes of genuine learning disappear along with the three hours and fifty minutes of tedium, and the mental representations stop growing even as the output continues.

The loss is invisible. The engineer's output is unchanged or improved. She is producing more, delivering faster, taking on more ambitious projects. The metrics all point upward. The invisible thing — the architecture of understanding that would have been built through struggle — is not growing, and its failure to grow will not manifest until the moment when the architecture matters: when the AI fails, when the problem is novel, when the situation requires the deep understanding that only deliberate practice builds.

Months later, this engineer realizes she is making architectural decisions with less confidence and cannot explain why. She has the experience. She has the output history. She has the credentials. But the internal architecture has not kept pace, because the conditions for its growth were removed when the routine friction was automated away.

This phenomenon has a precise parallel in the expertise research literature. Studies of physicians who rely heavily on diagnostic decision-support systems show a consistent pattern: the physicians who use the systems most heavily show the highest diagnostic accuracy when the system is available and the lowest diagnostic accuracy when the system is unavailable. The tool did not degrade their existing representations. It prevented new ones from forming by removing the conditions under which representations are constructed. The endoscopist data Hosanagar reported — adenoma detection rates dropping from twenty-eight to twenty-two percent when AI assistance was removed — is exactly this pattern, observed in vivo rather than in the laboratory.

There is a further dimension that the expertise framework illuminates. Mental representations are not domain-specific facts. They are flexible cognitive structures that transfer across related problems. The chess master's representations do not merely allow recognition of positions seen before. They allow evaluation of novel positions by analogy to stored patterns. The surgeon's representations do not merely allow repetition of practiced operations. They allow handling of unexpected complications by drawing on deep understanding of tissue behavior and anatomical relationships. This transferability is the hallmark of genuine expertise, and it is built through the systematic variation that characterizes deliberate practice — encountering the same deep principles in many different surface configurations, which forces the representations to become abstract enough to apply across novel situations.

When AI removes the struggle, it also removes the variation. The developer who uses Claude to handle configuration problems does not encounter the systematic variation in configuration errors that would build transferable representations. Her experience is uniform: describe the problem, receive the solution, move on. This uniformity does not build the abstract, transferable representations that genuine expertise requires. The consequence is a specific vulnerability that emerges only when the practitioner faces a situation the tool cannot handle — and at that moment, the practitioner discovers that the representations needed to handle it independently were never constructed.

The knowledge that lives in struggle is not romantic nostalgia for a harder time. It is a specific, empirically documented form of cognitive architecture that only forms under specific conditions — conditions of effort, resistance, feedback, and variation — that AI, in its default mode of operation, systematically eliminates.

---

Chapter 4: Performance Without Learning

There is a distinction between output and development that the conversation about artificial intelligence has largely failed to make. The distinction is not subtle and it is not academic. It is the difference between what a practitioner can produce and what the practitioner has become in the process of producing it. A developer who uses AI to generate working code has achieved an output. She has not necessarily achieved the cognitive development that producing that code through deliberate practice would have provided. The output is visible, measurable, deployable. The cognitive development is invisible, internal, and manifests only in future performance under conditions the tool cannot handle. The two are so thoroughly conflated in the current discourse that separating them requires a degree of analytical precision that the discourse has not been willing to sustain.

The expertise research literature provides the vocabulary for this separation. The distinction is between performance and learning. Performance is what a practitioner can do now, with the tools and resources currently available. Learning is the change in internal cognitive structures that enables future performance without the tools, or in novel situations that the tools cannot handle. Performance and learning are not merely different. They are often inversely related. This finding — documented across dozens of experimental paradigms in the learning sciences — is among the most counterintuitive and consequential in the entire body of expertise research.

In motor learning, conditions that produce the smoothest, most error-free practice — blocked repetition of a single skill, consistent practice conditions, immediate guidance that prevents errors before they occur — produce the worst long-term retention and transfer. Conditions that produce rough, error-prone, frustrating practice — interleaved practice of multiple skills, variable conditions, delayed feedback that forces the learner to detect and diagnose errors independently — produce the best long-term retention and transfer. This finding, known in the learning sciences as the desirable difficulties framework, developed by Robert Bjork and extended by subsequent researchers, has been replicated so consistently that it constitutes one of the most robust findings in educational psychology.
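The schedule difference is simple enough to sketch. Below is a minimal illustration in Python, assuming three placeholder skills and a fixed number of repetitions; the skill names are invented, and nothing here models data from the studies described.

```python
# Illustrative only: blocked practice repeats one skill until it feels
# fluent; interleaved practice shuffles skills so that every attempt
# forces the learner to re-retrieve which method applies.

import random

skills = ["scales", "arpeggios", "sight-reading"]   # hypothetical skills
reps_per_skill = 4

blocked = [s for s in skills for _ in range(reps_per_skill)]
# -> ['scales', 'scales', 'scales', 'scales', 'arpeggios', ...]

interleaved = blocked.copy()
random.shuffle(interleaved)
# -> e.g. ['arpeggios', 'scales', 'sight-reading', 'scales', ...]

# Blocked practice feels smoother in the session; interleaved practice
# produces more errors now and better retention later.
print(blocked)
print(interleaved)
```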

The implication for AI is direct and uncomfortable. AI tools, by their nature, optimize for performance. They are designed to produce the best possible output given the user's input. They are not designed to optimize for the user's cognitive development. When a developer uses Claude to write a function, Claude does not produce a deliberately imperfect version designed to force the developer to detect and correct the imperfections. It does not introduce strategic errors requiring engagement at a level that builds understanding. It does not withhold the solution and provide hints forcing the developer to construct the solution herself. It produces the best function it can, as quickly as it can.

This is perfectly rational tool design. A tool should produce the best possible output. But the consequence of rational tool design is the systematic elimination of the conditions that produce learning. The practitioner becomes more productive in the immediate sense and less developed in the long-term sense. Because the immediate sense is the one that managers measure, clients value, and the practitioner herself experiences as satisfaction, the developmental cost remains invisible until the moment it manifests as a deficit.

That moment arrives in predictable circumstances. It arrives when the AI produces output that is subtly wrong — not wrong in ways that produce error messages or test failures, but wrong in ways that require deep understanding to detect. Segal describes such a moment in The Orange Pill: Claude produced a passage linking Csikszentmihalyi's flow state to a concept it attributed to Gilles Deleuze. The passage was eloquent, the connection was elegant, and the philosophical reference was wrong in a way that only someone who had actually read Deleuze would catch. The prose was smooth. The seam where the idea fractured was concealed by the smoothness. A practitioner without the mental representations to evaluate the output critically — representations built through the specific struggle of reading Deleuze carefully, of wrestling with the difficulty of the text, of constructing an understanding through effort — would have no mechanism to detect the failure.

The moment also arrives when the practitioner encounters a problem outside the distribution of problems the AI was trained on. Every AI system has boundaries beyond which its performance degrades, and those boundaries are not always visible to the user. The practitioner who has built her career on AI-assisted production discovers at this boundary that the competence she attributed to herself was actually the competence of the tool.

The magnitude of this cost is difficult to appreciate because it is distributed across many small increments. No single AI-assisted task produces a noticeable developmental deficit. The developer who uses AI to write one function has not measurably damaged her expertise. She has foregone one thin layer of understanding — one of the thousands that accumulate into the geological substrate of expert knowledge. One layer is negligible. But the layers compound. Over months and years, the practitioner who consistently delegates the difficult work accumulates a deficit of thousands of ungained layers, collectively constituting the difference between a practitioner who has built the mental representations of genuine expertise and one who has produced expert-level output without undergoing expert-level development.

The compounding is the danger. Each layer foregone makes the next layer slightly harder to acquire, because expert mental representations build on one another. The later layers presuppose the earlier ones. A developer's capacity to learn from an unexpected configuration error in month twelve presupposes the representations built from unexpected errors in months one through eleven. Skip the earlier encounters, and the later encounters become less informative, because the cognitive structures that would have made them informative were never built.

There is a further consequence that the expertise framework predicts with some specificity. The AI shortcut creates a systematic miscalibration of the practitioner's self-assessment. Every successful AI-assisted production reinforces the practitioner's sense that she understands the domain at the level the production represents. She asked for a function. The function works. The natural inference is that she has produced a working function, and the natural corollary is that she possesses the understanding that producing working functions implies. But the inference is wrong. She directed a tool. Her understanding is unchanged. The gap between her perceived competence and her actual competence widens with each AI-assisted production, and because the gap is invisible — because the outputs that ground perceived competence are real, working, deployable outputs — the miscalibration has no natural correction mechanism.

The result, stated in the terms of the expertise framework, is a workforce of practitioners who are more productive and less expert than they realize. More productive because the tools genuinely amplify capacity to produce. Less expert because the tools systematically eliminate the conditions under which expertise develops. And less aware of the gap because the tools provide the very outputs that practitioners use as evidence of their own competence.

The Communications of the ACM reported in 2025 that Carnegie Mellon researchers studying knowledge workers found precisely this pattern: practitioners reported that generative AI made tasks seem cognitively easier, but the researchers found they were ceding problem-solving expertise to the system and focusing instead on functional tasks like gathering and integrating responses. The practitioners experienced the collaboration as empowering. The researchers observed it as deskilling. Both observations were accurate. They were simply measuring different things — performance and learning, output and development — and the two had come apart in exactly the way that the deliberate practice framework predicts they will when the conditions for effortful engagement are removed.

This decoupling of performance from learning is, in the history of human skill acquisition, unprecedented. Every previous technology that enabled production also required the development that production entailed. The blacksmith who forged a blade also developed metallurgical understanding. The programmer who wrote code also developed computational understanding. Production and development were coupled. AI uncouples them. Production is now available without development. Output is available without understanding. Performance is available without learning.

The question this raises is not whether AI is useful — it is enormously useful — but whether the organizations and societies that depend on expert human judgment will recognize that producing expert-level output and developing expert-level practitioners are now two separate activities requiring two separate sets of conditions. Conditions optimized for the first may actively undermine the second. The organization that conflates them — that assumes productive practitioners are developing practitioners — will discover the error at the moment when genuine expertise matters most: when the tool fails, when the situation is novel, when the stakes are high enough that the difference between expert judgment and tool-dependent production determines whether the outcome is acceptable or catastrophic.

---

Chapter 5: When Feedback Closes Too Fast

Immediate feedback is one of the conditions Ericsson identified for effective deliberate practice. When a musician plays a wrong note, she hears it immediately. When a surgeon's incision deviates from the intended path, the deviation is visible in real time. When a chess player makes a weak move, the opponent's response reveals the weakness within minutes. This immediacy is crucial. Feedback that is delayed by hours or days loses much of its developmental power, because the learner cannot connect the feedback to the specific cognitive state that produced the action. The action recedes into memory, the cognitive context dissolves, and the correction that the feedback was supposed to guide becomes diffuse and ineffective.

AI provides feedback with unprecedented speed. Describe what you want, and the result appears in seconds. This immediacy would seem, at first glance, to be an unambiguous developmental advantage. If immediate feedback is good for learning, and AI feedback is the most immediate form of feedback available, then AI feedback should produce the best learning.

The reasoning is clean. It is also wrong. And the specific way in which it is wrong illuminates something important about the relationship between feedback and development that Ericsson's framework identifies but that the popular understanding of feedback tends to miss.

The critical distinction is between feedback that supports development and feedback that short-circuits it. The difference lies in what the feedback requires of the learner. Developmental feedback tells the practitioner that something went wrong and forces her to figure out why. It is specific enough to guide improvement but general enough to require the learner to construct the correction. The wrong note tells the musician that something is off but does not tell her which finger to move or how much pressure to apply. The surgical deviation tells the surgeon that the path is wrong but does not specify exactly how to correct it. The opponent's strong response tells the chess player that his move was weak but does not reveal what the better move would have been. In each case, the feedback creates a gap — a space between the error and the correction — and it is in this gap that the deepest learning occurs.

The gap is the zone of productive struggle. It is the space where the learner must engage her own cognitive resources to diagnose the problem, generate hypotheses about its source, and construct a corrective response. This engagement is not incidental to the learning process. It is the learning process. The musician who figures out why the note was wrong has learned something about the relationship between fingering, pressure, and tone that she could not have learned from being told the correct fingering. The surgeon who diagnoses the cause of the deviation has learned something about tissue response, instrument angle, and hand position that she could not have learned from being shown the correct path. The gap forces the construction of understanding, and the understanding constructed in the gap is deeper, more flexible, and more transferable than understanding provided directly.

AI feedback does not create this gap. It provides the solution directly, bypassing the constructive process entirely. The developer describes a problem. Claude returns a solution. The developer did not occupy the space between problem and solution. She did not diagnose the source of the difficulty. She did not generate hypotheses. She did not construct a corrective response through the iterative process of attempt, failure, and revision. The gap was closed before it could be experienced.

The speed of AI feedback creates a further problem that is less obvious but equally consequential for the development of expertise. When feedback is fast enough, the practitioner never experiences the productive uncertainty that precedes understanding. In traditional practice, there is a period between encountering a problem and solving it during which the practitioner lives with the problem — turns it over mentally, considers it from multiple angles, sleeps on it, returns to it with fresh perspective. Cognitive psychologists have documented this incubation period extensively. It is not wasted time. It is a phase of processing during which the brain continues to work on the problem below the level of conscious awareness, forming connections between the problem and seemingly unrelated knowledge, testing hypotheses that the conscious mind has not yet formulated. The sudden insight that arrives in the shower, during a walk, upon waking — the moment that feels like a gift from nowhere — is the product of this incubation process, and it represents a form of understanding that is deeper and more integrated than the understanding produced by immediate, conscious problem-solving.

AI eliminates the incubation period. The problem arrives. The solution arrives seconds later. The practitioner never lives with the problem. She never turns it over. She never sleeps on it. The connections that incubation would have formed — the unexpected links between the problem at hand and the practitioner's broader knowledge, the novel approaches that emerge from unconscious processing of a problem allowed to simmer — never form, because the problem is solved before the simmering can begin.

The result is a specific kind of cognitive impoverishment that is difficult to detect because it manifests as the absence of insights that would have occurred but did not. The developer who would have had a crucial insight about system architecture while walking the dog — the insight triggered by the incubation of a debugging problem she had been living with for two days — never has that insight because the debugging problem was solved by Claude in thirty seconds. The insight that was lost was never known to be possible, so its absence goes unnoticed. But the accumulation of such absences — hundreds of insights that would have emerged from the incubation of problems solved too quickly for incubation to occur — represents a significant developmental cost that cannot be measured because the counterfactual is invisible.

There is a further dimension of the feedback problem that Ericsson's framework illuminates with particular clarity: the role of error in the construction of expert knowledge. Errors are not merely failures to be corrected. They are information. Specific errors reveal specific gaps in the practitioner's understanding. The pattern of errors across many practice sessions reveals the structure of the practitioner's mental representations — where they are robust and where they are fragile, where they encode deep principles and where they rely on surface heuristics. A knowledgeable teacher or coach reads the error pattern the way a physician reads a symptom cluster: not as a list of individual problems but as a diagnostic picture revealing the underlying cognitive architecture and indicating what kind of practice will be most beneficial.

AI eliminates the practitioner's errors by eliminating the practitioner's production. When the tool produces the output, the practitioner does not make errors, because the practitioner does not produce the output. The errors that the tool makes are the tool's errors, not the practitioner's, and they do not reveal anything about the practitioner's mental representations. The diagnostic information that errors provide — the window into the practitioner's cognitive architecture — is lost. The practitioner's architecture becomes invisible, even to herself, because the outputs that would have revealed its structure are produced by the tool rather than by the practitioner.

This invisibility compounds the feedback problem. The practitioner not only lacks the developmental feedback that struggle provides. She also lacks the diagnostic feedback that error provides. She cannot see her own weaknesses, because her weaknesses never manifest as errors in her output. They manifest as weaknesses in her understanding, but understanding is invisible when the output is produced by a tool that does not require understanding. The practitioner may be profoundly confused about a domain and produce output that shows no confusion whatsoever, because the output reflects the tool's understanding rather than the practitioner's.

The implications for professional development are considerable. Organizations that rely on output quality as a proxy for practitioner competence — and virtually all organizations do — will systematically overestimate the competence of practitioners who use AI extensively. The output will be excellent. The practitioner's understanding may not be. And the gap between output quality and understanding quality will be invisible until the moment when understanding matters — when the tool fails, when the situation is novel, when the problem requires the practitioner to rely on her own cognitive architecture rather than the tool's.

None of this means that fast feedback is always harmful or that practitioners should artificially delay their access to solutions. Ericsson's framework is more specific than that. There are situations where immediate feedback serves development — situations where the learner is working at the appropriate level of challenge, where the feedback is informative about the specific dimension of performance the learner is trying to improve, and where the learner has the metacognitive awareness to use the feedback constructively rather than passively. And there are situations where immediate feedback undermines development — situations where the feedback closes the gap too quickly for constructive processing to occur, where the speed of the response prevents engagement at the depth that development requires, and where the fluency of the output creates the illusion of understanding where understanding has not been achieved.

The distinction matters because it is designable. AI systems could, in principle, be designed to create gaps rather than close them — to provide hints rather than solutions, to diagnose rather than fix, to make the practitioner's errors visible and informative rather than invisible and irrelevant. Such systems would function less like helpful assistants and more like skilled coaches, which is a fundamentally different design logic. The current design logic optimizes for the speed and quality of the output. A developmental design logic would optimize for the quality of the cognitive engagement the interaction produces, which sometimes means slower feedback, partial feedback, or feedback that deliberately preserves the gap between error and correction so that the practitioner must occupy that gap and grow within it.
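What a gap-preserving design might look like is easy to sketch. The following Python fragment is a hypothetical illustration, not a description of any existing tool's API: an assistant that withholds the solution, releases graduated hints only after a genuine attempt, and keeps the practitioner's record of errors visible.

```python
# Hypothetical sketch of a gap-preserving assistant. All names and
# structure are invented for illustration; no real system works this way.

from dataclasses import dataclass, field

@dataclass
class CoachingSession:
    problem: str
    hints: list                        # ordered from vague to specific
    attempts: list = field(default_factory=list)
    hint_level: int = 0

    def submit(self, attempt: str, passed: bool) -> str:
        """Record an attempt; release at most one new hint per real try."""
        self.attempts.append(attempt)          # errors stay visible
        if passed:
            return f"Solved after {len(self.attempts)} attempts."
        if self.hint_level < len(self.hints):
            hint = self.hints[self.hint_level]
            self.hint_level += 1
            return hint                        # diagnose; never hand over the fix
        return "Out of hints. Compare your attempts and look for the pattern."

session = CoachingSession(
    problem="Function crashes on an empty list",
    hints=["What inputs did you not test?",
           "Trace the first line executed when the list is empty."],
)
print(session.submit("added a loop guard", passed=False))
```

The inversion is the point: the resource being managed is the practitioner's engagement with the gap, not the speed of the answer.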

Whether such systems will be built depends on whether the distinction between performance feedback and developmental feedback is understood clearly enough by the people designing AI tools to be incorporated into their design. At present, the distinction is not widely understood. The default assumption — that faster, more complete feedback is always better — remains dominant. And as long as it remains dominant, AI feedback will continue to optimize for the closure of gaps that, from a developmental perspective, needed to stay open.

---

Chapter 6: The Taxonomy of Practice in an AI-Saturated World

Ericsson's research distinguishes three modes of practice, each defined by the quality of engagement between the practitioner and the domain. The distinctions are not mere categories. They describe fundamentally different developmental trajectories, each producing a different kind of practitioner over time, and the differences compound so dramatically that practitioners who begin at the same level diverge exponentially depending on which mode predominates in their training.

Naive practice is the most common mode and the least developmental. It is repetition without improvement. The pianist who plays through the same piece every evening, making the same mistakes in the same passages, reinforcing the same habits, never targeting specific weaknesses. The driver who has driven for twenty years and is no better than the driver who has driven for two. The teacher who delivers the same lesson plan year after year. The effort is real. The engagement may be sincere. But the effort is directed toward maintenance rather than development. The practitioner is operating within the zone of established competence, performing at the level already reached, without the specific targeting of weaknesses that would push beyond that level. The result is what Ericsson termed arrested development: a stable level of performance that persists indefinitely regardless of accumulated experience.

Purposeful practice is a significant step beyond naive practice. It is characterized by focused effort directed toward specific goals. The pianist identifies the passage that gives her trouble, isolates it, practices it at reduced tempo, targets the specific technical difficulty, works systematically to overcome it. The effort is not merely dedicated. It is directed. It has targets, benchmarks, criteria for success. Purposeful practice produces reliable improvement. But it has a limitation: it is self-directed. The practitioner identifies her own weaknesses, designs her own practice activities, evaluates her own progress. This self-direction is constrained by the practitioner's current level of understanding. She can only identify weaknesses she can perceive, and the most consequential weaknesses are often the ones she cannot perceive because perceiving them requires the very expertise the practice is supposed to develop.

Deliberate practice is the highest form and the rarest. It is distinguished from purposeful practice by the presence of a knowledgeable teacher or coach who can perceive what the practitioner cannot, design practice activities that target weaknesses the practitioner does not know she has, and provide the calibrated feedback that converts struggling effort into structured development. The teacher has two capacities the self-directed practitioner lacks: the capacity to perceive the gap between current and desired performance with the specificity that effective practice requires, and the capacity to design activities that close the specific gaps the perception reveals.

The question that matters for the AI transition is where AI-assisted work falls in this taxonomy. The answer is not flattering.

By default, AI-assisted work most closely resembles naive practice. The tool handles the difficult parts. The user handles the easy parts. The boundary of capability is never tested, because the tool's capability vastly exceeds the user's in the implementation dimension. The user directs. The tool executes. The direction may be competent, even skillful. But the user's understanding of the execution — the deep, procedural, representational knowledge that constitutes domain expertise — is not tested, not stretched, not developed.

This characterization requires a qualification. AI-assisted work is not naive practice in the traditional sense — the practitioner is not mindlessly repeating an established routine. The output is too sophisticated for that. But it is naive in the developmental sense: the practitioner is not engaging with the domain at the boundary of her capability, is not receiving feedback on her own performance, is not constructing new cognitive structures through the iterative process of effortful engagement. The tool is doing the developing. The practitioner is doing the directing. And directing, while genuinely demanding, builds a different set of representations than doing.

The distinction between the evaluative representations that directing builds and the constructive representations that doing builds is consequential. A developer who evaluates AI-generated code is building representations of what correct code looks like — an increasingly refined model of quality, style, and structural soundness. A developer who writes code is building representations of how to construct correct code — a model of the problem-solving process itself, including the false starts, the dead ends, the moments of confusion that eventually resolve into understanding. These are related but distinct forms of knowledge. The first is necessary but not sufficient for the second. A person can become an excellent judge of code without becoming an excellent writer of code, just as a person can become a sophisticated film critic without being able to direct a film.

The deliberate practice framework suggests that the constructive representations — the representations built through doing rather than evaluating — are the deeper and more transferable form of expertise. They encode not just the features of good performance but the process by which good performance is achieved, including the recovery from errors, the navigation of ambiguity, and the construction of solutions under conditions of genuine uncertainty. These process-level representations are what enable the expert to handle novel situations — situations that have not been encountered before and that therefore cannot be evaluated by reference to stored examples of correct performance.

Segal's account of his own collaboration with Claude oscillates instructively between these modes. When he sets clear goals, studies the output critically, and uses Claude's failures as learning opportunities — the Deleuze incident is the paradigm case — the collaboration approaches purposeful practice. The error and its detection added a layer to his understanding of how AI output can fail. But when the collaboration reverts to its default mode — describe the desired output, receive it, move on — it reverts to the developmental equivalent of naive practice, regardless of the sophistication of the output.

The distinction is in the user, not the tool. AI can support developmental engagement if the user maintains the effortful, critical, boundary-testing orientation that deliberate practice requires. But the tool's default mode — immediate, smooth, friction-free — actively works against that orientation. The tool is designed to make things easy. Deliberate practice requires things to be hard. The user who wants to develop through AI-assisted work must fight against the tool's native helpfulness, deliberately introducing friction where the tool removes it, actively seeking out the discomfort that the tool is designed to eliminate.

This is psychologically difficult. It requires a level of metacognitive awareness and self-discipline that most practitioners do not possess — not because they are weak or undisciplined but because the awareness and discipline in question are themselves forms of expertise that must be developed through practice. This creates a recursive problem: using AI in a developmentally productive way requires expertise in how to use AI developmentally, and developing that expertise requires the very deliberate practice the default mode undermines.

The recursive problem is not insurmountable. It can be broken by starting with small, self-conscious experiments in friction-seeking AI use and building from there. But it explains why so few practitioners have spontaneously developed the kind of AI-assisted practice that preserves the conditions for deliberate practice. The default is too strong. The path of least resistance leads to naive practice, and the path to deliberate practice requires active construction of a mode of engagement that the tools do not naturally support.

The organizational implications follow directly. If deliberate practice requires a knowledgeable guide who can design difficulty and provide targeted feedback, and if AI-assisted work defaults to naive practice without such guidance, then organizations that want their practitioners to develop expertise in the AI age must invest in structures that preserve the conditions for development. Not just tools but pedagogical structures: mentors who understand both the domain and the dynamics of AI-assisted practice, activities that use AI to amplify challenge rather than eliminate it, evaluative frameworks that assess development alongside productivity, and cultural norms that value the deliberate seeking of difficulty as a professional practice rather than an inefficiency to be optimized away.

Without these structures, the default will prevail, and the default produces naive practice at scale: practitioners spending hours in AI-assisted work that maintains their current level of performance without producing the development that genuine expertise requires. The hours accumulate. The expertise does not. The gap between the two widens invisibly, discovered only when circumstances reveal it — and circumstances, in the domains where expertise matters most, tend to reveal it at the worst possible moment.

---

Chapter 7: What Ascending Friction Requires

The concept of ascending friction — the observation that technological abstractions remove difficulty at one level and relocate it upward to a higher cognitive floor — is compatible with Ericsson's expertise framework, but only under specific conditions. This compatibility is not automatic. It is contingent on features of the new level of difficulty that determine whether ascending friction produces genuine expertise at the higher level or merely the appearance of expertise — practitioners who operate at the higher level without possessing the deep, flexible, struggle-built mental representations that genuine expertise at any level requires.

The historical record provides cases where ascending friction clearly produced genuine expertise at the new level. The most instructive is laparoscopic surgery. When surgeons lost the tactile friction of open surgery — the direct contact between the surgeon's hands and the patient's tissue — they gained a new and formidable set of difficulties: interpreting a two-dimensional image of a three-dimensional space, coordinating instruments without direct tactile feedback, maintaining spatial orientation through a camera whose viewing angle is constrained and whose depth cues are limited. These new difficulties were intrinsically challenging. They demanded sustained effort at the boundary of capability. They provided immediate, specific feedback — the visual display showed exactly what the instruments were doing. They allowed for repetitive refinement through simulation and supervised practice. And they required the construction of new cognitive structures: new mental representations of spatial relationships, new motor programs for instrument manipulation, new perceptual skills for interpreting the laparoscopic display.

The surgeons who mastered this technique developed genuine expertise at the new level. Their expertise was differently structured from that of open surgeons — encoding different patterns, supporting different capabilities — but it was genuine in the sense that matters: it was deep, flexible, built through struggle, and transferable to novel situations within the domain.

The critical features of this case define the conditions under which ascending friction produces genuine expertise. The new level of difficulty was intrinsically challenging — the challenge inhered in the activity rather than being artificially imposed. The feedback was integrated into the activity itself, continuous and specific. The variation was built in — each patient presented different anatomy, different pathology, different complications. And the progression was structured — surgeons moved through simulation, supervised cadaver work, supervised live cases, and eventually independent practice with peer review.

These four features — intrinsic challenge, integrated feedback, built-in variation, and structured progression — are precisely the conditions that Ericsson's research identifies as necessary for deliberate practice. When ascending friction provides them, it produces genuine expertise. When it does not, it produces practitioners who operate at the higher level without the cognitive architecture that responsible operation at that level demands.

The question for the AI transition is whether the friction at the judgment level — the friction of deciding what to build, for whom, and why — satisfies these conditions. The answer is mixed in ways that matter.

The judgment-level friction is intrinsically challenging. This condition is satisfied. Deciding what software should exist, what problems are worth solving, what architectural approaches will scale — these are genuinely difficult cognitive tasks demanding deep understanding across multiple domains. The challenge is real.

But the feedback at the judgment level is problematic. When a surgeon makes an error, the consequence is visible within minutes. When a product leader makes a judgment error — choosing the wrong feature, the wrong market, the wrong architecture — the consequence may not be visible for months or years. The feedback loop is long, noisy, and confounded by factors beyond the decision-maker's control. Markets shift. Competitors act. Users behave unpredictably. The connection between a specific judgment and its outcome is diluted by so many intervening variables that the feedback is often inadequate to guide the kind of precise, iterative refinement that deliberate practice requires.

The variation at the judgment level is adequate in some respects and deficient in others. Product leaders encounter varied challenges — different markets, technologies, organizational contexts. But the variation is slow. The developer debugging code completes hundreds of attempt-feedback-revision cycles in a month. The product leader making strategic decisions may complete only a handful of comparable cycles in a year. The rate of variation constrains the rate at which judgment-level mental representations can be constructed.

The structured progression at the judgment level is, in most organizations, essentially nonexistent. There is no established curriculum for developing product judgment comparable to the surgical training progression. Leaders are typically promoted into judgment roles on the basis of technical performance, given minimal structured guidance in the cognitive skills judgment demands, and evaluated primarily on outcomes only loosely coupled to the quality of their judgment.

This analysis suggests that ascending friction can work but does not work automatically. The laparoscopic case shows that ascending friction produces genuine expertise when the new level satisfies the conditions for deliberate practice. The judgment-level friction of the AI transition is genuinely challenging but lacks several structural features that effective deliberate practice requires: fast feedback, high variation, and structured progression. These features can be designed into the judgment-level practice environment. They will not emerge spontaneously.

The expertise research literature provides some guidance on what designed judgment-level practice might look like. Deliberate practice in domains with long feedback loops — strategic decision-making, investment, long-range planning — has been studied, though less extensively than practice in domains with short feedback loops. The findings suggest that the developmental benefits of practice in long-feedback-loop domains can be enhanced through specific interventions.

The first is decision journaling: the systematic recording of decisions, the reasoning behind them, and the predicted outcomes, followed by structured comparison between predictions and actual outcomes when the outcomes become available. This practice shortens the subjective feedback loop by making the decision and its reasoning available for review at the moment the outcome arrives, preserving the connection between the cognitive state that produced the decision and the feedback on its quality.
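The practice needs no special technology; the discipline lives in the schema. A minimal sketch in Python, with field names that are illustrative rather than standard:

```python
# A minimal decision-journal schema, sketched in Python. The field names are
# illustrative, not a standard. The essential move is recording the reasoning
# and the prediction before the outcome exists, so the comparison is honest.

from dataclasses import dataclass
from datetime import date


@dataclass
class DecisionEntry:
    decided_on: date
    decision: str            # what was chosen
    reasoning: str           # why, written at decision time
    prediction: str          # expected outcome, with a horizon
    review_on: date          # when to compare prediction against reality
    outcome: str | None = None
    lessons: str | None = None

    def review(self, observed: str, lessons: str) -> None:
        """Close the loop: attach the outcome next to the original reasoning."""
        self.outcome = observed
        self.lessons = lessons


entry = DecisionEntry(
    decided_on=date(2025, 3, 1),
    decision="Ship the API-first version before the UI",
    reasoning="Two design partners asked for integration; none asked for screens",
    prediction="At least one partner integrates within 60 days",
    review_on=date(2025, 5, 1),
)
entry.review("One partner integrated in 41 days", "The signal was real; weight partner pull")
```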

The second is scenario simulation: the construction of realistic decision scenarios that compress the feedback loop by providing accelerated outcomes. War games, business simulations, and tabletop exercises serve this function in military and business contexts. In the AI context, this could involve using AI itself to generate realistic product scenarios, market responses, and failure modes that test the practitioner's judgment under conditions more varied and more rapidly iterated than actual practice provides.
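Wired together, such a drill might look like the following sketch, where `ask_model` again stands in for any language-model call and the prompts are illustrative:

```python
# A sketch of an AI-driven judgment drill: the model compresses the feedback
# loop by inventing the scenario and then simulating a plausible outcome.
# `ask_model` is a stand-in for any language-model call; nothing here is a
# real API, and the prompts are illustrative.

def ask_model(prompt: str) -> str:
    return f"[model response to: {prompt[:48]}...]"


def judgment_drill(domain: str, rounds: int = 3) -> None:
    for i in range(rounds):
        scenario = ask_model(
            f"Invent a realistic {domain} decision scenario with ambiguous "
            "trade-offs. Describe the situation only; do not suggest an answer."
        )
        print(f"\nScenario {i + 1}: {scenario}")
        decision = input("Your decision and reasoning: ")
        # The simulated outcome arrives in minutes instead of months.
        outcome = ask_model(
            "Given this scenario and decision, narrate a plausible outcome, "
            f"including second-order effects.\nScenario: {scenario}\n"
            f"Decision: {decision}"
        )
        print(f"Simulated outcome: {outcome}")


# judgment_drill("product strategy")
```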

The third is structured peer review: regular, detailed examination of judgment calls by practitioners at comparable or higher levels of expertise. This provides the external perspective that Ericsson identified as essential for deliberate practice — the capacity to detect weaknesses the practitioner cannot see, to challenge assumptions the practitioner has not questioned, to offer alternative framings that expand the practitioner's representational repertoire.

None of these interventions is new. They are established practices in fields that have grappled with the problem of developing expertise in long-feedback-loop domains for decades. What is new is their urgency. When the lower levels of the stack were handled by human practitioners, the feedback at those levels was fast, specific, and developmental. The practitioner built expertise at the implementation level as a natural consequence of doing implementation work. With AI handling implementation, the natural accumulation of expertise at that level stops. The judgment level becomes the primary site of human expertise development, and that level does not provide the conditions for deliberate practice automatically. They must be built.

The organizations and educational institutions that build these conditions will produce practitioners whose judgment-level expertise is genuine — built through the kind of effortful, feedback-rich, varied engagement that Ericsson's research identifies as the mechanism of expert development. The organizations that do not build these conditions will produce practitioners who occupy judgment-level roles without having undergone the developmental process that those roles demand. The ascending friction will have relocated the work without relocating the conditions for growth. The practitioners will have ascended. Their expertise will not have followed.

---

Chapter 8: The Floor Rises, the Ceiling Remains

The democratization of capability that AI enables — the lowering of the floor of who can produce competent work in a domain — creates a specific challenge for expert performance that Ericsson's framework can illuminate with precision. When the floor rises, the distinction between competent performance and expert performance becomes harder to see. The AI-assisted novice produces output that is, in many visible dimensions, indistinguishable from the expert's output. The code compiles. The brief is well-structured. The analysis is coherent. The surface features that used to differentiate expert from novice output have been equalized by the tool.

But the expert possesses something the novice does not: mental representations that enable critical evaluation, adaptation, and judgment in situations that fall outside the tool's competence. These representations are invisible in the output. They manifest only under specific circumstances — when the output contains a subtle error that the novice cannot detect and the expert can, when the situation changes in ways the tool did not anticipate, when the problem requires the deep, flexible, transferable understanding that only deliberate practice builds.

These circumstances may be relatively infrequent. In a world where AI handles the majority of implementation tasks with adequate quality, the situations requiring expert-level mental representations may constitute a small fraction of the total workload. This arithmetic creates a temptation — for organizations, for markets, for the practitioners themselves — to conclude that expert-level mental representations are no longer worth the investment required to build them. If the expert's advantage manifests only rarely, and the tool handles most situations adequately, then the expected value of expert development may appear to fall below the expected value of tool proficiency.

This reasoning is seductive and dangerous. It is seductive because it is mathematically coherent within a narrow frame. It is dangerous because it ignores the nature of the situations in which expertise matters. These situations are not merely infrequent. They are high-stakes. They are the situations where the difference between expert judgment and non-expert judgment determines whether the system works or fails, whether the diagnosis is correct or catastrophic, whether the architecture scales or collapses. The expected value of expertise is not the frequency of its deployment multiplied by its average benefit. It is a sum over situations, each weighted by the magnitude of expertise's benefit in that situation, and in the rare situations where expertise is decisive the magnitude is often enormous.
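Stated as an equation rather than a slogan (an illustration of the paragraph above, not a formula from the expertise literature), with $p_i$ the probability of situation $i$ and $m_i$ the marginal benefit of expert over non-expert judgment in it:

```latex
% Illustrative decomposition, not a formula from the expertise literature.
\[
  \mathbb{E}[V] \;=\; \sum_i p_i\, m_i
  \;=\; \underbrace{p_{\text{routine}}\, m_{\text{routine}}}_{\text{tool suffices: } m \text{ small}}
  \;+\; \underbrace{p_{\text{critical}}\, m_{\text{critical}}}_{p \text{ small, } m \text{ enormous}}
\]
```

The first term is small because the tool suffices there; the second can dominate the sum even when its probability is tiny.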

The medical domain provides the starkest illustration because the stakes are most legible. AI diagnostic systems now match or exceed the diagnostic accuracy of experienced physicians across a growing range of conditions. The radiology AI that detects cancers. The dermatology AI that classifies skin lesions. The clinical decision-support system that suggests diagnoses based on patient data. In each case, the floor has risen: the AI-assisted junior physician produces diagnostic performance comparable to the experienced specialist across the majority of routine presentations.

But the cases that are not routine — the atypical presentation of a common disease, the rare disease mimicking a common one, the patient whose symptom constellation matches no pattern in the training data, the complication requiring the surgeon to abandon the planned approach and improvise — these are the cases where patients live or die based on the depth of the practitioner's mental representations. The Hosanagar data on endoscopists is again instructive: when the AI was available, the AI-assisted physicians detected polyps at comparable rates to unassisted experts. When the AI was removed, the practitioners who had relied on it most heavily showed detection rates that had dropped six percentage points. In a screening population of millions, six percentage points translates to thousands of missed diagnoses. The floor had risen. The practitioners standing on it had not developed the capacity to stand without it.
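The magnitude is worth making explicit. Under illustrative assumptions, one million screenings and the six-point drop confined to the tenth of them performed by the most AI-reliant practitioners, the arithmetic runs:

```latex
% Illustrative magnitudes only; these are not figures from the study itself.
\[
  0.06 \times 1{,}000{,}000 = 60{,}000 \quad \text{missed detections if every screening were affected;}
\]
\[
  0.06 \times 100{,}000 = 6{,}000 \quad \text{if only a tenth were: still thousands.}
\]
```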

In software engineering, the rising floor is the most visible because AI tools have been most widely adopted. A developer in Lagos with Claude can build a working application in a weekend. An intern with AI assistance can produce code that compiles, runs, and passes tests. The visible distinction between junior and senior output has narrowed dramatically. But the systems that matter — systems handling millions of users, processing financial transactions, controlling medical devices, managing critical infrastructure — are systems where the floor is irrelevant and the ceiling is everything. The senior engineer's capacity to design systems that scale, to anticipate failure modes that testing cannot reveal, to make architectural decisions that determine whether the system survives its success or collapses under it — these capacities are built through thousands of encounters with systems that failed. Through the specific friction of debugging production incidents under pressure. Through the accumulated understanding of how distributed systems behave under load, how databases handle concurrent writes, how network failures cascade through microservice architectures.

This understanding cannot be shortcut. It cannot be produced by a tool. It can only be produced by a practitioner who has built the mental representations through deliberate engagement with the specific challenges that large-scale systems present.

The pattern across domains is consistent. The floor rises. The routine becomes accessible to AI-assisted novices. The distinction between competent and expert performance becomes harder to see. But the consequences of lacking expertise do not diminish. They concentrate. They become rarer but more consequential, less visible but more catastrophic. And the maintenance of expertise becomes, paradoxically, both more important and harder to justify — because the investment required is expensive, uncomfortable, and invisible in its returns until the moment when its value becomes incalculable.

The paradox has organizational consequences that the expertise framework can specify. The first is that organizations must separate the assessment of output quality from the assessment of practitioner capability. These were once reasonable proxies for each other. They no longer are. A practitioner can produce excellent output through AI assistance without possessing the understanding that would enable her to evaluate, adapt, or troubleshoot that output independently. New assessment methods are needed — methods that evaluate understanding rather than production, that probe the practitioner's cognitive architecture directly rather than inferring it from the products of tool-assisted work. Some organizations are beginning to experiment with such methods: code reviews conducted without AI assistance, diagnostic exercises that remove decision-support tools, design challenges that require practitioners to work from first principles rather than from AI-generated starting points. These exercises are expensive in terms of immediate productivity. They are essential in terms of long-term capability assurance.

The second consequence is that organizations must invest explicitly in the maintenance of expert-level capability even as the visible need for that capability appears to diminish. This means structured practice time where practitioners engage with their domain without AI assistance. It means challenging assignments deliberately designed to push practitioners beyond their tool-assisted comfort zone. It means mentorship structures pairing experienced practitioners with developing ones, providing the external perspective that deliberate practice requires. It means rewarding expertise through compensation, authority, and the allocation of the most consequential work — signals that expertise is valued even when the quarterly metrics do not distinguish it from tool-dependent competence.

The MIT Sloan Management Review argued in 2025 that organizations must develop what the authors called "meta-expertise" — the capacity to ask better questions and recognize gray areas, shifting the expert's value from content to context. This is compatible with Ericsson's framework, but the framework adds a critical specification: meta-expertise, like any other form of expertise, can only be developed through deliberate practice. It cannot be decreed. It cannot be acquired through a training seminar. It must be built through the specific, effortful, feedback-rich engagement with judgment-level problems that develops the mental representations meta-expertise requires. The organizations that invest in this development will produce practitioners whose judgment survives the tool's failure. The organizations that do not will discover the deficit at the moment when the tool fails and the judgment is needed and the practitioner, standing on the risen floor, discovers there is nothing beneath her but the tool she no longer has.

The floor has risen. This is real, and it is in many ways a genuine good — more people can produce more things of adequate quality than at any previous point in human history. But the floor is not the ceiling. The ceiling is where expert judgment lives, where the deep mental representations built through years of deliberate practice enable the performance that matters most under the conditions that matter most. Raising the floor does not raise the ceiling. It merely makes the distance between them harder to see. And the organizations and societies that mistake the rising floor for the elimination of the need for height will find themselves dangerously exposed when the situations that require height — that require genuine, struggle-built, deeply represented expertise — inevitably arrive.

---

Chapter 9: The Teacher, the Tool, and the Design of Difficulty

Deliberate practice, in its most effective form, requires a knowledgeable teacher or coach. This requirement is not incidental to the framework. It is structural. The teacher provides something the practitioner cannot provide for herself: an external perspective on the gap between current performance and desired performance, informed by deep understanding of both the domain and the developmental process. The teacher sees what the practitioner cannot see, diagnoses what the practitioner cannot diagnose, and designs practice activities that target weaknesses the practitioner does not know she has. Without this external perspective, practice defaults to the self-directed mode — purposeful at best, naive at worst — and the developmental trajectory is limited by the practitioner's own understanding of what needs to improve.

The role of the teacher in Ericsson's framework is often misunderstood as primarily instructional: the teacher tells the practitioner what to do, and the practitioner does it. This mischaracterizes the function. The teacher's primary role is not instruction but design. The teacher designs practice activities that create conditions for the specific kind of learning the practitioner needs. The vocal coach who hears a singer straining on high notes does not simply say "relax your throat." She designs an exercise that makes relaxation necessary — a phrase that cannot be sung with tension, a passage that rewards openness and punishes rigidity. The design of the exercise does the teaching. The teacher's expertise lies not in knowing the right answer but in designing the right challenge.

This distinction between instruction and design is crucial for understanding what AI can and cannot do in the development of expertise. AI can instruct. It can explain concepts, demonstrate techniques, correct errors, and provide domain information with speed and breadth that no human teacher can match. What AI cannot do, in its current form, is design challenges that target the specific developmental needs of the individual practitioner with the precision that effective deliberate practice requires.

The limitation is not computational. AI systems are capable of generating exercises, problems, and challenges across a wide range of domains. The limitation is evaluative. Designing effective practice requires understanding what the practitioner needs to learn, which requires understanding the gap between current and desired performance with a specificity that goes beyond what current AI systems can reliably assess. The teacher who watches a violinist play a passage and detects a subtle inconsistency in bow pressure causing a tonal unevenness the violinist herself cannot hear — that teacher is perceiving a gap at a level of detail that current AI systems, operating on the output rather than on the process that produced it, cannot match. The teacher is not merely evaluating the sound. She is evaluating the process that produces the sound, inferring from subtle performance cues what the underlying cognitive and physical mechanisms are doing and what they need to do differently.

The differences between what a teacher does and what Claude does, stated with the specificity the framework demands, are fourfold.

First, a teacher designs activities that push the learner beyond current capability. Claude responds to the learner's requests. This difference is fundamental. The teacher takes developmental initiative. She decides what the student needs to work on, designs activities that address those needs, and pushes the student into territory the student would not have entered voluntarily. Claude waits for the user to describe what she wants and provides it. The developmental initiative is entirely with the user, and the research consistently shows that users, left to their own devices, tend to avoid the specific kinds of difficulty that produce the most development.

Second, a teacher identifies weaknesses the learner cannot see. This capacity requires not only domain expertise but a specific evaluative expertise: the ability to observe performance, detect deficiencies that limit it, and trace those deficiencies to their cognitive or physical roots. Claude amplifies whatever direction the learner provides. If the learner asks for help with a problem she has identified, Claude provides excellent assistance. But the most consequential developmental needs are the ones the learner has not identified, because identifying them requires the very expertise the practice is supposed to develop. The teacher bridges this gap. Claude does not.

Third, a teacher makes practice harder when the learner is coasting. Claude makes everything easier. This is a design principle, not a flaw. AI tools are designed to reduce friction, eliminate difficulty, produce the best possible output with the least possible effort from the user. A teacher who operated on the same design principle would be a poor teacher. A good teacher introduces difficulty strategically, calibrating challenge to developmental needs, pushing when the learner is comfortable, supporting when the learner is overwhelmed, maintaining engagement at the boundary of capability where the conditions for development are optimal.

Fourth — and this is perhaps the most consequential difference — a teacher maintains a model of the learner that is independent of the learner's self-model. The learner may believe she understands a concept when she actually holds a superficial or distorted version of it. She may believe she has mastered a technique when she has developed a compensatory habit masking an underlying weakness. The teacher's model includes these discrepancies — the gaps between what the learner thinks she knows and what she actually knows. This independent model allows the teacher to design activities targeting needs the learner does not recognize, to provide feedback challenging the learner's self-assessment, and to maintain the developmental trajectory even when the learner's own assessment suggests no further development is needed.

Claude does not maintain an independent model of the user's understanding. Claude models the user's requests, not the user's cognition. If the user's self-assessment is accurate, Claude's responsiveness will produce productive results. If the self-assessment is inaccurate — if the user believes she understands something she does not, or believes she needs help in area A when her actual developmental need is in area B — Claude will respond to the inaccurate assessment with the same helpfulness it would bring to an accurate one. The system has no mechanism for detecting the discrepancy, because detecting it requires the kind of independent evaluative perspective that only a knowledgeable teacher can provide.

The history of expertise development across domains confirms the teacher's indispensable role. The finest musicians did not achieve their level through self-directed practice alone, however talented or motivated. They achieved it through years of study with master teachers who could hear what the students could not, who designed practice activities targeting weaknesses the students' own practice had failed to address, and who maintained a developmental trajectory the students could not have designed or imagined. The same pattern holds in athletics, in chess, in surgery, in every domain where deliberate practice has been studied.

Could AI evolve toward the coaching function? The question is worth taking seriously rather than dismissing. There are specific design changes that would move AI systems closer to the teaching role Ericsson's framework describes. The capacity to assess the user's current understanding with greater precision. The capacity to withhold solutions and provide hints that force the user to construct understanding. The capacity to introduce calibrated difficulty rather than eliminating difficulty uniformly. The capacity to detect the discrepancy between the user's perceived competence and actual competence and to design interactions that address the actual competence rather than the perceived one.

These changes are technically conceivable. Some are being explored in educational AI systems designed explicitly around learning science principles. But they require a fundamental reorientation of design logic: from optimizing for output — the best possible result with the least possible effort — to optimizing for development — the most possible cognitive growth through appropriately calibrated difficulty. This reorientation means the system must sometimes produce deliberately imperfect output, output containing strategic gaps or errors the user must detect and correct. It means the system must sometimes resist the user's requests for immediate solutions when the developmental benefit of struggling with the problem outweighs the productivity cost of delay. It means the system must act less like a helpful assistant and more like a demanding coach: supportive but challenging, responsive but not compliant, oriented toward the user's long-term development rather than immediate satisfaction.
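One of these capacities, calibrated difficulty, is concrete enough to sketch. The thresholds and the zero-to-ten scale below are illustrative assumptions, not parameters from any real system; the point is the shape of the policy, which pushes when the learner coasts and supports when she drowns:

```python
# A sketch of the "demanding coach" reorientation: calibrate difficulty to
# hold the learner near the boundary of capability instead of minimizing
# effort. The thresholds and the 0-to-10 scale are illustrative assumptions.

def next_difficulty(current: int, recent_successes: list[bool]) -> int:
    """Raise difficulty when the learner is coasting, lower it when drowning.

    Deliberate practice wants a moderate failure rate: all successes means
    the challenge is too easy; mostly failures means it is miscalibrated.
    """
    if not recent_successes:
        return current
    rate = sum(recent_successes) / len(recent_successes)
    if rate > 0.8:        # coasting: the system should push, not help
        return min(current + 1, 10)
    if rate < 0.4:        # overwhelmed: support, scaffold, step back
        return max(current - 1, 0)
    return current        # near the boundary: exactly where growth happens


print(next_difficulty(5, [True, True, True, True, True]))   # -> 6
print(next_difficulty(5, [False, False, True, False]))      # -> 4
```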

Whether such systems will be built at scale depends on incentives. Currently, the dominant incentive is user satisfaction, measured by output quality and delivery speed. These metrics favor helpfulness over development, ease over struggle, immediate results over long-term growth. Reorienting toward development requires metrics capturing long-term changes in user capability — metrics that are harder to measure, slower to materialize, and less directly connected to commercial success.

A 2025 paper in the journal BRAIN proposed a framework integrating AI with deliberate practice in psychodynamic psychotherapy, arguing that AI could "operationalize deliberate practice" by providing "adaptive feedback" and "reflective supervision." A radiology education study published the same year examined how AI might structure deliberate practice for trainee radiologists, using AI-generated case variation to provide the systematic exposure that builds diagnostic mental representations. These early efforts suggest that the developmental design logic is not merely theoretical. It can be implemented. But it requires designers who understand the difference between making the user productive and making the user expert — a difference that Ericsson's framework specifies with precision and that the dominant design paradigm has not yet absorbed.

Until that absorption occurs, the practical reality remains: the machine is a tool, not a teacher. The tool extends productive capacity. The teacher extends developmental trajectory. The most effective use of both involves understanding the difference and assigning each to the function it serves best. The machine for production. The teacher for development. And the practitioner for the judgment that distinguishes between the two — a judgment that is itself a form of expertise, built through the same deliberate practice that builds every other form.

---

Chapter 10: The Future of Mastery

The skills that constituted mastery in the pre-AI era — syntax, frameworks, implementation techniques, the specific domain knowledge that was expensive to acquire and therefore valuable to possess — have been commoditized. AI handles syntax. AI knows frameworks. AI implements techniques. AI possesses domain knowledge of a breadth and depth that no individual practitioner can match. The practitioner who defined her expertise by these skills finds that her expertise, while still real, is no longer scarce. It is available to anyone with a subscription and the ability to describe what she wants in natural language.

The skills that constitute mastery in the AI era are different. They include judgment — the capacity to evaluate AI output critically, to detect the subtle errors that fluent prose and correct syntax conceal. They include taste — the capacity to determine what is worth building, what problems are worth solving, what questions are worth asking, when the cost of building and solving and answering has approached zero. They include the capacity to ask questions that the machine cannot originate — questions that arise from having stakes in the world, from understanding contexts the machine does not inhabit, from knowing what matters and why.

These skills are scarce. They are valuable. And — this is the point the entire book has been building toward — they require deliberate practice to develop. The path to their development follows the same principles Ericsson's research established across every domain: effortful engagement at the boundary of current capability, specific feedback that guides improvement, repetitive refinement through the iterative cycle of attempt, correction, and adjusted attempt, and the progressive construction of mental representations encoding the deep principles of the domain with sufficient abstraction to transfer across novel situations.

The deliberate practice framework does not predict the extinction of expertise. It predicts its relocation. And the prediction is conditional: expertise will successfully relocate to the judgment level only if the conditions for deliberate practice are maintained at that level. Those conditions do not arise spontaneously from the work itself — not with the reliability they did when implementation was handled by humans. They must be designed, built, and maintained by individuals, organizations, and educational institutions that understand what the conditions are and why they matter.

What does this mean concretely? The research suggests several principles, stated here not as validated interventions but as hypotheses derived from the framework — hypotheses that practitioners and organizations can test against their own experience.

The first principle is that AI should be used to amplify challenge, not eliminate it. Instead of asking the AI for solutions, a practitioner developing her expertise asks the AI for harder problems. Instead of describing what she wants and receiving it, she describes her current understanding and asks the AI to identify where it breaks. Instead of using the AI to handle the difficult parts, she uses the AI to make the difficult parts more varied, more complex, more demanding of the specific cognitive structures she is trying to build. This reversal requires discipline. The solution is always available. The temptation to request it is constant. But the developmental benefit of struggling with the problem — of occupying the gap between not-knowing and knowing — is precisely what the shortcut eliminates.
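At the level of a single session, the reversal might look like the following prompt patterns. The wording is illustrative, not canonical, and the rate-limiter example is invented:

```python
# Illustrative prompt patterns for using AI to amplify challenge rather than
# remove it. The wording is an example, not a prescription, and the
# rate-limiter scenario is invented for the sketch.

SOLUTION_SEEKING = "Write me a rate limiter for this API."   # the default mode

CHALLENGE_SEEKING = [
    # Ask for harder problems, not answers:
    "Give me three rate-limiting scenarios that my current design would "
    "handle badly. Describe the scenarios; do not fix the design.",
    # Expose the boundaries of current understanding:
    "Here is my explanation of how token-bucket rate limiting works: ... "
    "Identify where my explanation breaks down, using only questions.",
    # Make the difficult part more varied, not absent:
    "Generate five variants of this problem under different failure modes "
    "(clock skew, burst traffic, distributed counters). I will solve each.",
]

print(f"Default: {SOLUTION_SEEKING}\n")
for prompt in CHALLENGE_SEEKING:
    print(f"- {prompt}\n")
```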

The second principle is that practitioners should maintain regular engagement with their domain without AI assistance. Not as a nostalgic exercise. As a diagnostic one. Working without the tool reveals the contours of one's own understanding in ways that working with the tool conceals. The developer who writes code without Claude discovers what she actually knows versus what the tool knows. The gaps the exercise reveals are the gaps that deliberate practice should target. The exercise is uncomfortable. That is the point. The discomfort is the signal that the practice is operating at the boundary of capability — the boundary where development occurs.

The third principle is that the relationship between AI output and the practitioner's understanding should be actively interrogated rather than passively accepted. When Claude produces a solution, the question is not only whether the solution works but whether the practitioner understands why it works — and whether the practitioner could detect if it did not work in a subtle way. This interrogation builds the evaluative mental representations that are the primary form of expertise in the AI era: the capacity to judge output rather than merely consume it.

The fourth principle concerns the organizational and institutional level: the structures that support deliberate practice must be built explicitly rather than assumed to emerge from productive work. Mentorship pairing experienced practitioners with developing ones. Assessment methods that evaluate understanding rather than output. Practice time protected from productivity expectations. Cultural norms that recognize the deliberate seeking of difficulty as a professional virtue rather than an inefficiency. These structures existed informally in many professions before AI — the senior engineer who insisted the junior debug the code herself, the attending physician who let the resident struggle with the diagnosis before intervening, the master craftsman who gave the apprentice the difficult piece rather than the easy one. AI has made these structures optional by making the struggle optional. The structures must now be maintained by choice rather than by necessity.

The expertise framework predicts — and the emerging empirical evidence supports — that the organizations and societies that maintain these structures will produce practitioners whose mastery is genuine: built through struggle, encoded in deep mental representations, transferable to novel situations, and capable of the critical evaluation that AI-assisted production requires but cannot itself provide. The organizations and societies that allow the structures to erode, mistaking productive output for developmental progress, will produce practitioners who are productive within the current tool environment and vulnerable outside it.

Ericsson's research program was built on a finding that seemed modest when it was first established and that has grown in consequence with each decade since: that expertise is not a gift but a construction. That the cognitive architectures enabling expert performance are built, layer by layer, through specific conditions of effort, feedback, and challenge. That no shortcut — no amount of observation, instruction, or tool-assisted production — can substitute for the struggle that builds them.

AI has not changed this finding. AI has made the struggle optional, which is an entirely different thing. In every previous era, the struggle was imposed by the demands of production itself. The practitioner who wanted to produce had no choice but to develop, because production required the very engagement that development demanded. AI has decoupled production from development. The developer can produce without struggling. The lawyer can produce without understanding. The student can produce without thinking. The output is available without the growth.

This decoupling is unprecedented. And it means that the choice to pursue genuine expertise — to seek the difficulty, to maintain the conditions, to build the representations layer by effortful layer — is now exactly that: a choice. Not a requirement imposed by the limitations of the tools. Not an unavoidable consequence of doing the work. A choice, made by individuals who understand what expertise requires and why it matters, supported by organizations that value depth alongside productivity, maintained by educational institutions that teach questioning alongside answering.

The evidence for what deliberate practice produces is extensive. The evidence for what happens when its conditions are removed is now accumulating. The choice between them is not a choice about technology. It is a choice about what kind of practitioners — what kind of minds — the next generation will develop. That choice is being made now, in the design of every AI tool, in the structure of every organization that deploys those tools, in the pedagogy of every classroom where students are learning to work with them, and in the daily practice of every individual who sits down with a machine that can do the difficult work for her and must decide, each time, whether to let it.

Ericsson spent four decades establishing that expertise is built, not born. The AI transition has added a corollary he did not live to articulate but that the logic of his framework generates with considerable force: expertise is also chosen. It is chosen each time a practitioner elects to struggle when ease is available, to build when extracting would suffice, to construct understanding when the output could be produced without it. The mechanism has not changed. The conditions have not changed. What has changed is that the conditions must now be sought rather than endured — and that the seeking requires a clarity about what expertise is, how it develops, and why it matters that the science of deliberate practice provides and that the age of artificial intelligence demands.

---

Epilogue

Anders Ericsson died three years before the world he spent his life studying was turned inside out. I think about the timing constantly.

He built a framework for understanding how human beings get good at things — genuinely, demonstrably, measurably good — and the framework rested on a single mechanism: deliberate practice. Effortful engagement at the boundary of capability, with feedback specific enough to guide adjustment, sustained over thousands of hours until the cognitive architecture of expertise was constructed layer by painstaking layer. The mechanism was universal. It held across chess and surgery and music and athletics and taxi driving and radiological diagnosis. It was among the most thoroughly validated findings in the psychology of skill acquisition.

And then, eighteen months after he was gone, a technology arrived that made the mechanism optional.

That word — optional — is the one I cannot stop circling. Not eliminated. Not invalidated. Optional. The struggle that builds expertise still works. It still produces the deep, flexible, transferable mental representations that enable a practitioner to evaluate what a machine produces, to detect when it fails subtly, to exercise judgment in the situations where judgment is the only thing that matters. Nothing about the science has changed. What has changed is that you no longer have to go through it. You can produce without developing. You can output without understanding. You can perform without learning. The decoupling is complete, and it happened so fast that most people have not yet registered what was lost in the separation.

I registered it in Trivandrum, watching an engineer realize that the confidence she had always brought to architectural decisions was eroding — not because Claude was making her worse, but because Claude was preventing her from doing the work that had made her good. Ten minutes of genuine learning buried inside four hours of tedium, and the tool had removed both without distinguishing between them. The tedium she was glad to lose. The ten minutes she did not know she had lost until the absence manifested as a hollowness in her judgment months later.

That hollowness is what Ericsson's framework explains. Not metaphorically. Precisely. The mental representations were not being built because the conditions for their construction had been removed. The layers were not being deposited because the friction that deposits them had been optimized away. The architecture was not growing because the struggle that grows it had been delegated to a machine that does not need to grow in order to perform.

What stays with me from this book is not the warning — though the warning is real and I feel it in my own practice every time I reach for Claude before I have thought the thought through. What stays with me is the conditional nature of the finding. Ascending friction can produce genuine expertise at the new level — but only if the new level satisfies the conditions for deliberate practice. AI can be used to develop judgment rather than merely to produce output — but only if the practitioner actively designs the interaction to preserve the difficulty that development requires. The future of mastery is not foreclosed. It is conditional. And the condition is whether we choose — individually, organizationally, institutionally — to maintain the structures that make the struggle possible in a world that has made it unnecessary.

I wrote in The Orange Pill that we are beavers building dams in a river of intelligence. Ericsson's work tells me what the dams are made of. Not policies. Not principles. Practice. The specific, uncomfortable, effortful practice of engaging with problems at the boundary of capability when a tool that could handle them for you is one sentence away. The discipline of occupying the gap between not-knowing and knowing when the gap could be closed instantly. The choice to build understanding when output alone would satisfy every metric anyone is measuring.

That choice is the dam. It is made of the same material it has always been made of — effort, attention, the willingness to be bad at something long enough to become genuinely good at it. The river has gotten faster. The material has not changed.

-- Edo Segal

---

Back Cover

The machines are faster. The machines are tireless. The machines produce expert-level output without expert-level understanding. Anders Ericsson spent forty years proving that expertise is constructed through struggle — effortful, targeted, feedback-rich struggle at the boundary of what you can almost do. Now AI has made that struggle optional. What happens to mastery when the mechanism that builds it disappears? This book follows Ericsson's deliberate practice framework into the heart of the AI revolution. It examines what the science of expertise predicts — with uncomfortable precision — about a world where production and development have been decoupled, where output quality no longer reflects practitioner capability, and where the conditions for building genuine mastery must be chosen rather than endured. The floor of capability has risen. The question is whether anyone is still building toward the ceiling.

