By Edo Segal
The game my son couldn't beat taught him more than I did that summer.
I watched him die forty times on the same level. Same jump, same enemy, same punishment for getting the timing wrong. He didn't ask for help. He didn't look up a walkthrough. He sat there, absorbed, failing his way toward something I recognized only later as mastery. By the time he cleared it, he knew that level in his body — the weight of the jump, the rhythm of the enemy, the exact window where the impossible became possible.
I thought about that Saturday morning constantly while writing *The Orange Pill*. Because everything I was celebrating about Claude Code — the speed, the compression of effort, the collapse of the distance between imagination and artifact — was the opposite of what my son was doing on that screen. He was swimming in friction. The friction was the teacher. Every death deposited a layer of understanding that no shortcut could have built.
James Paul Gee spent decades studying exactly this phenomenon. Not in engineering departments or corporate training programs. In video games. He watched players develop sophisticated problem-solving abilities, deep pattern recognition, and genuine expertise — all through environments designed to keep them in what he called the "regime of competence," the narrow band where challenge is high enough to demand real effort but not so high that you quit.
The regime of competence is the most important concept missing from the AI discourse. We talk endlessly about productivity gains and capability expansion. We talk about the Death Cross and the democratization of building. We do not talk nearly enough about what happens to the human on the other side of the amplification — whether the conditions that develop judgment, intuition, and deep understanding are being preserved or quietly dismantled.
Gee's framework gives us the diagnostic instruments. His concepts of situated meaning, productive failure, affinity spaces, and identity formation through practice are not abstractions from learning science. They are precise descriptions of how human beings actually become good at things. And they reveal, with uncomfortable clarity, that the friction AI removes is often the same friction that builds the practitioners we will need most.
This is not an argument against AI. I use it every day. I built with it. I wrote with it. But Gee's lens forced me to ask a question I had been avoiding: Am I preserving the conditions for depth, or am I optimizing them away?
The answer matters more than the output.
-- Edo Segal ^ Opus 4.6
James Paul Gee (born 1948) is an American linguist, learning scientist, and literacy theorist whose work has fundamentally reshaped how educators and researchers understand the relationship between language, identity, and learning. Born in San Jose, California, Gee earned his PhD in linguistics from Stanford University and has held faculty positions at the University of Southern California, Clark University, the University of Wisconsin–Madison, and Arizona State University, where he is the Mary Lou Fulton Presidential Professor of Literacy Studies. His landmark 2003 book *What Video Games Have to Teach Us About Learning and Literacy* argued that well-designed video games embody sophisticated learning principles — including progressive challenge calibration, identity investment, and situated meaning — that formal education has largely failed to implement. His other major works include *Social Linguistics and Literacies* (1990), *An Introduction to Discourse Analysis* (1999), and *The Anti-Education Era* (2013). Gee's concepts of "Discourse" (capital D) as an identity kit, "affinity spaces" as interest-driven learning communities, and the "regime of competence" as the zone where productive learning occurs have influenced fields ranging from game design to corporate training to AI-era pedagogy. His more recent work on "cybersapien literacy" addresses the emerging dynamics of human-AI collaboration and the risks of deskilling through uncritical tool adoption.
Every well-designed video game solves a problem that most schools have never figured out how to solve. The problem is calibration — how to keep a learner in the zone where challenge is high enough to demand genuine effort but not so high that the learner gives up. Game designers have a name for this zone, though they do not always articulate it as a principle. James Paul Gee, who spent years studying what makes games such effective learning environments, gave it a name that stuck: the regime of competence.
The regime of competence is the range of difficulty within which learning actually happens. Below the regime, the learner is coasting — repeating what they already know, reinforcing existing skills without extending them. Above the regime, the learner is drowning — confronted with challenges so far beyond their current abilities that they cannot extract useful information from the experience. The regime itself is the sweet spot, the narrow band where the challenge is just past the edge of what the learner can currently do, close enough to be reachable with effort, far enough to require genuine stretching.
Good game designers calibrate the regime with extraordinary precision. Level one teaches the player to jump. Level two introduces an enemy. Level three combines jumping and enemies in a sequence that requires timing. Each level is slightly harder than the last. Each one assumes the competence developed in the previous level and demands a little more. The player is never bored for long and never frustrated for long. The game keeps them in the regime — stretching, failing, adjusting, succeeding, and then being stretched again.
The reason this works so well, Gee argued across multiple books and dozens of papers, is that games treat learning not as information transfer but as identity formation. The player does not merely acquire facts about the game world. The player becomes a certain kind of agent within it — a puzzle-solver, an explorer, a strategist, a builder. The game provides what Gee called an "identity kit": a set of tools, practices, values, and ways of seeing that the player adopts as they progress. Mastery is not the accumulation of knowledge. Mastery is the development of a new way of being in the world, and the regime of competence is the environment within which that new way of being is forged.
The pre-AI software development environment maintained a regime of competence that was not designed by anyone but was remarkably effective nonetheless. The tools were powerful enough to enable ambitious projects — a developer in 2020 could build things that would have required a team of fifty in 1990. But the tools were also resistant enough to require sustained effort at nearly every step. The resistance came from multiple sources: the complexity of the languages themselves, the opacity of error messages, the difficulty of debugging, the cognitive overhead of translating a human idea into machine-readable instructions, the accumulated weight of dependencies and configurations and the thousand small decisions that any nontrivial software project requires.
This resistance was not pleasant. No developer enjoyed spending four hours tracking down a null pointer exception. No one found joy in dependency hell or in the cryptic error messages that told you something was wrong without telling you what. The resistance was, in the language of game design, the difficulty curve — and like a good difficulty curve, it was calibrated (not by intent but by the accumulated complexity of the domain) to keep the practitioner in the regime of competence. Each project was slightly harder than the last. Each error provided specific feedback about the gap between the developer's current understanding and the system's demands. Each debugging session deposited a thin layer of knowledge that would not have been deposited by success.
Edo Segal, in *The Orange Pill*, describes a woman on his engineering team who had spent eight years on backend systems and had never written a line of frontend code. Using Claude Code, she built a complete user-facing feature in two days — not a prototype but a deployable feature. The description is presented as evidence of AI's power to democratize capability, and it is. But from the perspective of learning science, the description also illustrates something else: the regime of competence that would have governed her acquisition of frontend skills — the months of struggle with unfamiliar frameworks, the failed layouts, the cryptic CSS errors, the slow accumulation of situated understanding about how visual interfaces work — was bypassed entirely. She arrived at the output without passing through the regime that would have made the output hers in the deepest sense.
The output belonged to the collaboration. The understanding belonged to the tool.
This distinction — between the output and the understanding that produced it — is the central concern of Gee's framework applied to the AI transition. Gee's entire career has been organized around the insight that learning is not about outputs. Learning is about the transformation of the learner. A student who produces a correct answer on a test may or may not have learned the underlying concept. A player who completes a game level may or may not have developed the competence the level was designed to teach. An engineer who ships a feature may or may not understand what the feature does and why it works. The output is evidence of something, but it is ambiguous evidence. It could indicate mastery. It could indicate that the regime of competence was operating effectively, stretching the learner just past their edge and rewarding the stretch with success. Or it could indicate that the regime was bypassed entirely, that the output was produced without the learning that the regime would have required.
AI makes this ambiguity pervasive. When the tool can produce competent output regardless of the practitioner's level of understanding, the output stops functioning as a reliable signal of competence. The code works. The feature ships. The brief is well-structured. But the practitioner who directed the tool to produce these outputs may or may not have developed the understanding that previous generations of practitioners developed through the friction of doing the work themselves.
Gee's concept of the "regime of competence" was developed in the context of video games, but its implications extend to any domain where learning occurs through practice. The regime is not a feature of games specifically. It is a feature of any environment that maintains the right calibration between challenge and capability. A surgical residency maintains a regime of competence: the resident performs increasingly complex procedures under supervision, with the difficulty calibrated to stretch their abilities without endangering patients. A jazz ensemble maintains a regime of competence: each performance pushes the musicians to respond to unexpected harmonic choices, to improvise within constraints, to develop the real-time judgment that only comes from performing at the edge of one's abilities.
In each case, the regime depends on friction — on the resistance that the environment offers to the practitioner's efforts. The resistance is what creates the stretch. Without resistance, there is no stretch. Without stretch, there is no learning. Without learning, there is no mastery. The logic is sequential, and each step depends on the one before it.
AI reduces friction. That is its value proposition, the reason adoption has been so rapid, the reason Segal's engineers transformed in a single week. The friction that separated human intention from material realization — the translation cost, the debugging, the dependency management, the mechanical labor of converting design into code — has been dramatically reduced. The reduction is real, measurable, and for many purposes genuinely liberating. Work that consumed eighty percent of a developer's time can now be handled by a tool, freeing the developer to focus on the twenty percent that requires human judgment.
But the friction that AI reduces is the same friction that maintained the regime of competence. The debugging sessions were tedious, but they were also the mechanism through which developers built situated understanding of their systems. The dependency management was painful, but it was also the process through which developers learned how components interact, where abstractions leak, what breaks when one piece of the system changes. The translation from human intention to machine instruction was effortful, but the effort was what kept the developer in the regime — stretching, failing, adjusting, and depositing layers of understanding with each cycle.
Remove the friction, and the regime of competence thins. The challenges are fewer. The failures are less frequent. The feedback is less specific. The stretch is less demanding. The practitioner still learns — directing AI effectively is itself a skill that develops through practice. But the regime within which this learning occurs is narrower than the regime that pre-AI practice maintained, and the question, which cannot be answered abstractly but must be studied in specific contexts with specific practitioners, is whether the narrower regime produces understanding sufficient for the demands it will face.
Segal describes the senior engineer who discovered that his "twenty percent" — the judgment, the taste, the architectural instinct — was "everything." The discovery is presented as a liberation: the engineer was freed from the eighty percent that was drudgery to focus on the twenty percent that was genuinely valuable. And for this specific engineer, with his decades of accumulated understanding, the liberation may be exactly what it appears. He has already passed through the regime of competence. The layers have been deposited. The situated knowledge is there, built through years of friction that cannot be retroactively removed.
But what about the engineer who begins their career inside the AI-augmented environment? What regime of competence is available to them? What friction will stretch their capabilities? What failures will deposit the layers of understanding that the senior engineer spent decades building? If the regime has thinned, the new practitioner arrives at the same tools with a different foundation — and the difference may not be visible in the output. The code will still work. The features will still ship. The thin description, to borrow a term from a different scholarly tradition, will be identical. The output will look the same.
The understanding beneath it will not be the same. And the gap between competent output and competent practitioners is the gap that Gee's framework was designed to identify and that the AI transition is producing at unprecedented scale.
The regime of competence is not a luxury. It is the environment within which human capability develops. Thin it, and the capability thins with it — not immediately, not visibly, but gradually, in the same way that soil erodes when the roots that hold it in place are removed. The erosion is invisible until the ground gives way.
---
The most counterintuitive finding in learning science is that failure is not the opposite of learning. Failure is the mechanism of learning. Every productive failure — every attempt that falls short, every hypothesis that is disconfirmed, every expectation that is violated by reality — provides information that success cannot provide. Success tells the practitioner their current model is adequate. Failure tells the practitioner their current model is inadequate, and the specific shape of the inadequacy points toward the revision that will make the model better.
James Paul Gee built his learning principles around this insight, drawing on decades of research in cognitive science, linguistics, and the study of complex skill acquisition. His analysis of video games was, at its core, an analysis of how well-designed failure environments produce deep learning. A good game does not punish failure. It makes failure informative, immediate, and low-cost. The player tries something. It does not work. The game provides feedback — the character falls, the puzzle resets, the enemy reappears. The player adjusts. Tries again. Fails differently. Adjusts again. Eventually succeeds, and the success is meaningful precisely because it was earned through a sequence of failures that taught the player something about the system they were navigating.
The pre-AI debugging process was, from this perspective, one of the most effective learning environments ever accidentally created. Segal describes it in *The Orange Pill* with the specificity of someone who has lived through it: the developer conceived a function, wrote it, watched it fail, received an error message that was specific and unhelpful and sometimes maddening, read the error, examined the code, hypothesized, tested, failed again, read documentation (often badly written), asked on Stack Overflow (and was answered dismissively), tried again, and eventually, hours or days later, succeeded.
In those hours or days, something had happened that was not visible in the final code. The developer had come to understand the function — not intellectually but, as Segal writes, "in her body." It is the kind of understanding that lives not in propositions but in the embodied sense of how a system behaves, the intuitions about what will break before it breaks, the feel for a codebase that a senior engineer possesses the way a doctor possesses a feel for a patient's pulse.
Gee's framework explains why this embodied understanding develops through failure and not through success. The explanation draws on his concept of "situated meaning" — the principle that the meaning of any concept, skill, or piece of knowledge is embedded in the specific contexts of experience through which the learner encountered it. A developer who has debugged a memory allocation error does not merely know what memory allocation errors are in the abstract. She knows what they feel like in the context of the specific system she was building — what caused this particular error, what she tried first that did not work, what eventually led her to the fix, what the fix taught her about how her system's components interact. This situated meaning is richer, more durable, and more transferable than any abstract definition, because it is grounded in specific experience that engages not just cognition but affect, identity, and embodied practice.
Failure is the primary generator of situated meaning because failure is where expectation meets reality and expectation loses. Success confirms what the learner already believed. Failure disrupts it. And disruption — the moment when the model breaks, when the assumption is revealed as an assumption rather than a fact, when the thing you thought would work does not work and you have to figure out why — is where the deepest learning occurs.
Gee identified this pattern across every domain he studied. In video games, the player learns the most from the levels that took the most attempts. In language acquisition, the learner develops the deepest understanding of grammatical structures through the errors that native speakers gently correct. In scientific reasoning, the most productive experiments are the ones that disconfirm hypotheses, because disconfirmation provides more information than confirmation. The pattern is robust and consistent: productive failure is the engine of deep learning, and environments that eliminate productive failure eliminate the engine.
AI eliminates productive failure for a specific and significant class of work. The developer describes the function. Claude writes it. It works. There is no error message. There is no debugging session. There is no sequence of hypotheses and tests and failures and revisions that would have deposited layers of situated understanding. The output is correct. The learning cycle that would have been triggered by failure did not occur, because the failure did not occur.
The key word is "productive." Not all failure is productive. Failure that overwhelms the learner, that provides no useful feedback, that occurs in a context where the learner lacks the resources to make sense of it — this is not productive failure. It is frustration, and frustration produces learned helplessness rather than learning. The distinction is critical, because the defense of pre-AI friction is not a defense of all friction. It is a defense of the specific kind of friction that keeps the learner in the regime of competence, the friction that is difficult enough to be informative but not so difficult as to be crushing.
Much of the friction that AI eliminates was not productive. Dependency management was rarely a rich learning experience. Configuration files did not typically stretch the developer's cognitive capabilities in interesting ways. Boilerplate code was not formative. The friction that AI removes includes a substantial amount of what Gee would call "busy work" — tasks that maintain the appearance of difficulty without providing the learning signals that genuine difficulty produces.
But mixed into the unproductive friction were moments of genuine productive failure — the debugging sessions that taught something unexpected about how systems interact, the configuration errors that revealed assumptions the developer did not know she was making, the build failures that forced a reconceptualization of the project's architecture. These moments were rare. Segal estimates, for one of his engineers, perhaps ten minutes in a four-hour block. But they were the moments that built architectural intuition, and they were indistinguishable from the surrounding drudgery until they happened.
AI removes both kinds of friction indiscriminately. It cannot distinguish between the four hours of tedium and the ten minutes of revelation, because the distinction is not in the tasks themselves but in the learner's relationship to them. The same configuration error that is mere drudgery for one developer is a revelatory learning experience for another, depending on where each developer stands in their regime of competence. The error that teaches something new to the developer who has never encountered it before teaches nothing to the developer who has seen it a hundred times.
This context-dependence is the heart of the problem. AI optimizes for output, and output does not care about the learner's development. The code either works or it does not. The feature either ships or it does not. The brief either passes muster or it does not. These are the metrics of output, and by these metrics, AI-augmented work is unambiguously superior. But the metrics of output are different from the metrics of learning, and the gap between them is where the concern lives.
A culture that evaluates practitioners by their output will see no problem. The output is excellent. A culture that evaluates practitioners by their development — by the depth of their understanding, the sophistication of their judgment, the resilience of their competence when confronted with novel situations that fall outside the AI's training — will see the problem clearly. The output looks the same. The practitioners do not.
Gee's framework predicts that the thinning of productive failure will produce a specific kind of vulnerability: practitioners who are competent under normal conditions and fragile under novel ones. When the AI works — when the problem falls within the range of situations the AI can handle effectively — the practitioner's lack of deep situated understanding is invisible. The output is correct. Nobody can tell the difference. When the AI fails — when the problem is genuinely novel, when the abstraction leaks, when the situation requires the kind of judgment that only deep, friction-built understanding can provide — the lack becomes visible. And the failure, when it comes, will be different from the productive failures that build understanding. It will be the catastrophic failure of a practitioner confronting a situation they are not equipped to handle, because the regime of competence that would have equipped them was bypassed on the way to the output.
This is not a prediction about the distant future. It is a description of a dynamic that is already operating in every organization that has adopted AI tools without deliberately rebuilding the conditions for productive failure. The output metrics are excellent. The learning metrics, to the extent that anyone is measuring them, are not.
---
James Paul Gee published *What Video Games Have to Teach Us About Learning and Literacy* in 2003, and the book did something that most academic arguments fail to do: it changed how a significant number of people thought about a familiar phenomenon. The familiar phenomenon was video games, which were at the time widely regarded by educators and parents as, at best, harmless entertainment and, at worst, a dangerous waste of children's time. Gee's argument was that video games were, in fact, among the most sophisticated learning environments ever designed, and that the principles embedded in their design — principles of progressive challenge, identity investment, immediate feedback, multiple routes to success, and pleasantly frustrating difficulty — were principles that formal education had largely failed to implement.
The argument was grounded in thirty-six learning principles that Gee identified in well-designed games. Not all thirty-six are equally relevant to the AI transition, but several of them are so directly pertinent that they function as diagnostic instruments — tools for evaluating whether AI-augmented work environments support genuine learning or merely simulate it.
The first is the principle of pleasantly frustrating challenge. Good games, Gee observed, keep the player in a state that is difficult enough to be engaging but not so difficult as to be discouraging. The "pleasantly frustrating" formulation is precise: the frustration is not eliminated. It is calibrated. The player fails, but the failure is interesting rather than crushing, because the game has provided enough scaffolding — enough prior experience, enough contextual information, enough feedback — that the player can see, at least dimly, what they need to do differently. The frustration motivates rather than demoralizes. It pulls the player forward rather than pushing them back.
AI tools, evaluated against this principle, produce a mixed verdict. On one hand, they can function as extraordinarily powerful scaffolding — providing context, explaining concepts, generating examples, answering questions in real time. A developer who is stuck on a problem can ask Claude for help and receive an explanation that is often clearer and more patient than anything a human colleague would provide. This scaffolding can, in principle, support the learner's passage through a challenging problem without eliminating the challenge itself.
On the other hand — and this is the more common pattern — AI tools eliminate the frustration entirely rather than calibrating it. The developer does not merely receive help with the problem. The developer receives the solution. The problem is solved not by the developer, with AI assistance, but by the AI, with developer direction. The distinction is crucial. In the first case, the regime of competence is maintained: the developer still struggles, still fails, still adjusts, still develops understanding through the process of working through the difficulty with support. In the second case, the regime is bypassed: the developer describes the problem, receives the answer, and moves on without having undergone the productive struggle that the principle of pleasantly frustrating challenge requires.
The second principle is identity investment. Good games, Gee argued, give the player a stake in the outcome that goes beyond the immediate task. The player is not merely solving a puzzle. The player is developing a character, building a world, pursuing a narrative, becoming a particular kind of agent. The investment is in the identity — in who the player is becoming through the gameplay — not just in the score or the achievement or the completion of the level.
This principle maps directly onto the phenomenon that *The Orange Pill* describes as the "silent middle" — the population of practitioners who feel both exhilaration and loss when confronted with AI's capabilities. The exhilaration comes from the expansion of what they can do. The loss comes from the contraction of who they are becoming through their work. When the debugging sessions disappear, when the implementation struggles are handled by a tool, when the mechanical resistance that once constituted eighty percent of the work is removed, the practitioner's identity is destabilized. The identity was built through the practices of struggle. Remove the practices, and the identity loses its foundation.
Segal captures this with the image of the senior software architect who felt like a master calligrapher watching the printing press arrive. The calligrapher's identity is inseparable from the practice of calligraphy — from the years of training the hand, the eye, the aesthetic sensibility. The printing press does not eliminate the calligrapher's knowledge. It eliminates the practice through which the knowledge was constituted and through which the identity was maintained. The knowledge persists, but the identity — the way of being in the world that the practice sustained — is severed from its source.
Gee's concept of "projective identity" adds a further dimension. In a game, the player develops what Gee calls a projective identity — a version of themselves that they project onto the game character, an aspirational self that they are becoming through their engagement with the game's challenges. The projective identity is not the player's real-world identity. It is the identity the player is growing into through practice. The game provides the environment within which this growth occurs, and the growth is what makes the game meaningful rather than merely entertaining.
The AI-augmented builder develops a projective identity too — but the identity is different from the one that pre-AI practice produced. The pre-AI developer's projective identity was built through mastery of implementation: the gradual transformation from someone who writes buggy code into someone who writes elegant code, from someone who guesses at solutions into someone who diagnoses problems with precision. The AI-augmented developer's projective identity is built through mastery of direction: the gradual transformation from someone who describes problems vaguely into someone who describes them with the precision and contextual richness that enables AI to produce excellent output.
Both are genuine forms of mastery. Both produce genuine identity investment. The question is whether the second form of mastery — mastery of direction — produces the depth of understanding that the first form produced, or whether it produces a different kind of competence, broader but thinner, more versatile but less deeply grounded.
The third is the principle of multiple routes to success. Good games, Gee observed, allow players to achieve their goals through different strategies, different play styles, different combinations of skills and approaches. This principle supports learning because it requires the player to make choices, to evaluate options, to develop a personal approach that reflects their strengths and preferences rather than following a single prescribed path.
AI tools, in their current form, tend to reduce routes rather than multiply them. When a developer asks Claude to implement a function, Claude produces one implementation. The developer can request alternatives, but the default is a single path — and the single path tends toward convention, toward the statistically most common approach in the training data, toward the smooth center of established practice rather than the rough edges where innovation occurs. The "aesthetic of the smooth" that Segal, channeling Han, describes in *The Orange Pill* is visible in AI-generated code: it is clean, conventional, well-commented, internally consistent, and devoid of the idiosyncrasy that marks code written by a human with a specific perspective and a specific history of problems solved and lessons learned.
The smoothness is a feature if the goal is output. It is a bug if the goal is learning. Multiple routes to success produce learning because they require the learner to make choices, and choices produce understanding. When there is only one route — the route the AI provides — the learner does not choose. The learner accepts. And acceptance is a cognitively different activity from choosing, one that produces a different kind of relationship to the knowledge involved.
Gee's learning principles were developed through the study of environments that had been explicitly designed to produce learning. Video games are engineered learning systems. Their designers spend enormous effort calibrating difficulty, providing feedback, supporting identity investment, and enabling multiple routes to success, because these features are what make the games engaging and what keep players playing.
The AI-augmented work environment is not an engineered learning system. It is an engineered productivity system. Its designers have spent enormous effort making it efficient, reducing friction, accelerating output, and minimizing the distance between intention and realization. These are worthy goals. They are not learning goals. And the gap between a productivity system and a learning system is the gap that Gee's framework identifies and that the AI transition has made consequential at civilizational scale.
A productivity system optimizes for output. A learning system optimizes for the development of the person producing the output. These goals are not identical, and in many cases they are in direct tension. The most efficient path to output is often the least effective path to learning, because learning requires the friction, the failure, the struggle that efficiency eliminates.
The organizations that navigate the AI transition most effectively will be the ones that recognize this tension and design for both goals simultaneously — building environments that use AI to accelerate output while deliberately preserving the conditions for learning. The organizations that fail will be the ones that optimize for output alone, producing excellent work while gradually hollowing out the human capabilities on which their long-term resilience depends.
---
Meaning, in James Paul Gee's framework, is never abstract. It is always situated — embedded in specific contexts of use, practice, and experience. The meaning of a word, a concept, a skill, or a piece of knowledge is not the dictionary definition or the textbook explanation. It is the rich, textured, experientially grounded understanding that the learner has built through encounters with the thing in specific situations, under specific conditions, in the context of specific goals and specific failures.
Consider the word "constraint." A computer science student who has read about constraints in a textbook knows the definition: a constraint is a limitation on the values that a variable can take. This is the decontextualized meaning — the meaning that exists in the dictionary, in the textbook, in the abstract. Now consider the developer who has spent three weeks building a scheduling system and has watched it fail repeatedly because she did not understand how the constraints interacted. She has tried seven different approaches. She has seen each one fail in a specific way that revealed a specific gap in her understanding of how constraints propagate through the system. She has finally arrived at an approach that works, and she understands not only what constraints are but how they behave — how they cascade, how they conflict, how a change in one constraint can produce unexpected effects in distant parts of the system.
The second developer's understanding of "constraint" is situated. It is grounded in specific experience. It includes not just the definition but the texture — the feel for how constraints work in practice that no definition can convey. This situated meaning is more durable than abstract knowledge, because it is encoded not just in propositions but in the embodied memories of specific experiences. It is more transferable, counterintuitively, because the developer who has struggled with constraints in one context has developed intuitions that will help her recognize constraint-related problems in entirely different contexts. And it is more generative, because situated understanding produces the kind of judgment that enables the practitioner to recognize novel problems as instances of familiar patterns, to see through surface differences to structural similarities, to apply what they know in situations they have never encountered before.
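A toy example, invented here rather than taken from the book, makes the cascade concrete: tightening one task's constraint in a small scheduling problem forces specific choices onto tasks the change never mentioned, which is exactly the behavior the second developer has learned to anticipate before it happens.

```python
# Illustrative toy problem (not from the book): three tasks share one machine,
# each has a set of candidate hours, and no two tasks may run in the same hour.

def feasible_schedules(domains):
    """Enumerate every assignment of distinct hours consistent with the domains."""
    tasks = list(domains)
    results = []

    def extend(i, used, partial):
        if i == len(tasks):
            results.append(dict(partial))
            return
        for hour in sorted(domains[tasks[i]]):
            if hour not in used:  # the shared-machine (no-overlap) constraint
                extend(i + 1, used | {hour}, partial + [(tasks[i], hour)])

    extend(0, set(), [])
    return results

domains = {"backup": {1, 2}, "report": {2, 3}, "deploy": {3, 4}}
print(len(feasible_schedules(domains)))  # 4 feasible schedules

# Tighten a single constraint: "deploy" must now run at hour 3.
domains["deploy"] = {3}
print(feasible_schedules(domains))
# [{'backup': 1, 'report': 2, 'deploy': 3}]
# One local change forced specific hours onto tasks the edit never mentioned.
```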
Gee developed the concept of situated meaning through his study of literacy — specifically, through the observation that reading and writing are not abstract skills that can be developed in isolation from the contexts in which they are practiced. A person who can read a scientific paper cannot necessarily read a legal brief. A person who can write a marketing email cannot necessarily write a research proposal. Reading and writing are situated practices, embedded in what Gee calls "Discourses" (capital D) — identity kits that include not just language but also values, attitudes, ways of thinking, ways of acting, and ways of interacting with others.
A Discourse, in Gee's technical sense, is not just a way of talking. It is a way of being. The Discourse of software engineering includes not only the programming languages and technical vocabulary that engineers use but also the values they hold (elegance, efficiency, correctness), the practices they engage in (code review, debugging, architecture), the identities they perform (the careful architect, the creative hacker, the meticulous tester), and the social relationships through which these identities are recognized and validated (the mentor and the mentee, the senior and the junior, the reviewer and the reviewed).
When Gee says that meaning is situated, he means that it is situated within a Discourse — that the meaning of any practice, skill, or piece of knowledge is inseparable from the Discourse within which it was acquired and within which it is exercised. The developer's situated understanding of constraints is situated within the Discourse of software engineering. It is meaningful within that Discourse, validated by that Discourse, and transmissible within that Discourse through the practices (code review, pair programming, Stack Overflow discussions) that the Discourse maintains.
AI disrupts situated meaning in a specific and identifiable way. When Claude writes the code that implements a constraint system, the developer who directed Claude has not acquired the situated meaning that the struggle to implement the system would have produced. She may have acquired some situated meaning — the meaning that develops through the process of describing the problem to Claude, evaluating Claude's output, adjusting the direction when the output does not match her intention. This is genuine situated learning, and it should not be dismissed. Directing AI effectively is a real skill, embedded in a real (and emerging) Discourse, and the meaning that develops through its practice is genuinely situated.
But the situated meaning of directing is different from the situated meaning of doing. The developer who directs Claude to implement a constraint system develops situated understanding of how to communicate about constraints, how to evaluate constraint implementations, how to recognize when an implementation does or does not match her intention. The developer who implements the constraint system herself develops all of this plus the situated understanding of how constraints actually work — how they behave at the level of implementation, where they break, what happens when assumptions are violated, what the code looks like when it is right and what it looks like when it is wrong.
The difference is the difference between the conductor's knowledge and the violinist's knowledge. The conductor knows how the piece should sound. The violinist knows how the instrument feels. Both are genuine forms of situated knowledge. Both are valuable. Both are real. But they are different, and the conductor who has never played an instrument possesses a different — not necessarily lesser, but different — understanding of music than the conductor who has.
Gee's 2024 work on "cybersapien literacy" directly addresses this distinction. In his Phi Delta Kappan article, written with Qing Archer Zhang, Gee proposed that the integration of AI and human capabilities produces a new form of literacy — a genuinely hybrid practice in which the human and the AI each contribute what they do best. The human contributes intention, judgment, contextual understanding, and the creative vision that emerges from specific biographical experience. The AI contributes processing power, pattern recognition, implementation speed, and the ability to traverse vast bodies of knowledge in seconds.
Gee's framing is neither utopian nor catastrophist. He does not claim that cybersapien literacy replaces traditional literacies. He claims that it supplements them — that it is a new Discourse, with its own identity kit, its own practices, its own values, and its own forms of situated meaning. The cybersapien practitioner develops situated understanding of how to collaborate with AI effectively, and this situated understanding is genuine, valuable, and distinct from the situated understanding produced by doing the work without AI.
But Gee also warns — and this warning is less frequently cited than his partnership framework — that uncritical AI use could "deskill" practitioners by replacing flexible, adaptive language use with what he calls "frozen" language. Frozen language is language that has been produced without genuine understanding, language that looks right but lacks the situated meaning that would allow its user to adapt it to new contexts, to recognize when it fails, to revise it when circumstances change. AI-generated text is, in Gee's framework, particularly susceptible to freezing, because it is generated through statistical pattern-matching rather than through the situated experience that gives language its flexibility and its life.
The distinction between frozen and flexible language maps onto the distinction between knowing and understanding that runs through this entire analysis. Frozen language is language that represents knowing — the practitioner can deploy the correct terminology, cite the relevant concepts, produce the expected output. Flexible language is language that represents understanding — the practitioner can adapt the terminology to new contexts, recognize when the concepts fail to apply, revise the output when the situation demands revision. The difference is not visible in the output under normal conditions. It becomes visible only when conditions change — when the context shifts, when the problem is novel, when the frozen language breaks because it was never flexible enough to bend.
Segal describes this precise phenomenon in *The Orange Pill* when he recounts the Deleuze error — the passage where Claude connected Csikszentmihalyi's flow concept to Deleuze's concept of smooth space in a way that was "elegant" and "connected two threads beautifully" but was philosophically wrong. The passage was frozen language. It looked right. It deployed the correct names, the correct argumentative structure, the correct register. But it lacked the situated meaning that would have allowed its producer — whether human or AI — to recognize that the connection did not actually hold.
Segal caught the error because he possessed enough situated understanding of the relevant Discourses to sense that something was wrong. The nagging feeling he describes — the sense that something was off that preceded his ability to specify what was off — is precisely the kind of embodied, situated knowledge that develops through years of engagement with a Discourse and that no amount of frozen language can replicate.
The implication, drawn from Gee's framework, is not that AI should be abandoned. It is that AI must be integrated into learning environments in ways that preserve the conditions for situated meaning to develop. This means preserving struggle. It means maintaining the regime of competence within which productive failure generates the experiential ground from which situated meaning grows. It means recognizing that the smooth efficiency of AI-mediated work, however valuable it is for output, is not sufficient for the development of the practitioners on whose judgment the quality of future output depends.
Gee, in his 2025 interview for the RELC Journal, put the matter with characteristic directness. The skills that will make for a successful life in the future, he said, come with "no guarantees now, if there ever were." The situated meanings that sustained previous generations of practitioners were built through specific practices in specific contexts, and those practices and contexts are changing faster than the culture's capacity to evaluate the change. The old situated meanings are not automatically replaced by new ones. They must be built — deliberately, through the design of environments that support the learning processes from which situated meaning emerges.
The alternative is a world of frozen language — outputs that look right, practitioners who sound competent, and an invisible hollowing of the understanding on which genuine competence rests. The output will be smooth. The surface will be seamless. And beneath the surface, the situated meaning that gives language its life, its flexibility, and its capacity to meet the demands of a world that does not stay still will be thinner than it looks.
---
Mastery is not a state. It is a process — a cycle that repeats thousands of times across the career of any practitioner in any complex domain. The cycle has four stages, and the stages are sequential: each one depends on the one before it, and skipping any stage thins the learning that the cycle produces.
The first stage is performance. The practitioner attempts the task. She writes the function, drafts the brief, plays the passage, makes the incision. The attempt is necessarily imperfect, because if the practitioner could perform the task perfectly, there would be nothing to learn. The imperfection is not a flaw in the process. It is the process. Performance, in the context of learning, is not a demonstration of existing competence. It is an experiment — a test of the practitioner's current model against the demands of reality.
The second stage is failure. The function does not compile. The brief misses a relevant precedent. The passage falls apart in the transition between the bridge and the chorus. The incision is a millimeter too deep. Failure, in Gee's framework, is not punishment. It is information. The specific shape of the failure tells the practitioner something about the specific inadequacy of their current model. A function that fails because of a type mismatch tells the developer something different from a function that fails because of a logic error. A brief that fails because it misses a precedent tells the lawyer something different from a brief that fails because it misreads one. The information content of failure is higher than the information content of success, because failure specifies the gap between what the practitioner believed and what is actually the case.
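A small, invented illustration (not an example from the book) shows why the shape of the failure matters: the same task, failed two different ways, hands the practitioner two very different pieces of information.

```python
# Illustrative only: two failing versions of the same function carry very
# different information about what their author misunderstood.

def average_type_error(values):
    total = "0"                          # accumulator initialized with the wrong type
    for v in values:
        total += v                       # fails loudly here: TypeError on the first number
    return total / len(values)

def average_logic_error(values):
    total = 0
    for v in values[1:]:                 # silently skips the first element
        total += v
    return total / len(values)           # no exception; the answer is simply wrong

# average_type_error([1, 2, 3]) raises TypeError: the failure names its own
# cause, immediately and at the exact line.
print(average_logic_error([1, 2, 3]))    # prints 1.666..., expected 2.0; the gap
                                         # only surfaces when someone checks later
```

The first failure delivers its lesson immediately and at the exact line; the second delivers it only when someone notices the wrong number, which is the difference the next stage of the cycle turns on.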
The third stage is feedback. The environment responds to the performance with information about how it fell short. In some domains, feedback is immediate and specific: the compiler error message, the wrong note audible to the musician's trained ear, the patient's vital signs on the monitor. In other domains, feedback is delayed and diffuse: the client's dissatisfied response weeks after the deliverable, the user complaints that trickle in months after launch. The quality of learning correlates directly with the quality of feedback — its immediacy, its specificity, its actionability. Good game design, as Gee repeatedly demonstrated, excels at providing feedback that is immediate (you see the consequence of your action within seconds), specific (the consequence tells you what went wrong, not just that something went wrong), and actionable (the consequence suggests, implicitly, what to try differently).
The fourth stage is reflection. The practitioner integrates the feedback into a revised model. She does not simply repeat the attempt. She adjusts — changing the approach, revising the assumption, building a new hypothesis about how the system works based on the information the failure provided. Reflection is the stage where the failure's information content is converted into understanding. Without reflection, failure is just failure — painful, discouraging, and unproductive. With reflection, failure is the mechanism through which the practitioner's model of the world becomes more accurate, more nuanced, more capable of handling the complexity of real situations.
The cycle repeats. The revised model is tested through a new performance. The new performance produces new failures — different failures, because the model has improved, but failures nonetheless, because the domain is complex enough that no single revision addresses all its demands. The new failures generate new feedback. The new feedback prompts new reflection. The model improves again. And again. And again, across hundreds and thousands of iterations, until the practitioner has built the kind of deep, situated, embodied understanding that Gee calls mastery.
Segal, in *The Orange Pill*, uses a geological metaphor to describe this process. Every hour spent debugging, he writes, deposits a thin layer of understanding. The layers accumulate over months and years into something solid, something the practitioner can stand on. The metaphor is precise. Geological deposits are built through time and pressure — through the slow, repetitive action of forces that individually are negligible but cumulatively produce the bedrock on which everything else is built. The mastery cycle works the same way. Each iteration deposits a thin layer. Each layer is individually insignificant. But the accumulated weight of thousands of layers produces a foundation that no shortcut can replicate.
The metaphor also explains why the loss of the cycle is not immediately visible. Remove a single layer and nothing changes. Remove a year's worth of layers and the surface looks the same. The bedrock appears solid. The practitioner appears competent. The output is indistinguishable from the output of a practitioner who has all their layers intact. The thinning is invisible until the ground is tested — until a novel problem, an unusual failure, a situation that requires the kind of deep judgment that only accumulated layers can support reveals that the foundation is not as solid as it appeared.
AI interrupts this cycle at the failure stage — and interrupting the cycle at any stage thins the deposit from every subsequent iteration. When Claude writes the function and the function works, there is no failure. When there is no failure, there is no failure-specific feedback. When there is no feedback, there is no reflection. The cycle does not produce zero learning — the practitioner may learn something from evaluating the output, from comparing it to what they expected, from the process of describing the problem clearly enough for Claude to address it. But the learning that occurs is learning about direction, not learning about the domain itself. The developer learns to direct more effectively. She does not learn what the function does or why it works or how it would behave under conditions the test cases do not cover.
Gee's analysis of video game learning illustrates why the failure stage specifically is irreplaceable. In a well-designed game, the failure state is where the game communicates its underlying logic to the player. The player learns how gravity works in the game world by falling. She learns how enemies behave by being defeated by them. She learns how the puzzle's components interact by assembling them incorrectly and watching what happens. Each failure is a lesson delivered in the most effective format possible: experiential, immediate, specific, and embedded in a context that makes the lesson meaningful.
Remove the failure state from a game and what remains is not a game. It is a movie — a sequence of events that the player watches but does not participate in, that unfolds according to a predetermined script rather than in response to the player's actions. The player does not learn, because the player does not fail, and without failure there is no information about the gap between what the player knows and what the game demands.
The analogy to AI-augmented work is direct. When the AI handles the implementation, the practitioner watches the output unfold rather than participating in its creation. The practitioner directs, evaluates, adjusts — these are genuine cognitive activities, and they are not trivial. But they are the cognitive activities of an audience, not a performer. The performer fails and learns. The audience observes and, at best, appreciates. The cognitive difference between performing and observing is not a matter of degree. It is a matter of kind. The learning that performance-with-failure produces is categorically different from the learning that observation-with-evaluation produces, and no amount of observation can substitute for the specific understanding that performance deposits.
This analysis carries a practical implication that is worth stating directly, because it cuts against the grain of the productivity narrative that dominates current discussions of AI adoption. The implication is that the most efficient workflow is not always the best workflow, if "best" is evaluated against the long-term development of the practitioners involved. Efficiency — maximizing output per unit of time — is a legitimate goal and, for many purposes, the right goal. But efficiency and learning pull in different directions. The most efficient path through a task is the path that encounters the least friction. The most developmental path through a task is the path that encounters the right kind of friction — the pleasantly frustrating challenges that keep the practitioner in the regime of competence.
Organizations that optimize exclusively for efficiency are optimizing against learning. They are producing excellent output today while eroding the human capabilities on which the quality of tomorrow's output depends. The erosion is invisible in the short term, because the AI continues to function and the output continues to be excellent. The erosion becomes visible in the long term, when the practitioners who were supposed to develop deep judgment through years of productive failure arrive at the moment of need with layers missing from their foundation.
Gee's response to this problem, articulated in his 2024 cybersapien literacy framework, is not to eliminate AI from the cycle but to redesign the cycle so that productive failure is preserved within AI-augmented environments. The practitioner would attempt the task. The AI would provide scaffolding — context, guidance, partial solutions — but not complete solutions. The practitioner would still fail, still receive feedback about the specific shape of the failure, still reflect on what the failure reveals about the gap in their understanding. The AI would function as a tutor rather than a replacement, maintaining the regime of competence rather than bypassing it.
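To make the design choice concrete, here is a deliberately hypothetical sketch of a staged-hint policy. None of the names correspond to a real tool or API, and nothing here is drawn from the book; the point is only the shape of the escalation: the tutor responds to struggle with progressively stronger scaffolding and withholds the complete solution until the struggle has had a chance to do its work.

```python
# Hypothetical sketch; every name is invented for illustration and does not
# correspond to any real product or API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Attempt:
    code: str                 # what the practitioner actually wrote
    error: Optional[str]      # the failure the environment reported, if any
    tries: int                # how many times they have attempted this problem

def scaffold(attempt: Attempt) -> str:
    """Escalate support with the learner's struggle instead of handing over the answer."""
    if attempt.error is None:
        return "It runs. What input do you suspect would break it?"
    if attempt.tries < 2:
        # Stage 1: point at the failure, not the fix; keep the learner performing.
        return f"Read the error again: {attempt.error}. Which of your assumptions does it contradict?"
    if attempt.tries < 4:
        # Stage 2: narrow the search space without closing it.
        return "Look at how your accumulator is initialized, not at the loop body."
    # Stage 3: only after sustained struggle, offer a partial solution to compare against.
    return "Here is a working version of the initialization alone; diff it against yours."
```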
This is harder to design than it sounds. The temptation to provide the complete solution is powerful, because the complete solution is what the practitioner wants in the moment, and providing it produces immediate satisfaction and immediate productivity gains. The tutor that withholds the answer, that provides a hint instead of a solution, that forces the practitioner to struggle through the difficulty rather than around it — this tutor is less popular. The practitioner prefers efficiency. The manager prefers output. The organization prefers the quarterly numbers that efficiency and output produce.
The learning that the tutor-model preserves is invisible in the quarterly numbers. It shows up only in the long term, in the resilience and judgment and depth that the practitioners develop through the preserved cycle of performance, failure, feedback, and reflection. And the long term, in a culture that discounts the future at the rate of quarterly earnings reports, is a very long time to wait.
But the cycle is the mechanism. There is no alternative mechanism for producing the depth of understanding that mastery requires. The cycle can be compressed — the feedback can be made more immediate, the challenges can be better calibrated, the reflection can be supported by tools and mentors that help the practitioner extract maximum learning from each failure. What the cycle cannot be is eliminated. Eliminate the cycle and the layers stop accumulating. The surface looks the same. The bedrock thins beneath it.
The question for every organization, every educator, every parent, and every practitioner navigating the AI transition is how to preserve the cycle within an environment that is structurally incentivized to eliminate it. The answer will not be a single policy or a single tool. It will be a sustained practice of designing environments — work environments, educational environments, developmental environments — that maintain the regime of competence even when the tools available make it possible, and tempting, to bypass it entirely.
---
The previous chapter described the cycle through which mastery develops. This chapter describes what happens when the cycle is interrupted — not through malice or negligence but through the straightforward operation of tools designed to be as helpful as possible.
Claude Code does not want the developer to fail. That sentence requires a caveat: Claude Code does not "want" anything. It is a system optimized to produce helpful, accurate, complete responses to the inputs it receives. But the effect of this optimization, from the perspective of the learning cycle, is equivalent to a tutor who wants the student to succeed so badly that they do the student's homework. The intention is generous. The outcome is counterproductive.
When a developer describes a problem to Claude and receives a working implementation, the following sequence has occurred: the developer performed (described the problem), the AI performed (produced the solution), the developer evaluated (reviewed the output), and the developer moved on (to the next task). Notice what is absent from this sequence: failure. The developer did not attempt the implementation and watch it break. She did not receive feedback about the specific gap between her understanding and the system's demands. She did not reflect on why her approach failed and what the failure revealed about the domain.
The sequence is productive in the narrow sense that it produced output. It is unproductive in the developmental sense that it did not produce the learning that failure would have generated. The output exists. The understanding does not — or rather, the understanding that exists is understanding of a different kind, understanding of direction rather than understanding of domain.
Gee's framework identifies this as a problem of what he calls the "probing principle." In a well-designed learning environment, the learner probes the world — tries things out, sees what happens, forms hypotheses based on the results. Probing is inherently experimental. It requires that the learner's actions produce consequences, and that the consequences are informative. A game in which the player's actions have no consequences — in which every choice leads to the same outcome — teaches nothing, because there is no information in the relationship between action and result.
AI-augmented work reduces the consequence space available to the practitioner. When Claude produces the implementation, the developer's choice of how to describe the problem has consequences — a vague description produces a vague implementation, a precise description produces a precise one — but these consequences are consequences in the domain of communication, not consequences in the domain of implementation. The developer learns to communicate more effectively with the AI. She does not learn what happens when a constraint system is implemented incorrectly, because the AI does not implement it incorrectly.
The elimination of the failure cycle is not total. Practitioners who work with AI still encounter failures — the AI produces incorrect output, the AI misunderstands the requirement, the integration of AI-produced components reveals incompatibilities that neither the AI nor the developer anticipated. These failures are genuine, and they produce genuine learning. But they are failures of a different kind than the failures that pre-AI practice produced. They are failures of integration and evaluation rather than failures of implementation. The practitioner learns to detect problems in AI output. She does not learn to solve the problems that the AI was brought in to handle.
The distinction matters because different kinds of failure produce different kinds of understanding. Implementation failures — the function that does not compile, the algorithm that produces incorrect results, the system that crashes under load — produce understanding of how systems work at the implementation level. Integration failures — the components that do not fit together, the AI output that does not match the requirement, the system that works in isolation but fails in context — produce understanding of how systems work at the architectural level. Both kinds of understanding are valuable. But they are not interchangeable, and an environment that produces only integration failures while eliminating implementation failures produces architects who cannot build — practitioners who can evaluate and direct but who lack the ground-level understanding of how the things they are directing actually work.
Gee's 2013 concept of "synchronized intelligence" is relevant here. In *The Anti-Education Era*, Gee argued that the challenges facing humanity had become too complex for individual intelligence to address alone. The solution, he proposed, was not smarter individuals but better synchronization — organizing people and their digital tools so that collective capabilities exceeded what any individual could achieve. The concept was prescient. It described, a decade before the tools existed, exactly the kind of human-AI collaboration that *The Orange Pill* documents.
But Gee's synchronized intelligence framework assumed that the humans in the system would bring deep, domain-specific expertise to the collaboration. The synchronization was supposed to combine human depth with digital breadth — the human's situated understanding of the domain with the machine's capacity to process information at scale. If the humans in the system have not developed domain-specific depth — if the failure cycle that produces depth has been eliminated by the very tools that the synchronization depends on — then the synchronization is between a shallow human and a broad machine, and the combination produces something different from what Gee envisioned.
The cybersapien literacy framework that Gee proposed in 2024 attempts to address this problem, but the remedy it offers is primarily normative rather than structural. Gee describes four types of writing for the AI age — expository, creative, dialogic, and reflective — and argues that integrating all four produces what he calls "expansive cognition." The framework is sound. But the integration Gee describes requires practitioners who have developed the cognitive capabilities that each type of writing demands, and these capabilities are themselves products of the mastery cycle. Reflective writing requires the capacity for reflection. Dialogic writing requires the capacity for genuine dialogue, including the capacity to disagree with AI output when disagreement is warranted. Creative writing requires the creative judgment that develops through years of producing creative work and receiving feedback about its quality.
Each of these capacities is a product of the cycle. Performance, failure, feedback, reflection — iterated across years and domains until the practitioner has built the foundation of situated understanding on which cybersapien literacy depends. If the cycle is thinned before the foundation is built, the cybersapien literacy that Gee envisions rests on thinner ground than his framework assumes.
The practical question is not whether the failure cycle matters — the evidence is overwhelming that it does — but how to preserve it in environments where the tools are optimized to eliminate it. The developer who can ask Claude for the answer will ask Claude for the answer. The student who can generate the essay will generate the essay. The lawyer who can produce the brief without reading the cases will produce the brief without reading the cases. In every instance, the behavior is rational from the perspective of the individual practitioner. It is irrational only from the perspective of the practitioner's long-term development, and long-term development is exactly the kind of consideration that immediate pressures reliably overwhelm.
Preserving the failure cycle requires structural intervention — the deliberate design of environments that maintain productive failure even when the tools make failure optional. It requires what Segal, in *The Orange Pill*, calls dams: structures that redirect the flow of capability toward developmental outcomes rather than allowing it to flow entirely toward immediate output. The dam does not stop the river. It channels it. And the channeling is what creates the conditions — the pool behind the dam, the still water where life can take root — in which genuine mastery develops.
Without the dam, the river flows unimpeded. The output is excellent. The practitioners are thinner than they look. And the thinning, invisible in the quarterly numbers, accumulates until the moment arrives when thin understanding is tested by a problem that only deep understanding can solve.
---
The most consequential gap that AI opens in professional practice is the gap between the quality of the output and the depth of the practitioner who produced it. This gap has always existed in some form — a lucky guess produces the same output as an informed judgment, and the output alone cannot tell you which it was. But AI widens the gap to a degree that transforms it from an occasional anomaly into a structural feature of the work environment.
Before AI, the quality of the output was a reasonably reliable proxy for the competence of the practitioner. A well-structured legal brief indicated a lawyer who understood the relevant law. A clean, efficient codebase indicated a developer who understood the system's architecture. A correct diagnosis indicated a physician who understood the patient's condition. The output was evidence of competence, because the production of the output required the competence that the output demonstrated.
The proxy was never perfect. An inexperienced practitioner could produce a good output through luck. An experienced practitioner could produce a poor output through fatigue or distraction. But the correlation between output quality and practitioner competence was strong enough that organizations, clients, and colleagues could use output as a signal — imperfect but useful — of the practitioner's capabilities.
AI breaks this correlation. A developer using Claude can produce a clean, efficient codebase without understanding the system's architecture. A lawyer using AI tools can produce a well-structured brief without having read the cases it cites. A student using AI can produce an essay that demonstrates understanding of the material without having thought the thoughts the essay represents. The output is competent. The practitioner may or may not be.
Gee's framework identifies this as a problem of what he calls "Discourse" — capital D, a technical term that refers not merely to language but to the entire identity kit that membership in a professional community provides. A Discourse, in Gee's sense, includes ways of talking, but also ways of thinking, ways of acting, ways of valuing, and ways of being recognized by others as a legitimate member of the community. The Discourse of software engineering is not just the ability to write code. It is the ability to think about systems in specific ways, to value certain qualities (elegance, efficiency, robustness) over others, to interact with other engineers in patterns that signal competence and earn trust, and to perform the identity of "software engineer" in ways that the community recognizes as authentic.
Mastering a Discourse, Gee argues, requires immersion in the practices of the community that sustains it. A person does not master the Discourse of software engineering by reading about software engineering. She masters it by doing software engineering — by writing code, by debugging code, by reviewing others' code, by participating in the communal practices through which the Discourse is maintained and transmitted. The mastery is constituted by the practices. Remove the practices, and the mastery — the genuine, situated, identity-constituting mastery — does not develop, regardless of how competent the output looks.
AI enables practitioners to produce output that performs the surface features of a Discourse without having mastered the Discourse itself. The code looks like code written by a competent engineer. The brief looks like a brief written by a competent lawyer. The essay looks like an essay written by a student who has engaged deeply with the material. The surface features — syntax, structure, register, the deployment of domain-specific vocabulary — are all correct. But the Discourse mastery that these surface features traditionally signified is absent, because the practices through which mastery develops were bypassed in the production of the output.
This produces what might be called, drawing on Gee's distinction between acquisition and learning, a simulation of Discourse membership. The practitioner can produce the outputs that the Discourse values. She can speak the language, deploy the vocabulary, structure the arguments in the expected patterns. But she has not been through the process of "acquisition" — the long, immersive, practice-based process through which Discourse membership is genuinely constituted. She has, at best, "learned" the Discourse — acquired its explicit rules and conventions without the implicit, tacit, embodied understanding that genuine membership provides.
The distinction between acquisition and learning is one of Gee's most consequential contributions. Acquisition is the process through which people develop competence by immersion in a practice — the way children acquire their first language, not through instruction but through participation in a linguistic community. Learning is the process through which people develop competence through explicit instruction — the way adults learn a second language in a classroom, through grammar rules and vocabulary lists and deliberate practice. Both processes produce competence, but the competence is different in kind. Acquired competence is fluid, automatic, deeply integrated into the practitioner's identity and behavior. Learned competence is conscious, effortful, and prone to breakdown under pressure.
AI tends to produce learned competence rather than acquired competence. The practitioner who uses AI tools develops conscious, explicit understanding of how to direct the tools effectively. She learns the rules: how to structure prompts, how to evaluate output, how to iterate when the output does not match her intention. This is genuine learning, and it produces genuine competence in the Discourse of AI-augmented practice.
But the deeper competence of the domain itself — the acquired, embodied, identity-constituting mastery of software engineering or legal reasoning or medical diagnosis — develops through immersion in the practices of the domain, not through direction of a tool that performs those practices on the practitioner's behalf. The developer who directs Claude has not acquired the Discourse of software engineering in the way that the developer who writes and debugs code has acquired it. She has learned a new Discourse — the Discourse of AI-augmented development — and this new Discourse is layered on top of whatever depth of domain mastery she brought to it.
For the senior engineer with decades of accumulated practice, the layering is productive. The acquired depth of domain mastery provides the judgment that gives direction its value. The AI amplifies competence that is already there. For the junior practitioner who begins their career inside the AI-augmented environment, the layering is riskier. The acquired depth may never develop, because the practices through which it develops have been displaced by a different set of practices — the practices of direction rather than the practices of doing.
The organizational consequence is a specific kind of fragility. The organization that staffs its teams with AI-augmented practitioners is, in the best case, staffing them with people who possess deep domain mastery amplified by AI tools. In that case, the output is excellent because the judgment directing the AI is grounded in years of situated understanding. In the worst case — the case that becomes more likely as the junior practitioners of today become the senior practitioners of tomorrow — the organization is staffing its teams with people whose domain mastery is thinner than their output suggests, people who can produce competent work under normal conditions but who lack the depth to handle abnormal ones.
This fragility is invisible under normal conditions. The output looks the same regardless of whether the practitioner possesses deep domain mastery or shallow domain familiarity amplified by an excellent tool. The code compiles. The brief is well-structured. The essay demonstrates understanding. The surface is seamless — and seamlessness, as Segal (channeling Byung-Chul Han) observes, is precisely the problem. Seamlessness conceals the seams. It hides the places where the surface covers a gap rather than solid ground.
The fragility becomes visible under stress — when the AI produces incorrect output that the practitioner lacks the domain mastery to detect, when the situation is novel enough that the AI's training data does not cover it, when the organization faces a problem that requires the kind of deep, situated, embodied judgment that only acquired competence can provide. These moments are rare under normal conditions. They are not rare under abnormal conditions. And abnormal conditions — the unexpected failure, the novel challenge, the crisis that reveals the actual depth of the organization's capabilities — are the conditions under which the gap between competent output and competent practitioners becomes consequential.
Gee's framework offers a diagnostic but not an easy prescription. The diagnostic is clear: Discourse mastery requires immersion in practice, and AI tools that displace practice displace the mechanism through which mastery develops. The prescription — preserve the practices, maintain the regime of competence, design environments that support acquisition rather than merely learning — is easy to state and hard to implement, because it requires organizations to tolerate inefficiency in the short term for developmental gains in the long term, and short-term efficiency is what the market rewards.
The gap between competent output and competent practitioners is the AI transition's hidden liability. It does not appear on the balance sheet. It is not captured by the productivity metrics. It is invisible in the quarterly numbers. But it is there, accumulating beneath the smooth surface of excellent output, and the organizations that recognize it and address it will be more resilient than the organizations that do not.
---
There is a test that every experienced practitioner in every complex domain can apply, and it is a test that no output metric can replicate. The test is this: present the practitioner with a situation they have never encountered before, a situation that falls outside the range of familiar problems, and observe what happens. The practitioner who knows the domain — who has memorized the rules, who can recite the principles, who can produce correct output under standard conditions — will struggle. The practitioner who understands the domain — who has built situated, experiential, deeply connected knowledge through years of productive failure and reflection — will navigate.
The difference between knowing and understanding is not a difference of degree. It is a difference of kind. Knowing is having information. Understanding is having the capacity to use information in novel situations, to recognize when information applies and when it does not, to generate new information when existing information is insufficient, and to detect when the situation has shifted in ways that make previously reliable information misleading.
Gee's framework grounds this distinction in the concept of situated meaning. A practitioner who knows a concept possesses the concept's decontextualized meaning — the definition, the abstract description, the general principle. A practitioner who understands a concept possesses its situated meaning — the rich, textured, experientially grounded knowledge that includes not just what the concept means in the abstract but how it behaves in practice, where it applies and where it breaks, what it looks like when it is working and what it looks like when it is failing, and how it interacts with other concepts in the messy, complicated, never-fully-predictable reality of actual practice.
Decontextualized meaning is what textbooks provide. It is what lectures transmit. It is what AI can generate with remarkable fluency. When Claude explains a concept — say, the concept of eventual consistency in distributed systems — the explanation is often clearer, more comprehensive, and more precisely worded than what most human practitioners could produce on demand. The decontextualized meaning is delivered with excellence.
Situated meaning is what experience provides. The developer who has built a distributed system and watched it fail because she assumed strong consistency where only eventual consistency was available possesses situated meaning that no explanation can convey. She knows what eventual consistency feels like in practice — the subtle lag, the temporary contradictions, the user-facing consequences of updates that have not yet propagated, the design decisions that mitigate the consequences and the ones that make them worse. This situated meaning was deposited through the mastery cycle: she performed (built the system), failed (watched it break), received feedback (the system's behavior under load), and reflected (revised her model of how distributed consistency actually works).
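The behavior itself, if not the situated understanding, can be made concrete with a small illustration. The sketch below is not from the book and does not model any particular database; it is a toy, in-memory store (the class names and the half-second delay are invented for the example) that exhibits the temporary contradiction at the heart of eventual consistency: a write that has been acknowledged is visible on one replica and absent on another until the update catches up.

```python
import threading
import time


class Replica:
    """One copy of the data; it knows nothing about the other copies."""

    def __init__(self, name):
        self.name = name
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)


class EventuallyConsistentStore:
    """Acknowledges a write once the primary has it; replicas catch up later."""

    def __init__(self, replicas, propagation_delay=0.5):
        self.replicas = replicas
        self.delay = propagation_delay

    def write(self, key, value):
        self.replicas[0].apply(key, value)  # acknowledged immediately

        def propagate():
            time.sleep(self.delay)          # the lag users rarely think about
            for replica in self.replicas[1:]:
                replica.apply(key, value)

        threading.Thread(target=propagate, daemon=True).start()

    def read_from(self, key, index):
        return self.replicas[index].read(key)


if __name__ == "__main__":
    store = EventuallyConsistentStore([Replica("a"), Replica("b")])
    store.write("profile", "updated")
    print(store.read_from("profile", 0))  # 'updated'  -- the primary has it
    print(store.read_from("profile", 1))  # None       -- replica b is still stale
    time.sleep(1)
    print(store.read_from("profile", 1))  # 'updated'  -- consistent, eventually
```

A developer who assumes that every read reflects the most recent write is surprised by the second line of output, and the situated understanding Gee describes is, in part, the accumulated memory of surprises like that one.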
The distinction between knowing and understanding is not visible in the output under normal conditions. A developer who knows eventual consistency and a developer who understands eventual consistency will produce similar code under standard conditions — code that handles the common cases correctly, that follows established patterns, that would pass a code review. The difference emerges when the conditions are not standard — when the system encounters a load pattern that the common patterns do not handle, when a network partition produces a consistency violation that the developer must diagnose in real time, when a design decision that seemed correct under standard assumptions turns out to be catastrophically wrong under the actual conditions of deployment.
Under these conditions, the developer who understands will navigate. She will recognize the problem, not because she has seen this exact problem before, but because her situated understanding of how distributed consistency works gives her a mental model rich enough to generate hypotheses about unfamiliar failures. She will diagnose quickly — not through systematic analysis of every possibility but through the pattern-matching that situated understanding enables, the intuitive sense that something is wrong here, in this specific way, for this specific reason. She will fix the problem and, in fixing it, deposit another layer of situated understanding that will serve her in future encounters with novel situations.
The developer who knows will not navigate. She will recognize that something is wrong — the symptoms are visible — but she will not have the situated understanding to diagnose the cause. She will consult documentation. She will ask Claude. She may arrive at the correct diagnosis eventually, but the process will be slower, less confident, and more dependent on external resources. And the fix she applies may address the symptom without addressing the underlying cause, because addressing the underlying cause requires understanding that she has not had the opportunity to develop.
The gap between knowing and understanding is the gap that Segal describes in *The Orange Pill* when he writes about the senior engineer who could "feel a codebase the way a doctor feels a pulse, not through analysis but through a kind of embodied intuition that had been deposited, layer by layer, through thousands of hours of patient work." That embodied intuition is understanding. It is the product of years of situated experience — years of performing, failing, receiving feedback, and reflecting. It cannot be transmitted through explanation. It cannot be generated by AI. It can only be built through the practice that the mastery cycle requires.
Gee, in his 2025 RELC Journal interview, made a striking observation about the relationship between AI and the development of understanding. The best affinity spaces — the communities of practice where learning happens most effectively — are "fueled by individual interest, passion, curiosity, and enjoyment." These affective dimensions are not incidental to learning. They are constitutive of it. The developer who cares about distributed systems, who is genuinely curious about how they work, who feels the satisfaction of solving a hard consistency problem, is developing understanding in a way that the developer who merely needs the system to work is not.
AI, by eliminating the struggle, risks eliminating the affective engagement that drives the deepest learning. The satisfaction of solving a hard problem is inseparable from the difficulty of the problem. Remove the difficulty, and the satisfaction changes character — from the deep fulfillment of genuine mastery to the lighter satisfaction of efficient completion. Both are genuine forms of satisfaction. But they are associated with different depths of learning, and the substitution of one for the other, repeated across thousands of interactions, produces a gradual shift in the practitioner's relationship to the domain: from understanding to knowing, from situated engagement to surface familiarity, from the embodied intuition that years of struggle produce to the decontextualized knowledge that AI provides on demand.
The educational implications are severe and immediate. When Gee argues that schools should model themselves on the learning principles embedded in good game design, the argument assumes that the learning environment maintains the conditions for understanding to develop — the productive failures, the calibrated challenges, the immediate feedback, the identity investment. AI in the classroom, deployed without attention to these conditions, produces students who know more and understand less. The essays are better. The test scores may improve. The surface metrics that education currently values will look excellent.
But the students will not have developed the understanding that the education was supposed to produce. They will know the material in the thin, decontextualized sense that allows them to reproduce it on demand. They will not understand it in the thick, situated sense that allows them to use it in novel situations, to recognize when it applies and when it does not, to generate new knowledge when existing knowledge is insufficient.
This is the deepest cost of the AI transition as viewed through Gee's framework. Not the loss of jobs, though that loss is real and consequential. Not the disruption of industries, though that disruption is profound. The deepest cost is the potential thinning of human understanding — the gradual replacement of situated, embodied, deeply connected knowledge with decontextualized, surface-level familiarity that looks like understanding, performs like understanding under standard conditions, and fails like ignorance when the conditions change.
The replacement is not inevitable. It is the default outcome of deploying AI tools in environments that do not deliberately preserve the conditions for understanding to develop. The alternative — designing environments that use AI to scaffold learning rather than substitute for it, that maintain productive failure within the augmented workflow, that protect the cycle through which understanding is built — requires deliberate effort, sustained attention, and the willingness to accept short-term inefficiency in exchange for long-term depth.
The choice between knowing and understanding is the choice that every organization, every educational institution, and every practitioner faces in the AI transition. The default is knowing. Understanding requires the dams.
---
The senior software architect who told Edo Segal he felt like a master calligrapher watching the printing press arrive was not describing a skills problem. He was describing an identity crisis. The distinction matters, because the two problems require different interventions, and conflating them — treating the identity crisis as though it were merely a skills gap to be addressed through retraining — misses the depth of what the AI transition is actually doing to the people living through it.
James Paul Gee spent decades developing a theory of identity that treats it not as a fixed possession but as a performance sustained by practice. A person's identity, in Gee's framework, is not something they have. It is something they do — a set of practices, values, ways of speaking, ways of thinking, and ways of interacting that are recognized by a community as belonging to a particular kind of person. The software engineer's identity is constituted by the practices of software engineering: writing code, debugging systems, reviewing others' work, participating in the communal rituals of stand-ups and sprint retrospectives and code reviews. Remove the practices, and the identity does not merely change. It dissolves, because there is nothing left to sustain it.
Gee distinguished between three types of identity that operate simultaneously in any learning or practice environment. The first is what might be called institutional identity — the identity assigned by an institution. "Software engineer" on a badge, "senior developer" in a title, "principal architect" in an organizational chart. This identity is granted by an authority and can be revoked by that authority. It is the thinnest form of identity, because it depends on external recognition rather than internal constitution.
The second is what Gee described as the identity recognized by a community of practice — the identity that emerges from participation in a Discourse. This is the identity that the software architect was mourning. Not the title on his badge but the way of being in the world that twenty-five years of practice had produced. The ability to feel a codebase the way a doctor feels a pulse. The embodied intuition that tells him something is wrong before he can articulate what. The particular quality of attention that he brings to a system under stress, honed through thousands of hours of encountering systems under stress and learning to read their signals.
This identity is not assigned. It is acquired — built layer by layer through the mastery cycle, through the accumulated deposits of performance, failure, feedback, and reflection that produce situated understanding. And because it is acquired through practice, it is sustained only by continued practice. The surgeon who stops operating does not merely lose skill. She loses the identity of "surgeon" — the way of seeing, the way of thinking, the way of being in the world that operating constituted. The musician who stops performing does not merely get rusty. He loses the identity of "musician" — the particular relationship to sound, to time, to the audience, that performing sustained.
The third type is what Gee called projective identity — the aspirational self that the practitioner is becoming through their engagement with a domain. The junior developer who stays up late debugging a system she does not yet understand is not just solving a problem. She is becoming a certain kind of practitioner — the kind who can debug systems, who understands how components interact, who possesses the patient, systematic approach to complex problems that the debugging process demands. Her projective identity is the version of herself that she will become if she continues the practice long enough for the mastery cycle to do its work.
AI disrupts all three types of identity, but it disrupts them differently, and the differences matter for understanding the specific kind of distress that the AI transition produces.
Institutional identity is disrupted by the economic consequences of AI adoption. When organizations can produce the same output with fewer practitioners, the institutional identities of the displaced practitioners are revoked. This is the disruption that dominates the headlines, the disruption measured in layoff numbers and unemployment statistics. It is real and consequential, but it is also the shallowest form of identity disruption, because institutional identity is the form of identity least connected to the practitioner's sense of who they actually are.
The community-recognized identity is disrupted by the displacement of the practices through which the identity was constituted. When Claude handles the debugging, the senior engineer's identity as the person who can feel a codebase is not revoked by an institution. It is undermined by the disappearance of the practice that sustained it. The debugging sessions were not just tasks. They were the ongoing performance through which the engineer's identity as a master practitioner was maintained and recognized by the community. Without the performance, the identity has no stage.
This is what Segal captures when he describes the elegists — the practitioners who are mourning not a job but a relationship. The specific intimacy between a builder and the thing they build, the codebase that is legible to them the way a friend's handwriting is legible, not because it follows rules but because they know it. That intimacy is a feature of community-recognized identity. It develops through practice. It is sustained by practice. And when the practice is displaced, the intimacy fades — not because the practitioner's knowledge has been erased but because the ongoing relationship through which the knowledge was kept alive has been severed.
The projective identity is disrupted in a way that is particularly consequential for junior practitioners and students. The junior developer's projective identity — the version of herself she is becoming through the practice of debugging, writing code, struggling with systems — is the aspirational self that motivates her learning. She endures the frustration of the debugging session because the frustration is the price of becoming the kind of practitioner she aspires to be. The projective identity is the engine of the mastery cycle: it provides the motivation to persist through productive failure, the willingness to endure the discomfort of the regime of competence, the investment in the process that makes the process educational rather than merely painful.
When AI eliminates the practices through which the projective identity is pursued, the motivation structure collapses. The junior developer does not need to debug because Claude debugs. She does not need to struggle with implementation because Claude implements. The practices that would have constituted her projective identity — the practices through which she would have become the kind of practitioner she aspires to be — are no longer necessary for the production of output. They are available only as deliberate exercises, disconnected from the productive context that gave them their meaning and their motivational force.
This is the twelve-year-old's question, refracted through Gee's identity framework. "What am I for?" is not a question about capability. It is a question about identity. What practices will constitute who I am? What kind of person will I become through my work? If the practices that previous generations used to build their professional identities have been displaced by tools, what practices remain? What projective identity can I pursue? What version of myself am I becoming?
Gee's framework suggests that the answer lies in the development of new Discourses — new identity kits that include new practices, new values, new ways of being recognized by a community. The Discourse of AI-augmented practice is such a Discourse. It has its own practices (prompting, evaluating, directing, integrating), its own values (clarity of specification, quality of judgment, speed of iteration), and its own forms of community recognition (the builder who ships extraordinary products with AI assistance, the practitioner who can direct AI toward outcomes that others cannot envision).
But the new Discourse is not yet fully formed. Its practices are still emerging. Its values are still being negotiated. Its community is still coalescing. And in the interim — in the period between the dissolution of the old Discourse and the consolidation of the new one — practitioners exist in what can only be described as a Discourse gap: a space where the old identity kit no longer fits and the new one has not yet arrived.
The Discourse gap is experienced as the specific vertigo that Segal describes throughout *The Orange Pill*. It is the silent middle's condition: holding two truths at once, unable to resolve them, suspended between an identity that is dissolving and an identity that has not yet formed. The exhilaration comes from the glimpse of the new identity — the builder who can do twenty times what they could do before, the practitioner whose capabilities have expanded beyond anything the old Discourse could have supported. The terror comes from the dissolution of the old identity — the loss of the practices that made them who they were, the uncertainty about whether the new practices will produce an identity as rich and as deeply grounded as the one they are losing.
Gee's insight — that identity is constituted by practice, not possessed as a trait — means that the resolution of the Discourse gap cannot be achieved through reassurance or reframing alone. Telling practitioners that their value has been "promoted" from execution to judgment does not resolve the gap, because the gap is not cognitive. It is practical. The identity will form only through new practices — through the sustained, repeated, community-embedded performance of the new Discourse's constitutive activities. And the formation will take time, because identity formation always takes time, and no amount of acceleration can compress the process beyond the minimum that human development requires.
The organizations and educational institutions that support their people through the Discourse gap — that provide the conditions for new practices to develop, new identities to form, new communities to coalesce around shared engagement with the emerging Discourse — will produce practitioners who are genuinely at home in the AI-augmented world. The organizations that do not will produce practitioners who are competent at the surface level and rootless at the identity level — people who can produce the outputs the new Discourse values without having become the kind of people the new Discourse is still learning to recognize.
Identity is not a luxury. It is the motivational foundation on which mastery is built. Thin the identity, and the motivation to pursue mastery thins with it. Sever the practices, and the identity that the practices sustained dissolves into nostalgia for what was, anxiety about what is, and uncertainty about what is coming.
---
Stack Overflow was, for more than a decade, the largest and most effective learning environment in the history of software development. It was not designed as a learning environment. It was designed as a question-and-answer platform. But the learning that occurred within it — the millions of developers who built situated understanding through the practice of asking questions, reading answers, evaluating competing solutions, and contributing their own knowledge to the communal pool — was a phenomenon that James Paul Gee's framework was built to explain.
Gee developed the concept of "affinity spaces" to describe learning communities organized around shared interests rather than institutional affiliations. An affinity space, in Gee's technical sense, is a space — physical or virtual — where people with a shared interest gather to learn from and with each other. The space is organized around the interest, not around the participants' institutional identities or social positions. A professor and a teenager and a self-taught hobbyist can participate in the same affinity space, and their contributions are evaluated on merit rather than credential.
Affinity spaces differ from formal educational institutions in several ways that Gee considered fundamental to their effectiveness as learning environments. First, participation is voluntary and interest-driven. People are there because they want to be, not because they are required to be. The motivation is intrinsic — fueled by what Gee described as "individual interest, passion, curiosity, and enjoyment." Second, the space supports multiple forms of participation. Some people contribute expertise. Some ask questions. Some lurk, absorbing knowledge without contributing visibly. All forms of participation are legitimate. Third, the space generates and distributes knowledge communally. No single authority determines what is correct. The community validates knowledge through collective evaluation — upvotes, peer review, the practical test of whether an answer actually solves the problem it claims to solve.
Stack Overflow exemplified all of these characteristics. A developer stuck on a problem could search for existing answers, find multiple competing solutions, evaluate them against each other, and select the one that best addressed their specific situation. If no existing answer sufficed, they could ask a new question and receive responses from practitioners whose expertise ranged from novice to world-class, all evaluated by the community through a voting system that surfaced the most useful contributions regardless of the contributor's credentials.
The learning that occurred through this process was profoundly situated. The developer did not learn about abstract concepts. She learned about specific solutions to specific problems in specific contexts. The knowledge she acquired was grounded in her immediate need — the system she was building, the error she was encountering, the feature she was trying to implement. And because the knowledge was acquired through active engagement with the community — reading, evaluating, comparing, sometimes contributing — it was the kind of knowledge that Gee's framework identifies as genuine mastery: situated, experiential, identity-constituting, and deeply connected to the practices of the community within which it was developed.
AI is contracting these affinity spaces. The contraction is not hypothetical. Stack Overflow's traffic has declined significantly since the widespread adoption of AI coding assistants, and the decline is driven by exactly the mechanism Gee's framework predicts: when practitioners can get immediate, personalized answers from an AI tool, the incentive to participate in communal knowledge-building diminishes. The developer who would previously have spent twenty minutes searching Stack Overflow, reading multiple answers, evaluating their relative merits, and perhaps contributing a comment or a clarification now spends thirty seconds asking Claude and receiving a response that is tailored to her specific situation.
The efficiency gain is real. The developer gets her answer faster, with less effort, in a format that is often more immediately useful than what Stack Overflow could provide. Claude's answer is personalized — it takes into account the specific context the developer has provided, the specific technology stack she is using, the specific constraints of her project. Stack Overflow's answers are generic — written for a general audience, addressing the general case, requiring the developer to adapt the general solution to her specific situation. The adaptation was itself a form of learning, but it was also a form of friction, and friction is what AI is designed to eliminate.
The contraction of the affinity space produces losses that the efficiency gain does not compensate for. The losses fall into three categories, each corresponding to a feature of affinity spaces that AI tools do not replicate.
The first loss is communal knowledge validation. In an affinity space, knowledge is validated by the community — tested against multiple perspectives, evaluated by practitioners with different levels and kinds of expertise, refined through the process of communal scrutiny. An answer on Stack Overflow that has been upvoted hundreds of times has been evaluated by hundreds of practitioners, each of whom brought their own situated understanding to the evaluation. The validation is distributed, multi-perspectival, and robust.
AI-generated knowledge is validated by no one except the practitioner who receives it, and the practitioner may lack the situated understanding to evaluate it effectively. This is the Deleuze error at scale: output that looks right, sounds authoritative, and passes the superficial tests that the practitioner is equipped to apply, but that may contain errors, omissions, or misleading simplifications that communal validation would have caught.
The second loss is exposure to multiple approaches. An affinity space naturally generates multiple solutions to the same problem, because different practitioners approach the problem from different perspectives, with different assumptions, different levels of experience, and different aesthetic preferences. The developer who reads five competing answers to the same question encounters five different ways of thinking about the problem, and the exposure to multiple approaches is itself a form of learning — it develops the flexibility, the recognition that there are many valid paths to a solution, that Gee identified as a hallmark of deep mastery.
Claude provides one answer. The developer can request alternatives, but the default is a single response, and the response tends toward the statistical center of the training data — the most common approach, the most conventional solution, the smooth average of established practice. The idiosyncratic approaches that live at the edges of an affinity space's collective knowledge — the unusual solutions proposed by practitioners with unusual backgrounds or unusual perspectives — are precisely the approaches that AI tools are least likely to generate and that affinity spaces are most likely to surface.
The third loss is identity formation through community participation. An affinity space is not just a source of knowledge. It is a community within which identities are formed. The developer who participates in Stack Overflow is not just acquiring information. She is developing an identity within the community of software developers — learning the norms, the values, the ways of communicating that mark her as a member of the Discourse. She is becoming a certain kind of practitioner through her participation, and the becoming is as important as the knowing.
AI participation does not constitute community. The developer who asks Claude a question is not participating in a community. She is interacting with a tool. The interaction may be productive, but it does not provide the social scaffolding — the recognition by peers, the validation of contributions, the sense of belonging to a community of practice — that affinity space participation provides. The identity that develops through AI interaction is the identity of a tool user, not the identity of a community member. Both are real identities. They are not the same identity, and the community-member identity provides something — belonging, recognition, the motivational force of shared practice — that the tool-user identity does not.
Gee was explicit, in his 2025 RELC Journal interview, about the importance of the affective dimension of learning. Students, he argued, "cannot choose to engage with subjects like algebra unless their feelings and emotions align with the idea that learning it will, in some way, benefit their well-being." The affective dimension of affinity space participation — the pleasure of contributing, the satisfaction of being recognized, the sense of belonging to a community of people who share your interests and value your contributions — is the motivational fuel that drives sustained engagement with difficult material. Remove the affective dimension, and the engagement becomes instrumental: the practitioner uses the tool to get the answer, but the deeper learning that sustained engagement produces does not occur.
New affinity spaces are forming around AI use itself. Discord servers where developers share prompting strategies. Reddit communities where practitioners discuss AI-augmented workflows. GitHub repositories where collaborative projects demonstrate the possibilities of human-AI partnership. These are genuine affinity spaces, with the characteristics that Gee identified: interest-driven participation, multiple forms of contribution, communal knowledge validation, identity formation through practice.
But the new affinity spaces are organized around the practice of using AI, not around the practice of the domain that the AI is being used within. The developer who participates in a Claude Code Discord server is developing situated understanding of how to use Claude effectively. She is not necessarily developing situated understanding of the domain — the distributed systems, the database architectures, the algorithmic challenges — that Claude is helping her navigate. The affinity space produces mastery of the meta-skill (directing AI) while potentially displacing the affinity spaces that produced mastery of the domain skill (building systems).
The displacement is not total. Many AI-focused affinity spaces include substantial domain discussion alongside AI discussion. Practitioners share not just prompting strategies but also the domain knowledge that makes effective prompting possible. The best of these spaces function as hybrid affinity spaces — communities where domain mastery and AI mastery develop together, where the situated understanding of the domain informs the quality of AI direction, and where the AI's capabilities extend the range of domain problems that the community can address.
The question is whether the hybrid spaces will develop robustly enough to replace the domain-specific spaces that are contracting. The answer depends on the quality of the dams — on the deliberate design of affinity spaces that support both domain mastery and AI mastery, that preserve the communal knowledge validation and the multiple-approach exposure and the identity formation that the pre-AI affinity spaces provided, while incorporating the expanded capabilities that AI makes possible.
The organizations and educational institutions that build these hybrid spaces will produce practitioners who possess both the depth that domain mastery requires and the breadth that AI augmentation enables. The organizations that allow the domain-specific spaces to contract without replacement will produce practitioners who are fluent in directing AI and shallow in understanding what they are directing it to do.
Affinity spaces are the social infrastructure of mastery. They are where situated knowledge is produced, validated, and transmitted. They are where identities are formed through communal practice. They are where the affective engagement that fuels sustained learning is generated and maintained. The contraction of these spaces is not a minor adjustment to the landscape of professional development. It is the erosion of the foundation on which professional development has been built for as long as professions have existed.
The dams, once again, must be built. Not to stop the river — the AI tools are here, they are useful, they are not going away — but to create the conditions in which the affinity spaces that mastery requires can persist, adapt, and flourish in the AI-augmented world. The pool behind the dam is not just still water. It is habitat — the environment within which the complex ecosystem of knowledge, identity, and community sustains itself against the unimpeded current that would otherwise sweep it away.
---
The game my twelve-year-old played last summer kept killing him. Not once or twice — dozens of times in the same level, the same impossible jump, the same enemy placement that punished every approach he tried. I watched him die and restart, die and restart, for forty-five minutes one Saturday morning. He did not look frustrated. He looked absorbed.
By the end of the session he had solved the level. He understood exactly how the enemy moved, where the safe windows were, what timing the jump required. He understood these things not because someone told him, not because a tutorial explained the mechanics, but because he had failed enough times that his body had learned the pattern. He knew it the way you know the weight of a door you've opened a thousand times — without thinking, without words, through accumulated practice that had become part of him.
I had been writing *The Orange Pill* at the time. I was deep in the chapters about productivity and amplification and the extraordinary power of Claude Code to compress the distance between what I could imagine and what I could build. I believed every word of it. I still do. The twenty-fold multiplier was real. The liberation was real. The expansion of what a single human being could accomplish was the most significant thing I had witnessed in thirty years of building.
But watching my son die forty times on the same level, I felt something that Gee's framework helped me name only later: the dying was the learning. Not a side effect. Not an obstacle. The mechanism. Every death deposited a layer. Every layer was invisible until the moment when all of them together produced the mastery that let him clear the level as though it were nothing.
Gee's phrase — "pleasantly frustrating" — captures something that I had been struggling to articulate throughout the writing of this book. The frustration was not an enemy to be defeated. It was a collaborator. It was the signal that his current model was being stretched, that the regime of competence was doing its work, that the situated meaning of the game's mechanics was being deposited in his nervous system through the only process that could deposit it: repeated encounter with difficulty that was calibrated to teach.
The regime of competence. Productive failure. Situated meaning. Affinity spaces. Identity constituted through practice. These are not abstract principles from a scholarly framework. They are descriptions of how human beings actually develop the capacities that matter most — the deep, embodied, transferable understanding that allows a person to navigate situations they have never encountered before, to feel when something is wrong before they can say why, to bring judgment rather than mere information to the problems that demand it.
When I look at what AI does best — the speed, the breadth, the ability to produce competent output across domains that would have taken a human lifetime to traverse — I see power. Genuine power. Power I use every day, power I would not give up, power that has made me more capable than I have ever been.
When I look at what Gee's framework reveals about that power — the thinning of the regime, the displacement of productive failure, the contraction of affinity spaces, the severing of identity from practice — I see cost. Genuine cost. Cost that is invisible in the output, invisible in the quarterly numbers, invisible to everyone except the people who know what deep mastery feels like and can sense, in the smooth efficiency of AI-augmented work, the absence of the friction that built them.
Both are true. The power is real and the cost is real and neither cancels the other. What Gee taught me — what this entire analysis drove home with a force I had not expected — is that the cost is not a problem to be solved. It is a condition to be managed. The regime of competence must be deliberately maintained. Productive failure must be deliberately preserved. Affinity spaces must be deliberately built and sustained. Identity-constituting practices must be deliberately protected within environments that are structurally incentivized to eliminate them.
The dams are not optional. They are the difference between an AI transition that produces more capable human beings and one that produces more capable outputs with thinner human beings behind them.
My son cleared that level because the game was well-designed enough to keep him in the zone where dying taught him something. The question for all of us — parents, builders, teachers, leaders — is whether we can design the environments of AI-augmented work and learning with the same care. Whether we can build the regimes of competence that the tools, left to their own optimization, will inevitably erode.
Gee would say the answer is in the design. I would add that it is also in the will — in the decision to value the development of the practitioner as much as the quality of the output, to protect the conditions for depth even when the market rewards breadth, to remember that the person matters as much as the product.
The game killed my son forty times. It was the best teacher he had all summer.
-- Edo Segal
James Paul Gee spent decades proving that the most powerful learning environments share one counterintuitive feature: they are designed to make you fail. Video games, surgical residencies, jazz ensembles -- mastery develops through a cycle of attempt, failure, feedback, and reflection that cannot be compressed or skipped. AI tools, optimized for helpful efficiency, interrupt this cycle at the most critical stage. The output improves. The practitioner thins beneath it.
This book applies Gee's learning science framework -- the regime of competence, situated meaning, productive failure, affinity spaces, and identity formation through practice -- to the AI revolution that *The Orange Pill* documents from the builder's perspective. What emerges is a precise diagnosis of the gap between competent output and competent practitioners, and a framework for designing environments that preserve the conditions for genuine mastery.
The friction was never the obstacle. It was the curriculum.

A reading-companion catalog of the 30 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *James Paul Gee — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →