Jean Lave — On AI
Contents
Cover
Foreword
About
Chapter 1: The Tailor's Hands
Chapter 2: The Supermarket and the Classroom
Chapter 3: The Periphery and the Center
Chapter 4: The Community That Thinks Together
Chapter 5: What the Struggle Deposits
Chapter 6: Thin Knowledge, Thick Knowledge
Chapter 7: The Apprenticeship Severed
Chapter 8: The Decontextualization Machine
Chapter 9: Recontextualizing the River
Epilogue
Back Cover
Cover

Jean Lave

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Jean Lave. It is an attempt by Opus 4.6 to simulate Jean Lave's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The question that broke something open for me was not about intelligence. It was about a supermarket.

Jean Lave studied grocery shoppers in Orange County in the 1980s. She gave them arithmetic problems in two settings: in the aisles, where they were comparing prices and managing real household budgets, and on paper, where the problems were mathematically identical but stripped of every contextual detail. In the supermarket, they scored near ninety-eight percent. On paper, fifty-nine.

Same people. Same math. The variable was not ability. It was context.

I read that finding during a stretch when I was watching my engineers in Trivandrum produce extraordinary work with Claude Code — features shipping in days that should have taken weeks. The output was undeniable. But something Lave illuminated made me stop and look harder at what was underneath the output. She spent four decades demonstrating a single, devastating claim: knowledge is not a substance you pour from one container into another. It is constituted by the context in which it is developed. The struggle, the specific environment, the community of practitioners around you, the tools in your hands, the consequences of getting it wrong — these are not obstacles on the path to understanding. They are the medium through which understanding forms.

Remove the medium, and you do not get understanding faster. You get something else. Something thinner.

This matters right now more than it has ever mattered. We are in the middle of the most dramatic expansion of human capability in history, and the tools driving that expansion are, by their nature, decontextualization engines. A large language model extracts patterns from billions of situated human experiences and delivers them as context-free output. The output is often extraordinary. But the situated understanding that produced the original knowledge — the feel a senior engineer has for a codebase, the judgment a lawyer brings to a case she has lived inside for months — does not travel with it.

Lave gives us the vocabulary to see what is being gained and what is quietly being lost. Not to refuse the tools. I am not refusing them; I am building with them every day. But to understand that the context of learning is not incidental to what is learned. It is constitutive. And if we do not design deliberately for the situated engagement that AI makes optional, we will produce a generation of practitioners who are more productive and less wise than any that came before.

This book is that design challenge, examined through the lens of the thinker who saw it most clearly.

-- Edo Segal · Opus 4.6

About Jean Lave

Jean Lave (1939–2023) was an American social anthropologist and learning theorist whose ethnographic fieldwork fundamentally reshaped how scholars understand the relationship between cognition and context. She conducted formative research among Vai and Gola tailoring apprentices in Liberia during the early 1970s, documenting how newcomers acquired mastery not through formal instruction but through gradually deepening participation in the everyday practice of their craft. Her landmark book *Cognition in Practice: Mind, Mathematics, and Culture in Everyday Life* (1988) drew on studies of adult arithmetic in supermarkets and kitchens to demonstrate that human cognition is fundamentally situated — shaped by and inseparable from the specific social, material, and cultural contexts in which it occurs. In 1991, she and Etienne Wenger co-authored *Situated Learning: Legitimate Peripheral Participation*, which introduced the concepts of "communities of practice" and "legitimate peripheral participation" to describe how learning happens through increasing engagement in shared social activity rather than through the transfer of abstract knowledge. Lave held faculty positions at the University of California, Irvine and the University of California, Berkeley, where she spent the majority of her career in the Department of Geography. Her work challenged the dominant cognitivist assumption that knowledge is a portable, context-free substance, influencing fields ranging from education and organizational theory to human-computer interaction and the design of learning technologies.

Chapter 1: The Tailor's Hands

In a workshop in Monrovia, Liberia, in the early 1970s, a young apprentice tailor sat cross-legged on the floor, watching his master cut cloth. He had been watching for months. He had not yet been permitted to hold the scissors. His job, for now, was to sew buttons onto finished garments and to press completed trousers — the final operations in the production sequence, the tasks closest to the finished product and furthest from the complex judgment that determined whether the product would be worth finishing at all.

This arrangement strikes the modern observer as inefficient. Why not start the apprentice at the beginning — teach him to measure, then to cut, then to sew seams, then to assemble, proceeding logically from first principles to finished garment? The answer, which Jean Lave documented across years of ethnographic fieldwork among Vai and Gola tailors in Liberia, is that the logical sequence and the learning sequence are not the same thing. The tailoring masters had discovered, through generations of practice, something that Western educational theory would take decades to formalize: learning does not proceed from simple to complex. It proceeds from the periphery to the center of a living practice, and the periphery is defined not by what is easiest but by what gives the newcomer legitimate access to the whole activity while limiting the damage an unskilled participant can cause.

The apprentice who presses trousers and sews buttons is not performing meaningless busywork. He is handling finished garments — seeing, touching, and internalizing the standard of quality that the entire workshop's practice is organized to produce. He learns what a well-made garment feels like in his hands before he learns any of the operations that produced it. The endpoint comes first. The understanding of what the practice is trying to achieve precedes any instruction in how to achieve it.

Lave's documentation of this process, first published in her contributions to the anthropology of learning and later formalized in *Cognition in Practice* (1988) and *Situated Learning: Legitimate Peripheral Participation* (1991, with Etienne Wenger), constituted a quiet revolution in how learning itself was understood. The revolution consisted of a single, devastating observation: knowledge is not a substance that can be extracted from one context and poured into another. Knowledge is situated. It is embedded in the specific practices, relationships, materials, tools, and social structures within which it was developed. The tailor's knowledge of cloth is inseparable from the specific workshop, the specific scissors, the specific customers whose bodies taught him what measurements mean in practice, the specific master whose corrections were delivered not as abstract principles but as adjustments to particular cuts on particular pieces of fabric on particular afternoons.

Remove the apprentice from the workshop and place him in a classroom. Teach him the same measurements, the same cutting angles, the same assembly sequence — but teach them abstractly, as principles divorced from the practice in which they are meaningful. The resulting knowledge is not the same knowledge. It may contain the same propositional content — the same facts about cloth grain and seam allowances — but it lacks the contextual density, the embodied feel, the social embedding that makes knowledge reliable under the pressure of actual practice. It is, in a term Lave's framework makes precise, structurally thinner.

This observation, arrived at through patient ethnographic work in tailoring workshops and later confirmed across radically different domains — supermarkets, kitchens, Weight Watchers meetings, naval navigation — carries implications that extend far beyond educational theory. It is, in fact, the most rigorous challenge available to a set of assumptions that have become, in the age of artificial intelligence, so pervasive they have become invisible.

Those assumptions, stated plainly: that knowledge is information; that information is context-free; that a correct answer is a correct answer regardless of how it was produced; and that the gap between receiving knowledge and possessing it can be closed by transfer — by moving information from one container (a book, a lecture, a large language model) to another (a human mind).

Every one of these assumptions is wrong. Lave spent four decades demonstrating why.

The demonstration began in Monrovia but found its most striking expression in a series of studies Lave and her colleagues conducted in the early 1980s in Orange County, California. One strand of the work followed adult shoppers through supermarkets; a companion strand followed new members of Weight Watchers, whose program required them to calculate food portions precisely — practical arithmetic in the specific contexts of their kitchens and grocery aisles.

Lave also gave the supermarket shoppers formal arithmetic tests. The results were extraordinary. In the supermarket, navigating real decisions with real consequences — comparing unit prices, calculating discounts, determining the best value among competing products — the shoppers achieved accuracy rates approaching ninety-eight percent. On the formal tests, which contained mathematically identical problems stripped of their practical context, the same people scored around fifty-nine percent.

The same minds. The same mathematical operations. Radically different performance. The variable was not intelligence, not education, not mathematical training. The variable was context.

In the supermarket, the mathematics was situated — embedded in a practice the shoppers understood, serving goals they cared about, performed with physical products they could see and handle and compare. The arithmetic was not an abstract operation performed on symbols. It was a tool wielded in the service of a meaningful activity, and the meaning of the activity — feeding a family well, managing a budget, making choices that would be evaluated by real consequences — provided a scaffold that made the mathematics more reliable, more sophisticated, and more accurate than the same mathematics performed in a vacuum.

In the formal test, the mathematics was decontextualized — stripped of its practical meaning, its physical referents, its social stakes. The numbers on the page had no consequences. The operations had no purpose beyond their own completion. And the shoppers, deprived of the contextual scaffold that had made them extraordinary practitioners, became ordinary test-takers.

Lave's conclusion was not that supermarkets are better classrooms than schools, though the data would support such a reading. The conclusion was more fundamental: cognition itself is situated. The mind does not contain knowledge the way a container holds water — passively, indifferently, independently of the container's shape. The mind produces knowledge in interaction with its environment, and the knowledge it produces bears the shape of that interaction as indelibly as a river bears the shape of its bed.

This conclusion, which Lave developed across the 1980s and formalized in *Cognition in Practice*, was not merely an academic insight. It was, and remains, a direct challenge to the foundational assumption of artificial intelligence.

The foundational assumption of AI, from its origins in the Dartmouth conference of 1956 through the expert systems of the 1980s to the large language models of the 2020s, is that intelligence is computation — the manipulation of representations according to rules. Knowledge, in this framework, is information: propositions, patterns, statistical regularities that can be encoded, stored, transmitted, and applied independently of the context in which they were produced. A large language model trained on the collected text of humanity does not need to have sat in a Monrovian tailor shop to generate accurate statements about tailoring. It does not need to have pushed a cart through a supermarket to solve arithmetic problems about unit pricing. It has processed the representations. It can produce the outputs.

And the outputs are, in many cases, indistinguishable from the outputs of situated human expertise.

This is the fact that makes the current moment so vertiginous. The tailoring apprentice who has spent three years in the workshop and the language model that has processed the workshop's documentation can both describe how to cut a sleeve. The code that a senior engineer produces after years of debugging and the code that Claude generates in thirty seconds can both compile, run, and serve users. The brief that a lawyer drafts after reading the cases and the brief that an AI generates from summaries of those cases can both cite the right precedents and structure the right arguments.

The outputs converge. And in a culture that evaluates knowledge by its outputs — by the quality of the code, the accuracy of the brief, the fit of the garment — convergent outputs are taken as evidence of convergent understanding.

Lave's framework says this is an illusion. Not because the AI outputs are wrong — they are often right — but because output and understanding are different phenomena, produced by different processes, and the difference between them becomes visible only when the situation changes in a way the output did not anticipate.

The tailor who learned in the workshop knows what to do when the cloth behaves unexpectedly — when the grain shifts, when the fabric stretches differently under the scissors than it did on the table, when the customer's body does not match the measurements because bodies never precisely match measurements. This knowledge does not live in any proposition the tailor could state. It lives in the thousands of situated encounters he has had with cloth, scissors, bodies, and the specific demands of specific garments on specific days. It is knowledge that was produced by the context of struggle, and it cannot be separated from that context without degradation.

The person who has received the same information from a language model — who knows the propositional content of tailoring without having undergone the participatory process through which situated tailoring knowledge is constituted — possesses something different. Something that looks the same, in normal conditions, and reveals itself as different only when the situation demands the kind of judgment that decontextualized knowledge cannot provide.

Lave did not work on AI. She endorsed Lucy Suchman's *Plans and Situated Actions* in 1987, calling it "a uniquely creative anthropological approach to human and machine intelligence" that "poses a closely argued challenge to central assumptions in the cognitive sciences." That endorsement was not incidental. Suchman's argument — that human action is fundamentally situated, improvised in response to the specific circumstances of the moment, and cannot be reduced to the execution of pre-specified plans — was the computer science parallel to Lave's anthropological demonstration. Both were challenges to the same target: the cognitivist model that treats intelligence as the manipulation of context-free representations.

But the AI of 1987 was not the AI of 2025. The expert systems that Suchman critiqued were rigid, rule-based, and obviously limited. The large language models that emerged four decades later are flexible, contextually responsive, and capable of producing outputs that pass, in most normal circumstances, for genuine understanding.

The question Lave's framework poses to this new AI is not "Can it produce correct outputs?" It manifestly can. The question is: "What kind of knowledge does it produce in the people who use it?"

When a developer describes a problem to Claude and receives working code in thirty seconds, what has the developer learned? Not in the trivial sense of whether the developer can now recall the syntax — syntax is cheap. In the deep sense of whether the developer has undergone the situated, contextual, struggle-embedded process through which genuine understanding of the problem and its solution would have been constituted.

The answer, Lave's framework insists, is no. Not because the developer is lazy or the tool is flawed, but because the process that produces situated understanding requires contextual engagement — the specific encounter with the specific problem in the specific environment, with all of its friction, frustration, false starts, and eventual resolution. That process is constitutive of what is learned. It is not an obstacle the learner must overcome before acquiring knowledge. It is the medium through which the knowledge is formed, the way water is the medium through which the riverbed is carved. Remove the water and you do not get a faster riverbed. You get dry land.

The Liberian tailor's workshop knew this. The master who made the apprentice press trousers for months before touching the scissors was not being cruel or hierarchical. He was sequencing the apprentice's access to the practice in a way that ensured each stage of participation deposited a specific layer of contextual understanding. The apprentice who pressed trousers understood the feel of a finished garment. The apprentice who later sewed seams understood how seam quality affected the garment he had already learned to feel. The apprentice who eventually held the scissors brought to the cutting table a thick accumulation of situated knowledge — about quality, about materials, about the relationship between each operation and the final product — that a course in cutting theory could never replicate.

Each layer was deposited through practice, not instruction. Each layer required time, struggle, and the specific social context of the workshop — the master's corrections, the other apprentices' examples, the customers' reactions. Each layer was inseparable from the context in which it was formed.

Large language models process the documentation of millions of such workshops. They have access to more propositional content about tailoring, coding, law, medicine, architecture, and every other human practice than any situated practitioner could accumulate in a hundred lifetimes. And the knowledge they produce — in themselves, and in the people who rely on them — is structurally different from the knowledge the workshops produce. Not wrong. Not useless. Thinner. Adequate for production. Insufficient for judgment.

The distinction between production and judgment — between the capacity to generate correct output and the capacity to know what to do when the situation deviates from what the output anticipated — is the central concern of this book. It is the distinction that Lave's four decades of ethnographic work make available, with a precision that no other framework can match, to the most consequential question of the present technological moment.

The tailor's hands knew things his mind could not articulate. The question is what happens to that knowledge — not the articulable kind, but the situated kind, the kind that lives in hands and habits and the accumulated friction of practice — in an age when the hands are no longer needed.

---

Chapter 2: The Supermarket and the Classroom

In 1984, Jean Lave published the results of a study that should have overturned a century of educational assumptions. It did not, because the implications were too uncomfortable for the institutions that would have had to change.

The study was called the Adult Math Project, and its design was deceptively simple. Lave and her colleagues recruited thirty-five adults in Orange County, California, and observed them performing arithmetic in two settings: in the supermarket, where they calculated prices, compared unit costs, and managed household budgets as part of their normal shopping routine; and in a formal testing environment, where they solved arithmetic problems that were structurally identical to the calculations they had performed in the aisles.

The results were not subtle. In the supermarket, accuracy approached ninety-eight percent. On the test, accuracy dropped to fifty-nine percent. The gap was not a quirk of sampling or a statistical artifact. It held across ages, educational backgrounds, and mathematical confidence levels. People who considered themselves "bad at math" and who performed poorly on paper were virtuosos in the aisles. The mathematics was the same. The context was everything.

Lave's interpretation of these results, developed across *Cognition in Practice* and the subsequent work that would lead to *Situated Learning*, challenged the most deeply held assumption of Western educational theory: the assumption of transfer.

Transfer is the idea that knowledge acquired in one context — a classroom, a textbook, a training program — can be applied directly and without significant transformation to another context — a workplace, a kitchen, a supermarket aisle. Transfer is the foundational justification for formal education. If knowledge does not transfer, then the classroom has no obvious relationship to the world it claims to prepare students for, and the billions of hours spent teaching abstract mathematics, decontextualized science, and generalized principles become difficult to defend.

Lave's data said transfer was, at best, weak. The shoppers did not take their supermarket competence into the testing room. More importantly, they did not appear to take their formal education into the supermarket. The arithmetic they performed in the aisles was not the application of school-learned rules to a practical setting. It was a different kind of mathematics entirely — improvisational, contextually driven, shaped by the physical arrangement of products on shelves, the specific goals of that day's shopping trip, and the embodied experience of hundreds of previous trips.

A shopper comparing two bottles of ketchup did not set up a formal proportion and solve for the unit price. She picked up both bottles. She felt their weight. She estimated — not as a fallback from real mathematics but as a sophisticated cognitive operation embedded in a specific physical and social context. She used the shape of the bottle, the feel of the weight, the spatial comparison of the two objects held in two hands, to arrive at a judgment that was more accurate than the formal arithmetic she could not reliably perform on paper.
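
For contrast, the decontextualized form of that comparison, the version the formal test demanded, collapses into a few lines of arithmetic. The sketch below is purely illustrative; the brands, prices, and sizes are hypothetical:

```python
# The formal, context-free version of the shopper's task:
# divide price by quantity and choose the smaller unit price.
# Brands, prices, and sizes are hypothetical, for illustration only.

bottles = {
    "brand_a": (2.89, 32.0),  # (price in dollars, size in ounces)
    "brand_b": (1.79, 18.0),
}

for name, (price, ounces) in bottles.items():
    print(f"{name}: {price / ounces:.3f} dollars per ounce")

best_value = min(bottles, key=lambda n: bottles[n][0] / bottles[n][1])
print(f"better value: {best_value}")
```

The arithmetic is trivial when laid out this way. Lave's point is that the shopper's judgment was not this arithmetic at all; it was an operation performed with hands, eyes, and the residue of a hundred previous trips, and the test environment could not summon it.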

The knowledge was in the practice, not in the head.

This finding, arrived at through careful ethnographic observation rather than laboratory experiment, constituted what amounted to a paradigm challenge. Lave was not saying that people were smarter than they appeared. She was saying something far more radical: that the location of intelligence had been misidentified. Intelligence was not a property of individual minds, extractable and measurable in isolation. Intelligence was a property of the relationship between a mind and its context — a relationship that included the physical environment, the social situation, the tools available, the goals in play, and the history of the practitioner's engagement with similar situations.

Remove the context, and the intelligence degrades. Not because the person has become less intelligent, but because the intelligence was never solely in the person. It was in the system — the person-acting-in-a-setting, to use the phrase Lave would develop in subsequent work.

Now transpose this finding to the age of artificial intelligence.

A large language model is, in a precise sense, the purest expression of decontextualized knowledge ever produced. It has been trained on the text of human civilization — billions of words, representing millions of contexts, stripped of every contextual element except the statistical relationships between tokens. It does not know what a supermarket smells like. It has never felt the weight of a bottle. It has no body, no goals, no history of shopping trips. It possesses the propositional residue of human knowledge — the information that could be written down — and it possesses it with a breadth and accessibility that no situated practitioner could approach.

And yet. The supermarket shopper with her ninety-eight percent accuracy possesses something the model does not: the situated, embodied, contextually specific understanding that makes knowledge reliable in practice. Her knowledge is thick — layered with sensory data, shaped by specific experiences, tested against specific consequences, embedded in a specific life with specific needs and specific constraints. The model's knowledge is thin — technically accurate in many cases, impressively broad, but lacking the contextual density that makes knowledge trustworthy when the situation deviates from the statistical norm.

The thin/thick distinction is not absolute. It is a spectrum, and different kinds of knowledge fall at different points on it. Factual information — the capital of France, the boiling point of water — transfers relatively well across contexts and can be decontextualized without significant loss. Procedural knowledge — the steps for tying a knot, the syntax of a programming language — transfers somewhat less well but can still be usefully communicated in abstraction.

But the kind of knowledge that matters most in professional practice — the judgment about which facts are relevant, which procedures to apply, when to deviate from the standard approach, how to respond when the situation is not quite what anyone expected — this knowledge resists decontextualization almost completely. It is the knowledge the supermarket shoppers demonstrated and the formal testers could not capture. It is the knowledge that Lave spent decades studying in tailoring workshops, kitchens, Weight Watchers meetings, and naval vessels. And it is the knowledge that AI, for all its extraordinary capabilities, produces only a thin simulacrum of.

The implications for how AI is being integrated into professional practice are immediate and troubling.

Consider software development, the domain where AI has made its most dramatic entrance. A developer working through a problem — encountering an error, reading the documentation, forming a hypothesis, testing it, failing, revising, testing again — is engaged in a process that Lave's framework identifies as situated learning. The specific error, the specific codebase, the specific documentation, the specific sequence of failed hypotheses — all of these are contextual elements that shape the knowledge the developer is producing. The developer who has spent an afternoon debugging a memory allocation error has not merely learned about memory allocation. She has learned about this system, this architecture, this particular configuration of dependencies and interactions that produced this particular failure. That learning is situated — embedded in this context, shaped by this struggle, available in this form only because of this specific sequence of engagements.

Claude generates the fix in thirty seconds. The fix is correct. The developer moves on.

What has been lost? Not the fix — the fix is better, faster, more reliable. What has been lost is the afternoon. The specific, frustrating, contextually rich afternoon that would have deposited a layer of situated understanding that no future prompt can replicate. The shopper in the supermarket aisle, holding two bottles and weighing them against each other, was not performing a calculation. She was participating in a practice. The developer who spends an afternoon debugging is not solving a problem. She is participating in the practice of software development, and the participation is constitutive of the expertise she is building.

When the participation is replaced by a query, the expertise is not built. The output is produced. The participation is skipped. And the practitioner arrives at tomorrow's problem with one fewer layer of the situated understanding that would have made tomorrow's judgment more reliable.

Lave's supermarket study revealed something else that carries direct implications for the AI age: the shoppers were not aware that their in-context performance vastly exceeded their test performance. They did not experience themselves as mathematical virtuosos in the aisles. They experienced themselves as people doing their shopping. The mathematical sophistication was invisible to them because it was embedded in a practice they understood as something other than mathematics.

This invisibility has a precise parallel in AI-augmented work. The contextual understanding that a developer builds through years of situated practice is largely invisible — to the developer, to her manager, to the organization. It manifests not as a discrete skill that can be listed on a resume but as a quality of judgment that surfaces only when the situation demands it: when the system breaks in an unexpected way, when the architecture must accommodate a requirement that was not anticipated, when two valid approaches conflict and the choice between them depends on understanding the specific system deeply enough to know which trade-off matters more in this particular context.

Because this judgment is invisible, it is easy to undervalue. And because it is easy to undervalue, it is easy to optimize away.

When an organization adopts AI tools and measures the result by output — code shipped, features delivered, bugs fixed — the metrics improve. The output is faster, broader, more consistent. What the metrics do not capture is the change in the kind of knowledge the organization's practitioners are developing. The metrics measure the artifacts of practice. They do not measure the practice itself — the situated, contextual, socially embedded process through which practitioners develop the judgment that determines whether the artifacts are appropriate, resilient, and wise.

Lave demonstrated that the classroom's failure was not a failure of teaching but a failure of context. The classroom decontextualized knowledge, stripped it of its practical meaning, and then was surprised when the decontextualized knowledge failed to transfer to practice. The same structural critique applies to AI-mediated learning. AI decontextualizes knowledge — not maliciously but inherently, because decontextualization is what statistical language modeling does. It extracts patterns from millions of contexts and produces outputs that belong to no context in particular. The outputs are technically impressive. They are contextually rootless.

The shopper in the supermarket did not need a calculator. She needed the supermarket — the shelves, the bottles, the weight in her hands, the history of a hundred previous trips, the specific goal of this afternoon's dinner. The knowledge she produced was a function of all these contextual elements, and it outperformed the same knowledge produced in their absence by a factor that should have humbled every educational institution in the country.

It did not. The study was published. It was cited. It was admired. The classrooms did not change.

Now the question returns, in a form more urgent than Lave could have anticipated: if the classroom failed because it decontextualized knowledge, and AI decontextualizes knowledge more thoroughly and more efficiently than any classroom ever could, what is the trajectory of understanding in a civilization that has outsourced its cognition to the most powerful decontextualization engine ever built?

The answer is not that the civilization becomes less capable. Its outputs may improve. The code may be better. The briefs may be more thorough. The garments may fit more precisely. But the understanding that supports those outputs — the thick, situated, contextually embedded knowledge that constitutes genuine expertise — will thin, unless the structures that produce it are deliberately preserved.

Unless someone builds the supermarket.

---

Chapter 3: The Periphery and the Center

In the Vai and Gola tailoring workshops of Liberia, the apprentice's journey followed a path that appeared, to the Western educational observer, to be structured backward. The newcomer began not with fundamentals but with finishing. He pressed trousers. He sewed on buttons. He attached waistbands. Only gradually, over months and years, did he move toward the more complex and consequential operations — cutting, fitting, designing — that constituted the core of the master tailor's expertise.

Jean Lave, with Etienne Wenger, named this trajectory legitimate peripheral participation. The phrase is precise in each of its three words, and each word carries weight that the AI revolution puts under extraordinary pressure.

Legitimate: the newcomer's participation is genuine. He is not performing an exercise. He is contributing to the workshop's actual production. The buttons he sews appear on garments that will be sold to real customers. His work has consequences. It matters, and his awareness that it matters shapes his relationship to the practice in ways that simulated exercises cannot replicate.

Peripheral: the newcomer begins at the edge of the practice, performing tasks that are real but low-risk. The periphery is not the bottom of a hierarchy of difficulty. It is the margin of a practice — the zone where a newcomer can participate meaningfully without possessing the full competence that central participation requires. Pressing trousers is peripheral not because it is easy (it requires skill) but because errors at the pressing stage are recoverable. Errors at the cutting stage — where expensive cloth is irreversibly shaped — are not.

Participation: the operative word. The newcomer is not studying the practice from outside. He is inside it, engaged with its materials, its rhythms, its social relationships, its standards. The knowledge he develops is produced through this engagement, not transmitted to him prior to it. He learns by doing — but "doing" here means something far more specific and more consequential than the phrase usually implies. It means participating in the actual social practice of the community, bearing real responsibility for real outcomes, and gradually transforming his relationship to the practice as his competence grows.

This framework — legitimate peripheral participation — was formalized in the 1991 book Lave co-authored with Wenger, and it was based not only on the Liberian tailoring data but on studies of midwives in the Yucatan, quartermasters in the U.S. Navy, butchers in American supermarkets, and nondrinking alcoholics in Alcoholics Anonymous. Across all of these radically different domains, the same structure appeared: newcomers entered a community of practice at its periphery and moved gradually toward full participation through a trajectory of increasing engagement, increasing responsibility, and increasing access to the community's core activities.

The trajectory was not designed. No curriculum specified it. No instructor administered it. It emerged from the structure of the practice itself — from the fact that every practice has peripheral activities that are accessible to newcomers and central activities that require the competence only sustained participation can produce. The learning was a by-product of the practice, not a separate process imposed upon it.

This observation, seemingly modest, has enormous implications for the question of how expertise is produced in any domain. It says that expertise is not the end state of a learning process that precedes practice. It is the end state of a participatory process that is the practice — a process in which the practitioner's relationship to the community, the materials, and the standards of quality is continuously transformed through engagement.

The relevance to software development is immediate. In a traditional engineering organization, the junior developer enters at the periphery. She writes tests. She fixes small bugs. She reviews documentation. She reads other people's code, not as an assignment but as a practical necessity — she needs to understand the system in order to perform even her peripheral tasks. She sits in code reviews and listens to senior engineers argue about architecture. She does not fully understand the arguments, but she absorbs the vocabulary, the standards, the patterns of reasoning that the community uses to evaluate quality.

Over months and years, she moves inward. She takes on more complex bugs. She writes small features. She begins to participate in the arguments she once only observed. She develops opinions — about code style, about architectural trade-offs, about the specific character of this codebase and what it rewards and what it punishes. These opinions are not abstract preferences. They are the crystallization of hundreds of situated encounters with the specific practice of this specific community.

Eventually, she is a senior engineer. She reviews others' code. She makes architectural decisions. She mentors newcomers who are now at the periphery she once occupied. Her expertise is not a body of facts she possesses. It is a relationship to the practice — a relationship that was built, layer by layer, through the specific trajectory of legitimate peripheral participation that her community's structure afforded.

AI disrupts this trajectory with the precision of a surgical instrument that does not know it is cutting a nerve.

When the junior developer has access to Claude, the peripheral tasks change character. She does not need to read other people's code to understand the system, because Claude can explain it. She does not need to write tests by hand, because Claude can generate them. She does not need to fix small bugs through the slow, frustrating process of reading error messages, forming hypotheses, and testing them, because Claude can identify the bug and produce the fix in seconds. She does not need to sit in code reviews absorbing the community's standards, because Claude's output is already competent enough to ship.

Each of these changes is, considered in isolation, a productivity improvement. The junior developer produces more, faster, with fewer errors. The metrics say she is performing well. The trajectory from periphery to center — the slow, situated, contextually embedded process through which she would have developed the judgment that constitutes genuine expertise — has been compressed, altered, or in some cases eliminated entirely.

What has she lost?

Not the facts. Claude's explanations of the codebase are often more thorough and more patient than any senior engineer's. Not the procedures. Claude's code follows best practices with mechanical consistency that human developers rarely match. Not the output quality. The features she ships are competent, sometimes excellent.

What she has lost is the participation. The hundreds of small, situated encounters with the specific resistance of this particular codebase. The experience of reading an error message and not understanding it, and sitting with that incomprehension until something clicked. The experience of watching a senior engineer make a decision she did not understand, and carrying that incomprehension for weeks until the consequences of the decision revealed its logic. The experience of writing a test by hand and discovering, through the writing, an assumption about the system's behavior that turned out to be wrong.

Each of these experiences is a moment of legitimate peripheral participation. Each one deposits a thin layer of contextual understanding. And each one is, from the perspective of output metrics, inefficient.

The organization that measures performance by output will not see the loss. The junior developer's tickets are closed on time. Her code compiles. Her features work. The metrics say she is progressing.

But the trajectory from periphery to center has been altered. She has moved to the center of output without moving to the center of understanding. She is performing central tasks — building features, making architectural choices — without having undergone the peripheral participation that would have prepared her judgment for those tasks. The gap between her output and her understanding is invisible in normal conditions and potentially catastrophic in abnormal ones.

Wenger's career trajectory is itself an instructive case. Before collaborating with Lave, he had earned a doctorate in artificial intelligence at the University of California, Irvine. He had studied the computational model of mind from the inside. He had learned its assumptions, its methods, its strengths, and its limitations through direct engagement with the practice of building AI systems. And he had concluded, through that situated engagement, that the model was wrong — that learning was not the acquisition and processing of information but the transformation of participation in social practice.

Wenger's move from AI to social learning theory was not a rejection born of ignorance. It was a judgment born of situated understanding — of having participated in the practice of AI deeply enough to know what it could and could not capture. He had been at the periphery of the AI community, moved toward its center, and concluded that the community's central assumptions about the nature of intelligence were inadequate. That judgment was itself a product of legitimate peripheral participation. He could not have arrived at it from the outside.

This is the pattern that Lave's framework reveals with uncomfortable clarity: the very judgment that is needed to evaluate AI's impact on learning is the kind of judgment that AI threatens to eliminate. The senior engineer who understands what the junior developer is losing has that understanding because she underwent the peripheral participation that produced it. If the next generation of senior engineers skips that participation — if they arrive at seniority through AI-augmented output rather than situated engagement — they will not have the experiential basis to recognize what has been lost. The loss will become invisible not because it has been solved but because the people who would have been able to see it will not have developed the vision.

The apprenticeship structure of the Liberian tailoring workshop was not a pedagogical theory. It was a practice — evolved, adapted, and maintained by generations of practitioners who understood, through their own situated experience, what newcomers needed in order to develop genuine competence. The masters did not theorize about legitimate peripheral participation. They practiced it, because the practice itself had encoded the knowledge that this trajectory worked.

AI does not destroy the apprenticeship. It restructures it in a way that preserves the appearance of progression while altering its substance. The junior developer still moves from simple tasks to complex ones. She still eventually reviews code and makes architectural decisions. The titles change. The responsibilities grow. But the situated engagement that would have filled each stage with contextual understanding has been thinned, compressed, or replaced by tool-mediated shortcuts that produce output without producing participation.

The question is whether the community of practice notices. Whether the senior engineers, the team leads, the organizational structures recognize that output and understanding have diverged, and design deliberately for the situated participation that AI does not provide.

In the Liberian workshop, the master ensured the apprentice pressed trousers before he touched scissors. The sequencing was not arbitrary. It was an act of stewardship — a recognition that the trajectory of participation matters, that what comes first shapes what comes after, and that the peripheral tasks are not obstacles to be optimized away but foundations upon which central competence is built.

The question the AI age poses to every organization, every profession, every community of practice is whether anyone will perform the master's function. Whether anyone will insist on the trajectory when the tools make it possible to skip it. Whether anyone will protect the periphery when the center is so much more immediately productive.

---

Chapter 4: The Community That Thinks Together

In 1995, the cognitive scientist Edwin Hutchins published a study of navigation aboard a U.S. Navy vessel that demonstrated something the Western intellectual tradition had been resisting since Descartes: thinking does not happen inside individual heads. It happens between them.

Hutchins observed the navigation team of the USS Palau as they brought the ship into San Diego harbor. The task required continuous computation of the ship's position — taking bearings on landmarks, plotting them on charts, calculating course corrections, communicating results to the bridge. No single member of the team could perform the entire computation alone. The knowledge required was distributed across persons, tools, charts, instruments, and the specific communicative practices the team had developed through months of working together.

The quartermaster who took the bearing did not know the ship's position. The plotter who marked the chart did not take the bearing. The coordinator who communicated the fix to the bridge did not perform the mathematics of either operation. But the team — the system of persons, tools, and practices operating together — knew the ship's position with extraordinary precision.
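
To appreciate how much computation the team was jointly carrying, it helps to see one position fix compressed into a single place. The sketch below is a hedged illustration, not Hutchins' own notation: it intersects two lines of position from hypothetical landmarks and bearings, collapsing into a dozen lines of Python what the Palau's team distributed across people, instruments, and talk.

```python
import math

def fix_from_bearings(lm1, brg1, lm2, brg2):
    """Position fix from two visual bearings.

    lm1, lm2: landmark positions as (east, north) chart coordinates.
    brg1, brg2: bearings in degrees, clockwise from north. Each bearing
    defines a line of position through its landmark; the ship sits
    where the two lines cross.
    """
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; no unique fix")
    t = ((lm2[0] - lm1[0]) * d2[1] - (lm2[1] - lm1[1]) * d2[0]) / denom
    return (lm1[0] + t * d1[0], lm1[1] + t * d1[1])

# Hypothetical landmarks and bearings, for illustration only.
lighthouse, water_tower = (0.0, 0.0), (5.0, 0.0)
print(fix_from_bearings(lighthouse, 45.0, water_tower, 315.0))  # (2.5, 2.5)
```

On the Palau, no single person executed anything like this function. The bearing lived in one set of hands, the plot in another, the fix in a third, and the program, so to speak, lived in the team's practice.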

Hutchins called this distributed cognition, and his demonstration of it was not merely an observation about naval navigation. It was a fundamental claim about the nature of intelligence: that cognition, in the real world, is rarely an individual achievement. It is a property of systems — systems composed of people, tools, representations, and the social practices that connect them.

Jean Lave's concept of the community of practice is the social theory that explains how such systems come into being and sustain themselves. A community of practice, as Lave and Wenger defined it, is a group of people who share a domain of concern, engage in joint practice, and develop over time a shared repertoire of resources — tools, vocabularies, stories, methods, standards — through sustained mutual engagement.

The navigation team of the USS Palau was a community of practice. The tailoring workshops of Monrovia were communities of practice. The engineering team at any software company is a community of practice. In each case, the community is not merely a collection of individuals who happen to work in the same place. It is a cognitive system — a structure within which knowledge is produced, maintained, transmitted, and transformed through the interactions of its members.

The crucial insight is that the community does not merely contain knowledge. It produces it. The knowledge of the navigation team was not the sum of its members' individual knowledge. It was something more — an emergent property of their interaction, their communicative practices, their shared history of working through problems together. The senior quartermaster's correction of the junior's bearing was not merely a transfer of information. It was a situated interaction that produced, in both participants, understanding that neither possessed before the interaction occurred.

This social production of knowledge is the mechanism through which communities of practice sustain themselves across generations. When the junior quartermaster joins the team, he does not receive a manual. He enters a community. He watches. He assists. He performs peripheral tasks under the supervision of practitioners who have internalized the community's standards through years of participation. Gradually, he takes on more responsibility. His questions provoke responses that refine both his understanding and the community's articulation of its own practices. He makes mistakes that force the community to make its implicit standards explicit. He introduces variations that the community evaluates, adopts, or rejects.

The community is transformed by his participation. He is transformed by the community's practices. The knowledge flows in both directions, and both the newcomer and the community are different for the exchange.

This bidirectional transformation is the social engine of expertise. It is how professions maintain their standards, develop their methods, and adapt to changing conditions. It is also the mechanism most directly threatened by the introduction of AI into professional practice.

The threat is not that AI replaces individuals. It is that AI makes community participation optional.

Consider the dynamics of a software engineering team before AI. A junior developer writes code. She submits it for review. A senior engineer reads the code, identifies a problematic pattern, and explains — not just what is wrong but why it is wrong, what the consequences might be, what the better approach would look like, and what principle the better approach embodies. The junior developer learns. The senior engineer, in the process of articulating the principle, refines her own understanding of it. The team, through this interaction, maintains and evolves its shared standards.

The code review is not a bureaucratic checkpoint. It is a situated learning event — an interaction within a community of practice through which knowledge is socially produced and transmitted. The junior developer is not receiving information. She is participating in the community's practice of evaluating code, and through that participation, she is internalizing the community's standards at a depth that no documentation could achieve.

Now consider the same team with AI. The junior developer describes the feature to Claude. Claude generates the code. The junior developer submits it. The senior engineer reviews it, and the code is competent — because Claude's output generally is competent. The senior engineer approves it with minor comments. The interaction that would have occurred — the extended discussion about why this pattern is problematic and what principle the better approach embodies — does not happen. Not because anyone decided to eliminate it, but because the occasion for it did not arise. The code was already good enough.

The quality gate was passed. The learning event was not.

Over months, this pattern compounds. The junior developer has fewer occasions for the situated interactions through which the community's standards are transmitted. The senior engineer has fewer occasions for the articulation of principles through which her own understanding is deepened. The team's shared repertoire — the vocabulary, the stories, the collective memory of past failures and the lessons they deposited — grows more slowly, because the interactions that would have generated new entries are less frequent.

The team is still a team. Its members still share a Slack channel and attend standups. But the community of practice — the social system through which knowledge is produced, maintained, and transmitted — is eroding. Not dramatically, not visibly, not in any way that would appear in a quarterly review. Slowly, in the space between the interactions that are no longer happening.

What replaces the community? In many cases, individual-tool interaction. The developer and Claude. The designer and Midjourney. The writer and the language model. Each practitioner developing a private relationship with a tool that provides competent output but does not participate in the social production of meaning.

The tool does not argue. It does not challenge assumptions. It does not say, as a senior engineer might say, "That approach will work, but let me tell you about the time we tried something similar and it failed in production because of a concurrency issue no one anticipated." The tool does not have a history of shared practice. It does not remember the last time this team made this mistake. It does not carry the community's accumulated wisdom in the specific, embodied, socially embedded form that human practitioners carry it.

It produces output. The output is often good. But the output is produced in a social vacuum, and the social vacuum is where the community of practice used to be.

Lave's framework predicts a specific consequence of this shift, and the prediction is grim. If knowledge is socially produced through community participation, then the dissolution of community participation produces a decline in the kind of knowledge that communities produce. Not a decline in output — output may improve. A decline in the situated, contextual, socially embedded understanding that constitutes the difference between competence and expertise, between knowing how to produce a correct result and knowing why a correct result might be the wrong choice in this specific situation.

This is a different kind of loss from the ones typically discussed in the AI discourse. It is not the loss of jobs, or the loss of skills, or the loss of economic value. It is the loss of the social infrastructure through which professional knowledge is generated and maintained. It is an infrastructure loss — like losing not a road but the practice of road-building.

The loss is self-concealing. The community that is dissolving does not announce its dissolution. Code reviews still happen, but they are shorter, less substantive, less likely to generate the extended discussions that produce learning. Mentoring relationships still exist on paper, but they are less likely to produce the specific, situated, contextually rich interactions through which the mentor's tacit knowledge is transmitted to the mentee. Standups still occur, but they are more likely to be status reports than collaborative problem-solving sessions, because the problems have been solved by AI before they reach the standup.

Each of these changes is small. Each is rational in isolation — why have a long code review when the code is already competent? Each contributes to a cumulative erosion that no individual change produces but that the pattern of changes, sustained over months and years, makes inevitable.

The communities of practice that formed the social foundation of professional expertise did not emerge by design. They emerged because the structure of the work required collaboration — because no individual could produce the full range of outputs that the practice demanded, and the interactions required to coordinate individual contributions generated, as a by-product, the social learning that sustained the community's knowledge.

AI does not eliminate the need for collaboration. Complex systems still require multiple perspectives, multiple skills, multiple areas of judgment. But AI reduces the frequency and depth of the collaborative interactions through which knowledge is socially produced. Each practitioner can do more alone. Each practitioner needs the community less. And the community, composed of practitioners who need it less, produces less of the situated knowledge that was its most valuable — and least visible — output.

The question, then, is whether communities of practice can be maintained deliberately in an environment where the economic and practical pressures all point toward individual-tool interaction. Whether organizations can recognize the invisible infrastructure of social learning and protect it against the visible efficiencies of AI-augmented individual production. Whether the value of what the community produces — not the code, but the understanding that makes the code wise — can be articulated clearly enough to justify the apparent inefficiency of preserving the interactions that produce it.

Hutchins' navigation team functioned as a cognitive system because the task required it — because no individual could navigate the ship alone. The tailoring workshop sustained its community of practice because the apprenticeship structure ensured a continuous flow of newcomers whose participation generated the interactions through which the community's knowledge was maintained.

In both cases, the social structure of knowledge production was sustained by necessity. AI removes the necessity. What it does not remove — what it cannot remove — is the need for the kind of knowledge that only community participation produces. The thick understanding. The situated judgment. The feel for a system that comes from years of shared struggle.

Those things are still needed. They are more needed than ever, in a world where AI-generated output is abundant and the judgment to evaluate that output is the scarce resource. But the social structures that produce them are dissolving, quietly, in the efficiency gains that AI provides.

The ship still needs to be navigated. The question is whether anyone is maintaining the team that knows how.

---

Chapter 5: What the Struggle Deposits

In the early months of 2026, an engineer at a mid-sized technology company in Austin, Texas, described an experience that, by that point, had become common enough to constitute a pattern. She had been working with AI coding assistants for roughly four months. Her output had increased substantially — she was shipping features at a pace that would have been impossible six months earlier. Her manager was pleased. Her performance review was excellent. She was, by every metric her organization used to evaluate her, performing at the highest level of her career.

And she had begun to feel, in a way she could not quite articulate, that something was wrong.

The feeling was not anxiety about job security. She was more productive than ever, and the organization valued her output. It was not frustration with the tools, which she found genuinely impressive. It was something subtler — a sensation she described, after searching for the right word, as hollowness. The work was getting done. She was not sure she was doing it.

She compared it to the difference between driving a familiar route and being driven along it. Both get you to the destination. But the driver builds a spatial understanding of the route — the turns, the landmarks, the feel of the road at different points — that the passenger does not. The passenger arrives. The driver arrives and knows the way.

She was arriving. She no longer knew the way.

Jean Lave's framework provides the theoretical vocabulary for what this engineer was experiencing, and the vocabulary is precise enough to distinguish the experience from the superficially similar but fundamentally different phenomena it is often confused with — nostalgia for the old way of working, resistance to change, the Luddite impulse that Segal's Orange Pill treats at length. The engineer was not nostalgic. She liked the tools. She was not resisting change. She had adopted the tools eagerly and used them well. She was registering, at the level of professional intuition, a structural change in the character of her own understanding.

Lave's framework identifies the mechanism. When the engineer debugged code by hand — reading error messages, forming hypotheses, testing them, failing, revising — she was engaged in situated practice. The struggle was not an impediment to learning. The struggle was the learning. Each encounter with a specific error in a specific codebase under specific conditions deposited a layer of contextual understanding — not just about the error but about the system, its architecture, its tendencies, its personality (engineers do use that word, and they are not being whimsical when they do).

The deposits were cumulative. Over months and years, they built into something solid — the kind of thick understanding that manifests not as articulable knowledge but as professional judgment. The senior engineer who looks at a system and senses that something is wrong before she can explain what — who feels the architecture straining under a load it was not designed for, who intuits that a particular pattern will cause problems at scale even though it works perfectly in testing — possesses a form of knowledge that was built, layer by geological layer, through thousands of situated encounters with the resistance of real systems.

This geological metaphor, which appears in Segal's discussion of the aesthetics of smoothness, is remarkably precise when examined through Lave's lens. Each hour spent debugging deposits a thin stratum of contextual understanding. Each failed hypothesis adds a layer. Each interaction with a colleague who saw the problem differently adds another. The strata accumulate into bedrock — the foundation on which professional judgment rests.

The metaphor also reveals what happens when the deposition stops. A riverbed without new sediment does not maintain its current depth. It erodes. The existing layers, no longer reinforced by fresh deposits, become vulnerable to the currents that flow over them. The engineer who stops debugging does not retain her current level of judgment in perpetuity. The judgment, deprived of the ongoing situated practice that produced it, gradually thins.

This is not speculation. The phenomenon of skill decay is well documented in domains where practitioners can identify when their situated practice was interrupted. Surgeons who stop performing a particular operation lose proficiency in it, not because they forget the steps but because the embodied feel — the tactile judgment, the sense of how tissue responds to the instrument — degrades without continuous practice. Musicians who stop performing do not forget the notes. They lose the feel — the specific, embodied, contextually rich understanding of how this instrument responds in this room with this audience.

The AI-augmented developer is not being told to stop practicing. She is being given a tool that makes certain forms of practice unnecessary. The distinction matters because the skill decay is not imposed but emergent. Nobody decided that the engineer should stop building situated understanding. The tool simply made it possible to produce output without the situated engagement that would have produced understanding as a by-product. The by-product was never the goal. The goal was always the output. But the by-product was, in many cases, more valuable than the output itself — because the output served today's need while the by-product built tomorrow's capability.

Lave's ethnographic work in the tailoring workshops revealed this dynamic with particular clarity. The apprentice who pressed trousers was not pressing trousers for the sake of learning. He was pressing trousers because the workshop needed trousers pressed. The learning was a by-product of the labor. But the by-product — the developing feel for finished garments, the accumulating sense of what quality meant in this workshop — was the mechanism through which the apprentice would eventually become a master. Eliminate the labor (by giving the apprentice a machine that presses trousers automatically) and the primary output improves: trousers are pressed faster, more uniformly, more efficiently. The by-product — the situated learning that the labor produced — disappears.

The loss of the by-product is invisible in every metric the workshop uses. Trousers pressed per hour: up. Garment quality: unchanged (the machine presses well). Customer satisfaction: stable. The apprentice's trajectory from periphery to center: quietly disrupted, in a way that will not be visible until the apprentice reaches the center without the understanding that the peripheral participation was supposed to produce.

The engineer in Austin was experiencing this disruption. Her output metrics were excellent. Her understanding was thinning. And the thinning was invisible to her organization because the organization did not measure understanding. It measured output.

There is a specific character to the knowledge that struggle deposits, and Lave's framework makes it possible to describe that character with precision. Struggle-deposited knowledge is not propositional. It cannot be stated in sentences or encoded in documentation. It is the kind of knowledge that Michael Polanyi called tacit — knowledge that manifests in practice without being accessible to articulation. The cyclist does not know the physics of balance. The surgeon does not know the biomechanics of her hand movements. The senior engineer does not know the formal rules that govern her architectural intuitions. But all three perform with a reliability that formal knowledge alone cannot produce.

Tacit knowledge is, by definition, resistant to transfer. It cannot be written down (because it cannot be articulated). It cannot be taught (because teaching requires articulation). It can only be developed — through sustained, situated, embodied engagement with the practice in which it is relevant. The apprentice who watches the master cut cloth does not receive the master's tacit knowledge through observation. He begins the long process of developing his own tacit knowledge through the specific situated encounters that observation makes available to him.

AI can transfer propositional knowledge with extraordinary efficiency. It can generate correct code, accurate explanations, competent analyses. What it cannot transfer is tacit knowledge — because tacit knowledge is not a transferable substance. It is a property of the relationship between a practitioner and her practice, developed through the specific contextual encounters that participation provides.

When AI eliminates the encounters — when the debugging session does not happen because Claude fixed the bug, when the architectural decision is not struggled with because Claude produced a competent architecture, when the code review does not generate a substantive discussion because Claude's output is already clean — the encounters that would have deposited tacit knowledge are removed from the practitioner's trajectory. The propositional knowledge is still available. The answers are still there, more accessible than ever. But the tacit knowledge that would have developed through the struggle to find those answers is not produced.

The engineer in Austin had propositional access to her systems that exceeded anything she had possessed before AI. Claude could explain any component, trace any dependency, identify any vulnerability. She knew more facts about her codebase than she ever had.

And she could feel, in the specific way that tacit knowledge makes itself known — not as an articulable proposition but as a background sense of confidence or its absence — that her grip on the system was loosening. Not her knowledge of the system. Her feel for it. That is the distinction between thin and thick understanding, and it is the distinction in Lave's framework that carries the greatest consequence for the present moment.

The implications extend beyond individual practitioners to the systems they build and maintain. Tacit knowledge is not a luxury. It is a structural component of reliable practice. The surgeon whose hands know the tissue, the pilot whose body knows the aircraft, the engineer whose intuition knows the codebase — these practitioners do not merely perform better. They perform more safely. Their tacit knowledge serves as an early warning system, detecting anomalies that formal monitoring might miss, sensing problems before they manifest as failures. The judgment is unreliable in the way all tacit knowledge is unreliable — it is sometimes wrong, sometimes biased, sometimes resistant to evidence that contradicts it. But it is also indispensable, because the systems these practitioners operate within are too complex for any monitoring framework to capture fully, and the gap between what can be formally monitored and what can go wrong is exactly the gap that tacit knowledge fills.

When the practitioners' tacit knowledge thins — when the deposits stop because the struggle has been optimized away — the early warning system degrades. The engineer who would have felt something wrong with the architecture does not feel it, because the feeling was built from situated encounters she no longer has. The failure, when it comes, surprises everyone, because the person who would have anticipated it was never given the contextual engagement through which anticipation is developed.

The struggle deposits understanding the way a river deposits sediment. Not evenly, not predictably, not in any pattern that can be designed in advance. But cumulatively, reliably, in a way that builds the ground practitioners stand on. AI is not a dam in this metaphor. It is a bypass — a channel that carries the water around the depositional zone, delivering it downstream faster and cleaner, but leaving the zone where sediment would have accumulated dry.

The downstream output improves. The ground thins.

The question is not whether the ground matters. Every practitioner who has ever relied on tacit knowledge — which is every practitioner who has ever developed genuine expertise in any domain — knows it matters. The question is whether the organizations, the educational institutions, and the practitioners themselves will recognize the thinning before it becomes a structural vulnerability, and will design for the situated engagement that AI does not provide and that no tool, however sophisticated, can substitute for.

The engineer in Austin recognized it. She described it as hollowness — the word a person uses when the structure looks intact from the outside but the interior has been evacuated. She could not point to a specific missing skill. She could not identify a specific knowledge gap. She could only feel, with the unreliable but indispensable sensitivity of a practitioner whose tacit knowledge was still functioning well enough to detect its own erosion, that she was becoming less than she had been, even as she produced more than she ever had.

That feeling is the data. Not the kind of data that appears in quarterly reviews or productivity dashboards. The kind that appears in the quiet, situated, contextually specific moment when a practitioner notices that the ground beneath her is thinner than it used to be — and wonders, with a precision that only situated understanding can produce, what will happen when the weight of a real crisis is placed upon it.

---

Chapter 6: Thin Knowledge, Thick Knowledge

The distinction between thin knowledge and thick knowledge is not a matter of degree along a spectrum with clearly marked gradations. It is more like the difference between a photograph of a place and the experience of having lived there. The photograph contains accurate information — the layout of the streets, the color of the buildings, the relative positions of landmarks. A person who has studied the photograph carefully can navigate the place on a first visit. She knows where to turn. She recognizes the buildings. She can find the town square without asking for directions.

But she does not know the place. She does not know which streets flood in heavy rain. She does not know that the cobblestones near the market become treacherously slick in October. She does not know that the shortcut through the alley is safe during the day but avoided by locals after dark, or that the bakery on the corner closes early on Wednesdays, or that the church bell that rings at noon rings five minutes late because the mechanism has been broken since 1987 and the parish cannot afford to fix it. She does not know any of this because this knowledge is not in the photograph. It is in the living — in the specific, accumulated, contextually embedded experience of having walked those streets in different weather, at different hours, across different seasons, with different purposes.

The photograph is thin knowledge. Living there is thick knowledge. Both are real. Both are useful. They are not the same thing, and the difference between them is not merely a matter of quantity — the thick version does not simply contain more information than the thin one. It contains a different kind of information: contextual, embodied, situated, accumulated through the specific engagements of a specific person with a specific place over specific time.

Jean Lave's entire body of work is, in essence, a sustained demonstration that this distinction matters — that the character of knowledge, not just its content, determines its reliability in practice. The supermarket shoppers whose arithmetic was brilliant in context and mediocre in abstraction were not demonstrating a performance gap. They were demonstrating that two different kinds of knowledge had been produced by two different kinds of engagement, and that the contextually situated kind was, for the purposes of actual practice, vastly more reliable.

Thin knowledge is propositional. It can be stated, transmitted, tested. "The boiling point of water at sea level is 100 degrees Celsius." "A binary search has O(log n) time complexity." "In American contract law, consideration is required for a binding agreement." These propositions are context-free in the sense that their truth value does not depend on who states them, where they are stated, or what situation prompted the statement. They transfer well. They can be looked up. They can be generated by a language model with near-perfect accuracy.
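
To see the thinness concretely, consider the binary-search proposition rendered as code, in a minimal, illustrative sketch rather than anyone's production implementation. Everything in it transfers; nothing in it says when it is the right tool:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Thin knowledge in executable form: correct anywhere, O(log n)
    everywhere, and silent about context (whether the list really is
    sorted, whether a linear scan would be simpler at this scale,
    whether this code path is the one that fails under load).
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

The proposition and the code are equally accurate, and equally thin: the knowledge of when, where, and whether to use them lives elsewhere.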

Thick knowledge is relational. It cannot be fully stated because it includes elements that exist only in the relationship between the knower and the known — the feel of the system, the sense of what matters here, the intuition that something is off. "This codebase tends to accumulate technical debt in the authentication module because three different teams have contributed to it over five years and their architectural assumptions are subtly incompatible." This is thin knowledge — it can be stated, documented, communicated. But the senior engineer who knows this codebase thickly knows something more: she knows which of those incompatibilities is likely to cause a production incident under load, because she was on call the last time it happened, at three in the morning, and the specific pattern of cascading failures she observed that night deposited a layer of understanding that no documentation captures.

She knows what the system feels like when it is about to break. That knowledge is thick. It is embodied. It is situated. And it is exactly the kind of knowledge that determines whether a system is maintained wisely or merely maintained.

AI produces thin knowledge with breathtaking efficiency. Claude can generate accurate descriptions of codebases, correct analyses of legal precedents, competent summaries of medical research. The propositional content is often impeccable. A junior developer who asks Claude to explain a system's architecture will receive an explanation that is, in many cases, more thorough, more clearly structured, and more immediately useful than the explanation she would receive from a busy senior colleague.

But the explanation is thin. It contains the propositions without the context. It describes the architecture without the history of decisions, trade-offs, failures, and workarounds that produced the architecture. It names the components without conveying the feel of how those components behave under stress, in production, at three in the morning when the load spikes and the caching layer that everyone assumed was robust turns out to have a race condition that only manifests under specific, improbable, and yet real conditions.

Thick knowledge is produced by a process that Lave's framework describes with precision: sustained participation in a community of practice, through which the practitioner accumulates not just propositional knowledge but contextual understanding — the kind of understanding that manifests as judgment, intuition, the ability to act wisely when the formal knowledge runs out.

The process is slow. It is often frustrating. It is inefficient by any output metric. And it is irreplaceable.

The distinction between thin and thick knowledge is often invisible in normal operations. When the system is running smoothly, when the cases are routine, when the situation falls within the parameters that the formal knowledge anticipates, thin knowledge is sufficient. The junior developer with Claude's explanations can maintain the system, ship features, fix bugs. The junior lawyer with AI-generated briefs can serve clients competently. The medical resident with AI-assisted diagnostics can treat patients within standard protocols.

Normal operations are, by definition, what happens most of the time. This is why the thinning of knowledge is so difficult to detect — it does not show during normal operations. The person with thin knowledge and the person with thick knowledge produce indistinguishable results under ordinary conditions.

The distinction reveals itself at the margins. When the system breaks in a way nobody expected. When the case presents a legal question that the precedents do not clearly resolve. When the patient's symptoms do not match any standard diagnostic pattern. When the situation is abnormal, and the formal knowledge — the propositions, the procedures, the best practices — proves insufficient, and the practitioner must rely on judgment.

Judgment is the application of thick knowledge to situations that thin knowledge cannot resolve. It is the capacity to act wisely in the absence of clear rules. And it is produced not by instruction or information transfer but by the specific, situated, cumulative process of participation that Lave spent four decades documenting.

A study in the Berkeley tradition — Xingqi Maggie Ye and Aruna Ranganathan's 2026 research on AI's effects on work practices — documented a phenomenon that maps with uncomfortable precision onto the thin/thick distinction. Workers using AI tools expanded their scope, taking on tasks outside their traditional domains. A designer started writing code. A developer started doing design work. The boundary-crossing was real, and the output was competent.

But the researchers also documented something less celebrated: the work felt different to the people doing it. More juggling. Less depth. A sense of always moving but never arriving. The workers were producing more, across a wider range, with less of the sustained engagement in any single domain that thick understanding requires.

The expansion of scope is thin. It produces competent output across a broad surface. The deepening of expertise is thick. It produces judgment within a narrow but deeply understood domain. AI tools favor breadth over depth — not because they are incapable of supporting deep work but because the incentive structure of AI-augmented work rewards visible output (features shipped, tasks completed, scope expanded) over invisible understanding (the slowly accumulating judgment that only sustained engagement produces).

The result is a workforce that is wider and thinner. More capable on the surface. Less reliable at the margins. Better equipped for normal operations. Less prepared for the abnormal conditions where thick knowledge is the difference between a manageable problem and a catastrophe.

This is not a hypothetical risk. Every complex system eventually encounters conditions that exceed its design parameters. Every professional practice eventually presents a case that the standard procedures do not cover. Every codebase eventually breaks in a way that the documentation does not describe. At these moments, the organization does not need more output. It needs judgment — the thick, situated, contextually embedded understanding of someone who has been participating in this specific practice long enough to know what the formal knowledge does not say.

If the pipeline that produces such people has been thinned — if the situated participation that builds thick understanding has been compressed, shortcut, or replaced by AI-mediated output — then the organization arrives at the moment of crisis with a workforce that is extraordinarily productive and insufficiently wise.

The thinning of professional knowledge is not unlike the thinning of topsoil — a process that is invisible in the short term, devastating in the long term, and almost impossible to reverse once it has progressed past a critical point. Topsoil is produced by the slow decomposition of organic matter, the patient work of organisms that break down plant material and animal waste into the rich, complex medium that supports agricultural production. The process takes centuries. The destruction takes years. Industrial agriculture can strip topsoil in a generation, producing higher yields in the short term while eliminating the substrate that makes future yields possible.

The analogy is not perfect — no analogy is — but it captures the essential dynamic. AI produces higher professional output in the short term while potentially eliminating the situated engagement that produces the thick knowledge on which future judgment depends. The yields look spectacular. The soil is thinning.

The question is not whether thin knowledge is useful. It is enormously useful. The question is whether a civilization that has optimized for thin knowledge — for the broad, context-free, efficiently transmissible kind — can sustain itself when the situations that demand thick knowledge inevitably arrive.

Lave's framework does not answer that question. It merely makes it impossible to avoid.

---

Chapter 7: The Apprenticeship Severed

For most of human history, the primary mechanism for transmitting professional knowledge from one generation to the next was the apprenticeship. Not the romanticized version — the sage master imparting wisdom to the eager student in a firelit workshop — but the actual, messy, situated version: the newcomer entering a community of practice at its periphery, performing real work under real conditions, absorbing the community's standards through the specific, embodied, contextually rich experience of working alongside practitioners who have internalized those standards through years of the same process.

The apprenticeship model survived the printing press. It survived the factory. It survived formal schooling. It survived, in modified forms, the professionalization of knowledge work in the twentieth century. Medical residencies are apprenticeships. Law clerkships are apprenticeships. Engineering mentorship programs are apprenticeships. Doctoral programs, at their best, are apprenticeships — the student working alongside the advisor on real research, absorbing not just the explicit methods of the discipline but the tacit norms, the judgment about what constitutes a good question, the feel for what the data is saying beneath its surface.

The apprenticeship model survived all of these transitions because none of them eliminated the fundamental condition that the model depends on: the necessity of situated participation for the development of expertise. Books could transmit propositional knowledge. Schools could teach methods. But the thick, contextual, tacit understanding that distinguishes the expert from the competent practitioner could only be developed through sustained engagement with the practice, in the presence of practitioners who embodied the standards that the newcomer was learning to internalize.

AI poses a different kind of challenge to the apprenticeship model than any previous technology. Not because it is more powerful — the printing press was, relative to its era, equally transformative. But because it intervenes at precisely the point where the apprenticeship does its most important work: the juncture between the novice's output and the novice's understanding.

Every previous technology that accelerated production left the relationship between output and understanding intact. The power loom produced cloth faster, but the loom operator still needed to understand the machine. The compiler automated code translation, but the programmer still needed to understand the logic. The calculator performed arithmetic faster, but the engineer still needed to understand the mathematics. In each case, the technology amplified what the practitioner could do without severing the connection between doing and understanding.

AI severs that connection. Not completely — the developer who uses Claude still makes decisions, still exercises judgment, still understands at some level what the code is doing. But the connection is attenuated in a way that no previous tool accomplished. The developer can produce code that works without understanding why it works. The lawyer can file a brief that cites the right cases without having read them. The medical student can generate a differential diagnosis without having examined the patient. The output suggests expertise. The understanding is elsewhere.

This is the apprenticeship severed: the novice arrives at the output of expertise without traversing the experiential path that produces it. And the path — the years of situated struggle, the accumulated layers of contextual understanding, the gradual transformation from peripheral participant to full member of the community of practice — was not merely the route to expertise. It was the mechanism through which expertise was constituted.

Lave documented this mechanism in Liberian tailoring workshops, where the apprentice's journey from pressing trousers to cutting cloth followed a trajectory designed (not consciously, but through generations of practice) to ensure that each stage of participation deposited the specific contextual understanding needed for the next stage. The apprentice who had pressed hundreds of garments had, without formal instruction, developed a feel for finished quality that informed everything he would do when he moved to sewing seams, then to assembling garments, then finally to cutting.

The trajectory was not efficient. A modern efficiency consultant, observing the workshop, would have recommended starting the apprentice on cutting — the high-value operation — and providing formal instruction in the principles that the pressing stage was supposed to impart. The consultant would have been wrong, and Lave's data shows why: the principles cannot be effectively imparted in abstraction. They must be developed through the specific, situated encounters that each stage of the trajectory provides. The pressing stage is not a delay before the real learning begins. It is the first layer of a foundation on which all subsequent learning rests.

AI is the efficiency consultant's dream. It starts the novice at the high-value operation. It provides the output that the entire trajectory was designed to produce. And it does so immediately, without the months and years of peripheral participation that the trajectory requires.

The result, in organizational terms, is faster output. The junior developer ships features sooner. The law clerk produces briefs faster. The medical resident generates diagnoses earlier. The metrics say the apprenticeship is working — the novice is producing at a level that used to take years to achieve.

But the metrics are measuring the wrong thing. They are measuring the output of the apprenticeship, not the transformation of the apprentice. And the transformation — the gradual development of situated understanding through legitimate peripheral participation — is the thing the apprenticeship exists to produce.

A junior developer at a technology company in 2025 began her career with access to Claude from her first day. She was bright, motivated, and effective — she shipped features within her first week. Her manager praised her speed. Her onboarding was considered a success.

Six months later, the codebase encountered a problem that fell outside Claude's competence — a subtle interaction between components that produced intermittent failures under specific, hard-to-reproduce conditions. The kind of problem that requires not just technical skill but deep familiarity with this particular system, its history, its personality, its specific patterns of failure.

The senior engineers on the team spent a day diagnosing the issue. Their diagnosis drew on years of accumulated context — similar failures they had seen, architectural decisions they remembered, the specific behavior of this system under stress that they had observed during previous incidents. The junior developer watched, contributed where she could, and learned from the experience. But she was learning from the outside — observing a process whose foundations she had not built, because the peripheral participation that would have built them had been compressed into AI-assisted feature shipping.

She understood the diagnosis intellectually. She could explain it. She could not have arrived at it, because arriving at it required the thick, situated, contextually embedded understanding that only years of the trajectory she had skipped could provide.

This is not her failure. She did what the tools and the incentives asked her to do. She produced output. She was effective by every measure her organization used. The failure, if it can be called that, is structural — a misalignment between the trajectory the tools enable and the trajectory that expertise requires.

Lave and Wenger used the term "learning curriculum" to describe the set of learning opportunities that a community's practice makes available to newcomers — as distinct from a "teaching curriculum," which is designed and administered by an authority. The learning curriculum is not a plan. It is a property of the practice itself — the sequence of situated encounters that the practice's structure makes available to participants at different stages of their trajectory.

In the tailoring workshop, the learning curriculum was the sequence: pressing, buttoning, sewing seams, assembling, cutting. Each stage provided access to specific situated encounters that deposited specific contextual understanding. The sequence was not arbitrary. It was produced by generations of practice, and it was maintained because the practitioners who had undergone it understood, from their own situated experience, that it worked.

AI disrupts the learning curriculum by making it possible to skip stages. The novice who can produce cutting-level output with AI assistance does not need to spend months pressing trousers. The developer who can ship features with Claude's help does not need to spend months debugging small issues. The stages are skippable because the output can be achieved without them.

But the stages are not separable from the learning they produce. Skipping the stage does not merely skip the output — it skips the situated encounters that the stage would have provided, and with them, the contextual understanding those encounters would have deposited. The learning curriculum is disrupted not because anyone decided to disrupt it but because the tool made the stages appear unnecessary by enabling the output they were supposed to train the novice to produce.

The most consequential implication of the severed apprenticeship is recursive. If the current generation of novices does not undergo the full trajectory of situated participation, they will not develop the thick understanding that enables them to recognize what the next generation of novices needs. The master tailor who insists the apprentice press trousers before touching scissors does so because his own trajectory — his own situated experience of building expertise layer by layer — taught him the value of each stage. The master who did not undergo that trajectory will not insist, because he will not know what was lost.

The erosion is generational. The first generation of AI-augmented practitioners may have thick understanding developed before AI arrived. They can serve as mentors, recognizing what the novices are missing and designing for the situated engagement that the tools do not provide. The second generation, trained with AI from the start, will have thinner understanding and a correspondingly thinner basis for recognizing what the third generation needs. The third generation thinner still.

Each generation is less equipped than the last to diagnose the thinning, because the diagnostic capacity is itself a product of the thick understanding that is being eroded. The loss is self-concealing and self-accelerating — invisible to the very people who are experiencing it, because the experience that would have made it visible was the experience they did not have.

The question, then, is not whether the apprenticeship model should be preserved in its traditional form. Lave herself would resist that framing — her work was descriptive, not prescriptive, and she was careful to avoid romanticizing the practices she studied. The question is whether the function the apprenticeship served — the development of thick, situated, contextually embedded understanding through legitimate peripheral participation in a community of practice — can be preserved in a new form. Whether organizations, educators, and practitioners can design for situated learning deliberately, in an environment where the economic pressures and the tools themselves push toward the thinnest, fastest, most output-oriented engagement possible.

The trajectory still needs to exist. The situated encounters still need to happen. The layers still need to be deposited. The question is who will build the structures that ensure they do, when every incentive points toward skipping them.

---

Chapter 8: The Decontextualization Machine

A large language model trained on the text of human civilization ingests, by one estimate, several trillion tokens — words, fragments of words, punctuation marks — drawn from books, websites, academic papers, forums, documentation, code repositories, and every other form of written expression that digital infrastructure has made available. The training process extracts statistical patterns from this corpus: which tokens tend to follow which other tokens, which sequences are probable given which preceding sequences, which patterns recur across which domains. The result is a system that can generate text that is, in many domains, indistinguishable from human-produced text in its propositional accuracy, structural coherence, and rhetorical competence.
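
The mechanics can be made concrete with a deliberately crude sketch: a bigram counter, a distant toy ancestor of the modern language model, offered here purely for illustration. Real systems are incomparably more sophisticated, but the structural point survives the simplification. What training retains is the pattern of tokens; what it discards is everything Lave would call context:

```python
from collections import Counter, defaultdict

def train(corpus_texts):
    """Count which token follows which across a corpus.

    The provenance of each text (author, situation, purpose, stakes)
    is dropped at the moment of counting; only the statistical shadow
    of the words survives.
    """
    counts = defaultdict(Counter)
    for text in corpus_texts:
        tokens = text.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, token):
    """Return the most probable follower of a token, or None."""
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

model = train([
    "the court held that the contract was void",
    "the court held that the statute applied",
])
print(most_likely_next(model, "court"))  # "held" -- with no memory of which case, which judge, which client
```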

The process is, in a precise sense, an act of decontextualization on a civilizational scale.

Every text in the training corpus was produced in a context. A legal brief was written by a specific lawyer, for a specific client, in a specific jurisdiction, in response to a specific dispute, drawing on specific precedents that the lawyer selected because of specific aspects of the case that she judged to be relevant based on years of situated practice. A piece of code was written by a specific developer, as part of a specific project, within a specific team, to solve a specific problem that arose in the specific context of that project's architecture and that team's practices and that organization's priorities.

The language model retains none of this context. It retains the text — the propositional residue of the situated practice that produced it. The statistical patterns that the model extracts capture what was said without preserving why it was said, by whom, in response to what specific situation, drawing on what specific history of contextual engagement.

This is not a failure of the technology. It is the technology's design. Decontextualization is what large language models do. It is their specific contribution to the processing of information: the extraction of general patterns from specific contexts, producing outputs that are plausible across contexts without being situated in any particular one.

Lave's framework identifies this as a fundamental problem — not because decontextualized information is useless (it is enormously useful) but because the gap between decontextualized information and situated knowledge is precisely the gap where professional judgment lives. And the people who rely on the decontextualized information without awareness of the gap are building on a foundation that looks solid but is, in Lave's terms, structurally thinner than the foundation that situated practice would have produced.

The legal brief generated by Claude cites the right cases. It structures the argument competently. It uses the right terminology and follows the right conventions. What it does not contain is the specific, situated judgment that a lawyer exercised when she chose these cases rather than others — the judgment that was informed by her understanding of this judge's particular interpretive tendencies, this jurisdiction's particular doctrinal emphasis, this client's particular strategic goals, and the thousand other contextual factors that situated practice integrates into the decision about what belongs in a brief and what does not.

The developer who receives code from Claude receives correct code — syntactically accurate, functionally competent, following established patterns. What the code does not contain is the situational wisdom that an experienced developer would have built into it: the awareness that this particular database connection pool has historically been a bottleneck under load in this particular system; that this particular team prefers a specific error-handling pattern because a production incident two years ago taught them the cost of the alternative; that this particular architectural decision was made as a temporary compromise and should not be extended without consulting the architect who made it, whose reasons for the compromise are documented nowhere because they live in her situated understanding of the system's constraints.

The decontextualization machine produces output that is context-free in both senses of the phrase: it was produced without context, and it arrives without context. The practitioner who receives it must supply the context — must judge whether this particular output is appropriate in this particular situation, given the specific factors that the language model did not and could not account for.

And this is where the circularity tightens. The capacity to supply context — to judge whether decontextualized output is appropriate in a specific situation — is itself a product of situated experience. The lawyer who can evaluate Claude's brief has that evaluative capacity because she spent years reading cases, arguing before judges, watching how specific arguments land in specific courtrooms. The developer who can evaluate Claude's code has that evaluative capacity because she spent years building systems, watching them break, learning through situated encounters what the code does not say about itself.

If the next generation of lawyers and developers develops their evaluative capacity through AI-assisted practice rather than situated engagement — if they learn to produce briefs by reviewing Claude's output rather than by reading cases and writing arguments under the pressure of real litigation — then their capacity to supply context, to judge the appropriateness of decontextualized output, will be thinner than their predecessors'. They will be less equipped to detect the gap between what Claude produced and what the situation requires.

The gap will not disappear. It will become invisible — not because it has been closed but because the people who would have been able to see it will not have developed the situated understanding that makes the gap perceptible.

Lucy Suchman, whose 1987 book *Plans and Situated Actions* Lave endorsed, demonstrated this dynamic in the context of human-machine interaction three decades before large language models existed. Suchman studied users interacting with a Xerox photocopier that had been designed using a "plan-based" model of human action — the assumption that users approach machines with clear plans and execute those plans in sequence. What Suchman observed was radically different: users' actions were situated, improvised in response to the specific circumstances of the moment, including the machine's responses, the physical environment, and the user's evolving understanding of what was happening.

The gap between the machine's model of the user (a plan-executor) and the user's actual behavior (a situated improviser) produced systematic failures — not because the machine was broken but because the machine's assumptions about the nature of human action were wrong. The machine decontextualized human behavior, treated it as the execution of abstract plans, and produced responses that were inappropriate for the specific, situated, contextually determined behavior the user was actually performing.

Large language models are vastly more sophisticated than the Xerox photocopier Suchman studied. But the structural issue Suchman identified persists. The model operates on decontextualized representations. The human operates in a specific, situated context. The gap between decontextualized output and situated need is bridged — when it is bridged — by the human's capacity to supply the context that the model lacks. And that capacity is a product of the situated experience that the model's very efficiency threatens to erode.

Suchman updated her analysis in 2007, in *Human-Machine Reconfigurations*, extending it to contemporary robotics and AI. Her argument remained consistent: the question is not whether machines are intelligent but how the interaction between humans and machines is organized — who does what, who knows what, and whose understanding of the situation is taken as authoritative. When the machine's decontextualized output is taken as authoritative — when the developer accepts Claude's code without supplying situational context, when the lawyer files Claude's brief without evaluating it against her situated understanding of the case — the gap between output and situation goes unbridged. Not always. Not catastrophically. But systematically, in a way that accumulates.

The decontextualization machine operates at a scale and speed that makes the accumulation difficult to track. A single instance of unsupplied context — one piece of code that is correct in isolation but inappropriate in context, one legal citation that is accurate but strategically wrong for this case — is trivial. The damage is not in any single instance. It is in the pattern: the steady, invisible accumulation of decisions made on decontextualized information by practitioners whose capacity to supply context is thinning because the situated engagement that produces that capacity has been replaced by tool-mediated output.

The response Lave's framework demands is not the rejection of decontextualized information. That response is neither possible nor desirable — decontextualized information is useful, necessary, and in many cases preferable to the slow, inefficient process of developing situated understanding from scratch. Claude's code is often better than what the developer would have written alone. The AI-generated brief is often more thorough than what the lawyer would have produced under time pressure. The decontextualization machine provides genuine value.

The response Lave's framework demands is recontextualization: the deliberate, institutional, sustained effort to embed AI-mediated information within the situated practices through which practitioners develop the judgment to use it wisely.

Recontextualization means designing the code review not as a quality gate — pass/fail, does the code work — but as a situated learning event, where the reviewer and the developer engage in the kind of extended, contextually specific discussion that deposits understanding. It means structuring the legal mentorship not as supervision of output but as joint engagement with cases, where the senior lawyer's situated understanding of the jurisdiction, the judge, and the strategic context is transmitted through collaborative practice rather than through the review of AI-generated documents.

It means, in organizational terms, recognizing that the most valuable thing a senior practitioner does is not produce output — AI handles that — but maintain and transmit the situated understanding that makes output wise. And it means protecting the time, the space, and the institutional structures through which that transmission occurs, against the relentless pressure to convert every hour into measurable production.

Recontextualization is expensive. It is slow. It is inefficient by every output metric. It requires senior practitioners to spend time on interactions whose value is invisible in the short term and indispensable in the long term. It requires organizations to invest in the social infrastructure of knowledge production — mentoring, collaborative practice, situated engagement — at a moment when the economic pressures all point toward individual-tool efficiency.

It is also the only path that Lave's framework identifies as capable of sustaining the thick, situated, contextually embedded knowledge on which professional judgment depends. The alternative — the continued thinning of knowledge under the weight of decontextualized efficiency — produces practitioners who are more productive and less wise, organizations that are more efficient and less resilient, and a professional culture that is more capable on the surface and less prepared for the conditions where surface capability is insufficient.

The decontextualization machine is extraordinarily powerful. What it cannot do is supply the context it has removed. That work remains human. And preserving the conditions under which humans can do it is the most consequential institutional challenge of the present moment.

---

Chapter 9: Recontextualizing the River

The most common institutional response to AI in 2025 and 2026 was integration — bring the tools into the workflow, measure the productivity gains, celebrate the output. The second most common response was restriction — ban the tools from the classroom, prohibit their use on exams, treat them as a form of cheating. The third response, rarer and more difficult than either, was the one Lave's framework demands: redesign the context of learning itself.

Integration without redesign accepts the decontextualization. The tools arrive. The practitioners use them. The output improves. The situated engagement that the old workflow incidentally provided — the debugging sessions, the extended code reviews, the slow accumulation of contextual understanding through friction-rich practice — erodes. Nobody notices, because nobody was measuring it. The organization optimized for what it could see, and what it could see was output.

Restriction without redesign refuses the decontextualization but offers nothing in its place. The classroom bans ChatGPT. The students write their essays by hand. The friction is preserved. But the friction is preserved in its old form, within a context that no longer matches the world the students will enter. The student who learns to write without AI in 2026 has preserved the situated engagement of the writing process. She has also developed a skill — writing without AI assistance — that she will almost certainly never use professionally. The friction was real. The context was artificial.

Recontextualization is the third path. It accepts the tools. It redesigns the context of learning to preserve the situated engagement that the tools would otherwise eliminate. It does not pretend the tools do not exist, and it does not pretend the tools are sufficient. It asks: given that these tools are here, what must the learning environment provide that the tools cannot?

Lave's framework specifies the answer with precision. The tools cannot provide legitimate peripheral participation — the gradual, situated, socially embedded trajectory from newcomer to full practitioner. They cannot provide community membership — the experience of belonging to a group of practitioners who share standards, negotiate meaning, and maintain the social infrastructure of professional knowledge. They cannot provide the context of struggle — the specific, embodied, friction-rich encounters through which tacit understanding is deposited. And they cannot provide the social production of meaning — the collaborative process through which a community's standards are created, maintained, and evolved.

These are the four elements that recontextualization must preserve. Not as abstract values, but as concrete features of the learning environment — features that can be designed, maintained, and evaluated.

Consider what this means in practice for software engineering, the domain where AI has advanced furthest and where the situated learning implications are most immediately visible.

A recontextualized engineering organization does not ban AI tools. It structures their use within a framework that preserves situated engagement. The junior developer uses Claude — but not for everything. Specific categories of work are designated as AI-free zones: not because the work cannot be done by AI, but because the struggle of doing it without AI produces the situated understanding that the developer needs. These categories are chosen with the same logic that the Liberian tailoring masters used to sequence their apprenticeships: not by difficulty, but by what contextual understanding the struggle deposits.

Debugging, for instance, might be designated as an AI-free zone for the first six months of a developer's tenure. Not because AI cannot debug — it debugs better than most junior developers — but because the process of debugging deposits layers of contextual understanding about the system's architecture, its failure modes, its personality, that no amount of AI-generated explanation can replicate. The developer who has spent forty hours debugging a specific class of error in a specific codebase possesses a situated understanding of that codebase that the developer who received Claude's fix in thirty seconds does not.

The designation is not permanent. After six months, the developer has built enough contextual foundation to use AI debugging tools wisely — to evaluate Claude's fixes against her own situated understanding of the system, to supply the context that Claude cannot. The trajectory from AI-free to AI-augmented mirrors the trajectory from periphery to center: the practitioner earns access to the more powerful tools by developing the judgment needed to use them well.

Code reviews, in a recontextualized organization, are redesigned as situated learning events rather than quality gates. The current practice at most organizations treats code review as a pass-fail checkpoint: does the code work, does it follow standards, is it ready to ship. Recontextualization redesigns the review as an extended dialogue — a conversation between the reviewer and the author about why this approach was chosen, what alternatives were considered, what the trade-offs are, how this code relates to the broader system architecture.

This dialogue is, in Lave's terms, a situated interaction within a community of practice. It is the mechanism through which the community's standards are transmitted, its vocabulary is developed, its shared understanding is maintained. It is also, by output metrics, inefficient. A thirty-minute code review that produces learning is, by throughput measures, inferior to a five-minute review that approves the code and moves on.

The inefficiency is the point. The code review that deposits situated understanding is performing a function that the five-minute review does not — a function that is invisible in the short term and indispensable in the long term. The organization that protects the thirty-minute review is investing in the social infrastructure of knowledge production. The organization that optimizes for throughput is consuming that infrastructure without replenishing it.

Mentoring relationships, in a recontextualized organization, are structured around collaborative practice rather than supervision of output. The senior engineer does not review the junior engineer's AI-generated code and offer corrections. She works alongside the junior engineer on a real problem — a problem complex enough to require the senior engineer's situated understanding and specific enough to provide the junior engineer with contextual encounters that deposit genuine learning.

The apprenticeship model is not restored in its traditional form. The junior developer is not pressing trousers for six months. But the function of the apprenticeship — the transmission of situated understanding through collaborative practice within a community — is preserved in a form appropriate to the tools and the context.

Education presents a different and in some ways more challenging case. The classroom has always been, in Lave's analysis, a deeply problematic context for learning — artificial, decontextualized, organized around the transmission of abstract knowledge rather than the development of situated understanding. The supermarket shoppers who performed brilliantly in context and poorly on tests were demonstrating not the failure of their learning but the failure of the test to capture what they had learned.

AI does not solve the classroom's problems. It amplifies them. A classroom that was already organized around the transmission of decontextualized knowledge now has access to a tool that decontextualizes knowledge more efficiently than any teacher could. The student who receives Claude's explanation of a concept receives a decontextualized account that is clearer, more thorough, and more patient than most human explanations. The student learns the propositions. The student does not undergo the situated engagement that would have produced thick understanding.

Recontextualization in education means redesigning the classroom around situated practice rather than information transfer. This is not a new idea — project-based learning, apprenticeship models, and community-engaged pedagogy have all been proposed and, in some cases, implemented over the past century. But AI makes the redesign both more urgent and more feasible.

More urgent, because the information-transfer function of the classroom — the function that justified lectures, textbooks, and traditional assessment — has been rendered largely redundant by AI. If Claude can explain the concept better than the teacher, the teacher who continues to explain concepts is competing with a tool she cannot beat on the tool's terms.

More feasible, because AI can handle the information-transfer function, freeing the teacher to focus on the function that AI cannot perform: the creation and maintenance of a learning environment in which students develop situated understanding through engagement with real problems in the context of a community of practice.

The teacher who grades questions rather than answers — who evaluates students on the quality of their inquiry rather than the quality of their output — is performing a recontextualization. She is redesigning the learning context to preserve the element that matters most (the student's engagement with the problem) while offloading the element that AI handles better (the production of the answer). The student who produces five excellent questions about a topic has demonstrated deeper engagement with the material than the student who produces a competent essay, because the questions require the student to identify what she does not understand — a metacognitive operation that no AI can perform on her behalf.

But recontextualization in education faces an obstacle that organizational recontextualization does not: the assessment system. Organizations can redesign their internal practices relatively freely. Schools are embedded in assessment systems — standardized tests, college admissions requirements, credentialing frameworks — that reward output over understanding, answers over questions, thin knowledge over thick. A teacher who grades questions rather than answers must still prepare her students for a system that grades answers rather than questions.

This is the institutional challenge that Lave's framework illuminates but cannot, by itself, resolve. The redesign of learning contexts requires the redesign of the institutional structures within which learning occurs — assessment systems, credentialing frameworks, organizational incentive structures. These systems were designed for a world in which decontextualized knowledge was the best approximation of expertise that institutions could measure. They are now operating in a world where decontextualized knowledge is the one thing machines produce better than humans, and where the human contribution — situated judgment, contextual understanding, the thick knowledge that only participation produces — is precisely the thing the assessment systems were never designed to capture.

The Liberian tailoring masters assessed their apprentices through observation of practice — watching the apprentice work, evaluating the quality of the garments he produced, judging his readiness for the next stage of the trajectory based on the accumulated evidence of his situated engagement with the craft. There was no test. There was no credential. There was the master's judgment, itself a product of decades of situated practice, applied to the specific question of whether this apprentice, in this workshop, at this moment, was ready.

That assessment was thick — contextual, situated, judgment-based. Modern assessment is thin — decontextualized, standardized, output-based. AI makes the thinness of modern assessment newly visible, because the outputs that the assessment measures are now producible without the understanding that the assessment was designed to infer.

Recontextualization, fully realized, would mean redesigning not just the learning environment but the assessment system — developing ways to evaluate situated understanding, contextual judgment, and thick knowledge that do not reduce to the measurement of decontextualized output. This is an institutional project of enormous scope, and it has barely begun.

The alternative is the continued operation of assessment systems that cannot distinguish between the practitioner who produced competent output through situated understanding and the practitioner who produced competent output through AI assistance without situated understanding. The two look identical on every metric the system uses. They are not identical. And the moment the situation demands the judgment that only situated understanding produces, the difference will be catastrophic.

Lave's framework does not provide a blueprint for recontextualization. What it provides is the diagnosis that makes recontextualization recognizable as a necessity rather than a luxury — the understanding that the context of learning is not an incidental feature of the learning process but a constitutive component of the knowledge that results. Change the context, and you change the knowledge. Eliminate the situated engagement, and you eliminate the thick understanding. Preserve the output while thinning the understanding, and you produce a system that looks competent until the moment it needs to be wise.

Building the structures that preserve situated learning in the age of AI is not a technical problem. It is an institutional one, a cultural one, an act of collective will. It requires organizations to invest in interactions whose value is invisible in the short term. It requires educational institutions to redesign assessment systems that have been in place for a century. It requires practitioners to protect the friction-rich engagement that produces thick understanding against the constant pressure to convert every hour into measurable output.

It is also, Lave's framework insists, the only path that preserves the kind of knowledge on which wise practice depends. The tools are extraordinarily powerful. The question is whether the contexts in which they are used will produce practitioners who can wield them wisely, or merely practitioners who can wield them productively.

The difference will define the generation.

---

Epilogue

The word my team started using was scaffolding, and at first I thought they meant it as a compliment.

We were in Trivandrum, the second week of the training sprint I describe in *The Orange Pill*, and the engineers had begun building with Claude Code at the pace that still astonishes me when I revisit the numbers. Twenty-fold productivity. Features that should have taken weeks arriving in days. The scaffolding, they said, was what Claude provided — the structural support that let them reach heights they could not have reached alone.

Then I read Lave. And the word changed.

In construction, scaffolding is temporary. You build it to reach the upper floors. You remove it when the building can stand on its own. The assumption is that the building will stand on its own — that the scaffolding's purpose is to enable the construction of something that will eventually be self-supporting.

Lave's question, the one I have not been able to set down since I encountered it, is: what if our scaffolding is permanent? What if the tool that lets the engineer reach the upper floors is never removed — and the building never learns to stand alone?

The engineer in Trivandrum who built a complete user-facing feature in two days, having never written frontend code before — I celebrated that in *The Orange Pill* as the collapse of the imagination-to-artifact ratio. I still celebrate it. The capability expansion is real. But Lave made me see something I had been looking past: that engineer did not develop the situated understanding of frontend systems that she would have built through months of peripheral participation in a frontend team. She produced the output. She did not undergo the trajectory.

I keep thinking about the tailors. Not as a metaphor — I am constitutionally suspicious of metaphors that make the argument too tidy — but as a case that maps onto my world with uncomfortable precision. The apprentice who pressed trousers for months before touching scissors was building something invisible: a feel for finished garments that would inform every subsequent operation. The engineer who ships features with Claude from day one skips that invisible accumulation. The output is immediate. What is not built is the foundation of contextual understanding on which judgment rests.

My senior engineers still have that foundation. They built it the old way, through years of debugging, of reading other people's code, of struggling with systems that resisted their intentions. They can evaluate Claude's output because they possess the situated understanding to know when the output is technically correct but contextually wrong. They are the masters in the workshop, the ones who know why the apprentice presses trousers first.

But their successors — the engineers entering the profession now, with Claude available from the first line of code — will not have built those same foundations. And when those successors become the senior engineers, who evaluates Claude's output then? Who supplies the context that the tool cannot provide?

This is Lave's recursive problem, and it frightens me more than any other implication of the AI revolution. The capacity to detect the loss is itself a product of the experience being lost. Each generation is less equipped to see what the next generation is missing, because the experience that would have made it visible was the experience they did not have.

I am not going to pretend I have solved this. I have not. But I am going to tell you what I have started doing, because action taken in honest uncertainty is still better than paralysis.

We are building what I have started calling situated zones into our development process. Specific categories of work — debugging, architecture review, system diagnosis — where Claude is deliberately excluded, not because it cannot do the work, but because the struggle of doing it deposits something the tool cannot. The zones are not permanent. They are stages, sequenced like the Liberian apprenticeship, designed to ensure that each engineer builds a minimum foundation of contextual understanding before the tools become her primary interface with the system.

It is slower. It is less efficient by every output metric. My board would prefer I convert every productivity gain into margin. I am spending some of that margin on invisible learning — on the social infrastructure of knowledge production that Lave convinced me is the only thing standing between my organization and a future of technically competent, contextually thin practitioners.

Will it work? I do not know. The honest answer is that nobody knows, because nobody has run this experiment at scale. We are all building in the river, and the current is faster than any of us anticipated.

But I keep returning to one of Lave's quietest observations, one that does not appear in any headline and is not the kind of thing that goes viral on social media: knowledge is inseparable from the context in which it is acquired. The struggle, the community, the specific situated encounters through which understanding forms — these are not obstacles on the way to knowledge. They are the medium through which knowledge is constituted. Remove the medium, and you do not get knowledge faster. You get something else — something thinner, something adequate for production, something insufficient for wisdom.

I am building the most powerful amplifier I have ever had access to. Lave taught me to ask what is being amplified — and what was never given the chance to form.

-- Edo Segal

---

Back Cover

The AI produces the answer.
But the understanding was never in the answer.

It was in the struggle to reach it -- and that struggle just disappeared.

Jean Lave spent four decades proving that knowledge is not information. It is inseparable from the context in which it forms -- the specific workshop, the specific community, the specific sequence of friction-rich encounters through which a novice becomes an expert. A large language model is the most powerful decontextualization engine ever built: it extracts patterns from billions of situated human experiences and delivers them stripped of every contextual element that made them wise.

This book follows Lave's framework into the heart of the AI revolution to ask the question the productivity metrics cannot answer: when the tool removes the struggle that deposits understanding, what happens to the understanding? From Liberian tailoring apprentices to Silicon Valley engineering teams, it traces the invisible infrastructure of situated learning and asks whether we will preserve it -- or optimize it away.

The answer will define whether AI produces a generation that is more capable or merely more productive.

-- Jean Lave
