By Edo Segal
The mistake I kept making was the same one every builder makes. I thought the hard part was building the thing. Getting the code right, shipping the feature, hitting the deadline. The hard part was always something else — something I couldn't name until I found Schon's vocabulary for it.
The hard part was figuring out what I was actually building.
Not the specification. Not the feature list. The problem underneath the problem. The thing the client couldn't articulate, the user need that no survey would surface, the architectural decision that felt wrong in my gut before I could explain why. That unnamed territory — the swamp where the real work lives — is what Donald Schon spent his career mapping.
Schon drew a line that cuts through everything happening in AI right now. On one side: problems you can define, then solve. On the other: situations so messy, so tangled, so resistant to clean framing that the first job is figuring out what the problem actually is. He called the first kind "the high ground." He called the second "the swampy lowlands." And he demonstrated, with decades of careful observation, that the work that matters most — the work professionals are actually paid for, whether they know it or not — happens in the swamp.
The high ground is exactly where AI excels. Give Claude a well-defined problem and it will solve it faster and more comprehensively than most humans. The swamp is where AI needs you most. Because the swamp requires judgment that comes from lived experience, from ten thousand previous encounters that have deposited layers of understanding too deep for language to reach.
I brought Schon into this book because his framework diagnoses something the technology discourse keeps missing. The conversation about AI fixates on capability — what the tool can do, how fast, how well. Schon redirects attention to the practitioner — what happens to the human on the other side of the conversation. Does she develop deeper judgment through the collaboration, or does the tool's speed and polish erode the reflective capacity that makes her judgment possible in the first place?
That question matters more than any benchmark. It matters for every developer using Claude Code, every lawyer drafting with AI, every teacher watching students submit work they didn't struggle through. The tool is extraordinary. The question is whether we are becoming the kind of practitioners who deserve an extraordinary tool.
Schon gave me the vocabulary to ask that question clearly. I hope his thinking does the same for you.
-- Edo Segal × Opus 4.6
Donald Schon (1930–1997) was an American philosopher, urban planner, and professor at the Massachusetts Institute of Technology whose work fundamentally reshaped how professions understand expertise, learning, and practice. Born in Boston, he studied philosophy at Yale and the Sorbonne before pursuing a career that spanned government service, consulting, and academia. His landmark book *The Reflective Practitioner: How Professionals Think in Action* (1983) challenged the dominant model of professional knowledge — what he called "technical rationality" — by demonstrating that competent practitioners do not simply apply theory to problems but engage in an ongoing, improvisational conversation with the situations they face. He introduced concepts including reflection-in-action, knowing-in-action, and the reflective practicum that have influenced fields from education and medicine to architecture and management. His collaboration with Chris Argyris produced the theory of single-loop and double-loop learning, a framework for understanding how individuals and organizations either reinforce or revise their governing assumptions. Schon's work remains foundational to professional education theory and has gained renewed urgency in the age of AI, where the distinction between articulable knowledge and tacit judgment has become the central question of professional value.
For three centuries, Western civilization has operated under an elegant delusion about how professionals know what they know. The delusion is so deeply embedded in the architecture of universities, the structure of licensing examinations, the hierarchy of professional firms, and the self-understanding of every doctor, lawyer, engineer, and manager who has ever felt the quiet pride of expertise, that it has become invisible. Like water to a fish, or gravity to a falling stone, the delusion is the medium through which professional life moves, and nearly no one pauses to notice it.
Donald Schon spent his career making it visible. He called the delusion technical rationality, and he demonstrated, with the methodical patience of a pathologist dissecting a specimen everyone else had assumed was healthy, that it described almost nothing about how competent professionals actually work.
Technical rationality holds that professional practice is the application of scientific theory to practical problems. The hierarchy is clean. At the top sits basic science — the production of general principles through rigorous inquiry. Below it sits applied science — the translation of general principles into diagnostic techniques and operational procedures. At the bottom sits practice — the application of those techniques to the problems of the real world. The flow is downward, always downward: from theory to technique to application. The university teaches the theory. The professional school teaches the technique. The practitioner applies both.
This hierarchy governs the structure of virtually every professional school in the Western world. Medical students learn anatomy and biochemistry before they touch a patient. Law students study constitutional theory before they draft a contract. Engineering students master differential equations before they design a bridge. The sequence feels natural, even inevitable. Of course you learn the science before you practice the art. What else would you do?
Schon's answer: almost anything else. Because the hierarchy, for all its institutional elegance, fails to account for the situations in which professional competence actually matters most.
The failure is not at the margins. It is structural. Technical rationality works beautifully for what Schon called the "high ground" of professional practice — well-defined problems with clear solutions, where the relevant science is settled and the technique is established. A bridge engineer calculating load tolerances operates on the high ground. A pharmacist checking drug interactions operates on the high ground. The science is known, the technique is reliable, and the application is straightforward.
But most professional practice does not take place on the high ground. It takes place in what Schon called the "swampy lowlands" — the messy, ambiguous, ill-defined situations where the problems do not arrive pre-sorted into the categories the theory provides. The architect facing a client who cannot articulate what they want but will recognize it when they see it. The therapist confronting a patient whose symptoms fit no diagnostic category. The manager navigating an organizational crisis that no case study anticipated. The teacher standing before a classroom where three students are bored, two are confused, one is crying, and the lesson plan has become irrelevant in the first five minutes.
In the swampy lowlands, technical rationality collapses. Not because the theory is wrong, but because the theory addresses a different problem than the one the practitioner faces. The theory tells you what to do when you know what problem you are solving. Practice demands that you first figure out what problem you are solving — and that figuring out, which Schon called "problem setting" as distinct from "problem solving," is the part that no theory covers. The science does not tell you which science to apply. The technique does not tell you which technique is appropriate. The hierarchy provides the tools but not the judgment about which tools to reach for. And it is the judgment that separates the competent practitioner from the merely credentialed one.
This distinction — between problem solving and problem setting — is the hinge on which Schon's entire framework turns. It is also the hinge on which the AI revolution turns, though it took forty years and the arrival of large language models for the connection to become visible.
Herbert Simon, one of the founding fathers of artificial intelligence and Schon's most formidable intellectual adversary, had built the theoretical architecture of AI on precisely the epistemology Schon was dismantling. Simon's *The Sciences of the Artificial*, first published in 1969, framed professional practice — indeed, all intelligent behavior — as search through a problem space. The intelligent agent identifies the goal state, surveys the available operators, and selects the sequence of moves that transforms the initial state into the goal state. Intelligence is optimization. Expertise is efficient search. The professional is a problem solver, and problem solving is the application of general methods to well-specified objectives.
This is the model that powered classical AI. It is also the model that powered professional education. And the symmetry is not coincidental. Both Simon's AI and the professional school assume the same epistemology: that intelligence consists of applying formal knowledge to pre-defined problems. The difference between a chess-playing computer and a diagnostic physician, in Simon's framework, is one of complexity, not of kind. Both search a problem space. Both apply operators. Both converge on solutions.
Schon's critique cut deeper than a disagreement about methods. He argued that Simon's framework, and the professional schools built on it, systematically misidentified what professionals actually do when they are at their best. The best practitioners do not search a problem space. They construct one. They do not apply pre-existing categories to the situation. They create new categories through their engagement with the situation's specific, unrepeatable, stubbornly particular demands. The jazz musician does not apply music theory to the chord changes. She listens to what the ensemble is doing, feels the harmonic direction, senses the rhythmic tension, and produces a phrase that is simultaneously responsive to the situation and constitutive of it — a phrase that changes the situation even as it responds to it.
This is not optimization. This is not search. This is something for which Simon's framework has no vocabulary, and it is the thing that matters most.
The relevance to the present moment is not analogical. It is structural. The entire prior era of software development — the era that preceded the language interface documented in *The Orange Pill* — was organized according to the principles of technical rationality with a fidelity that Schon would have recognized instantly.
Consider the workflow. A developer receives a specification — a pre-defined problem. She researches the available libraries — the body of applied science. She studies the documentation — the techniques for applying the science to the problem. She writes the implementation — the application of technique to situation. She tests the result against the specification — the verification that the application has produced the correct solution to the pre-defined problem.
The sequence is linear. The flow is downward, from theory through technique to application. The problem is given. The knowledge is formal. The practitioner applies. This is technical rationality made flesh in a text editor.
And like technical rationality in every other domain, it works beautifully on the high ground. When the specification is clear, the library is well-documented, and the implementation is straightforward, the linear workflow produces correct software with reliable efficiency. The difficulty arises in the swampy lowlands — when the specification is ambiguous, when the library's behavior is underdocumented, when the edge cases multiply, when the real requirement turns out to be something the specification did not anticipate. In those situations, the linear workflow breaks down, and the developer finds herself doing something that looks nothing like the application of pre-existing knowledge: she is probing, experimenting, interpreting unexpected behavior, reframing the problem based on what the code reveals, constructing the problem and the solution simultaneously through iterative engagement with a situation that resists the categories she brought to it.
She is, in short, doing what Schon described. She is reflecting in action. But the tools she uses — the text editor, the compiler, the debugger, the linear workflow — are all designed for the high ground. They assume the problem is given. They assume the knowledge is formal. They assume the flow is downward. And so the reflective practice that actually produces her best work happens despite the tools, not because of them.
The language interface changed this equation. When a developer describes a problem in natural language, receives an implementation, evaluates the result, adjusts the description, and receives a revised implementation — in a continuous, rapid, iterative cycle — the workflow is no longer linear. The flow is no longer downward. The problem is no longer given. The developer and the tool are engaged in something that looks, structurally, like a conversation. And the conversation produces understanding and solution simultaneously, in exactly the way Schon argued that the best professional practice always does.
The shift is not merely from slow to fast, though the speed change is dramatic. The shift is epistemological. The developer's relationship to knowledge itself has changed. In the old workflow, knowledge was something you acquired before you practiced — you studied the documentation, learned the API, understood the library — and then applied. In the new workflow, knowledge is something you produce through practice — you describe, the tool responds, you learn what you meant by seeing what the tool produced, and the learning and the doing are fused.
This is the transition from technical rationality to reflective practice, and it is happening at scale across every profession the language interface touches. Not just software development. The lawyer who describes a legal problem to Claude and receives a draft brief is no longer applying doctrine to facts in the linear manner of the law school case method. She is engaged in a reflective conversation with a partner whose output reveals dimensions of the problem she had not considered. The physician who describes a complex of symptoms and receives a differential diagnosis is not looking up the answer in a textbook. She is conducting an iterative inquiry in which each exchange refines both her understanding of the patient and her understanding of the possibilities.
The irony is extraordinary. The technology built on Simon's epistemology — AI as search, AI as optimization, AI as the application of formal knowledge to pre-defined problems — has produced a tool that validates Schon's epistemology. The large language model does not work the way Simon predicted intelligent systems would work. It does not search a problem space defined by logical operators. It generates responses through a process that is, at a mathematical level, something closer to pattern recognition across a vast, implicit body of knowledge — a process that resists the clean hierarchy of basic science, applied science, and practice. And the tool it has become, in the hands of practitioners, converts work that used to follow the linear logic of technical rationality into work that follows the iterative logic of reflective practice.
Simon and Schon debated for decades about the nature of professional knowledge. The machines built on Simon's model have, in their practical effect, proved Schon right.
But the victory, if it is one, comes with a complication that Schon did not anticipate. Schon's reflective practitioner is defined by her capacity to evaluate — to listen to the situation's back-talk, to judge whether the result is good, to determine whether the frame is appropriate. The evaluation is hers. The judgment is hers. The capacity for judgment is what separates the reflective practitioner from the technician.
When the tool generates the output, the practitioner evaluates it. But the output arrives polished, plausible, and fast. The temptation is to accept the plausible as the good — to mistake the smooth surface of the machine's response for the depth of genuine understanding. And if the practitioner succumbs to that temptation, the reflective practice collapses back into something that looks like technical rationality from the inside — a linear sequence of prompt, output, acceptance — while wearing the costume of reflective engagement.
The lie every professional school tells is that expertise consists of knowing what to do. Schon spent his career demonstrating that expertise consists of knowing what question to ask. The language interface makes the distinction urgent in a way it has never been before, because the tool will answer any question you pose — confidently, fluently, and fast. The question of whether you have posed the right question, whether the frame within which you are operating deserves the confidence you are placing in it, whether the swampy lowland you are navigating requires a different map than the one the tool has offered — that question remains entirely, irreducibly, stubbornly human.
And no professional school has figured out how to teach it.
---
A surgeon opens the abdomen and finds something unexpected. Not what the imaging suggested. Not what the pre-operative plan anticipated. The anatomy is aberrant — the artery that should run here runs there, the tissue that should be healthy is fibrotic, the mass that appeared contained on the scan is entangled with structures that were not supposed to be involved.
She does not stop the operation to consult a textbook. She does not step out of the operating room to review the literature. She adjusts. In real time. With the patient open on the table and the clock running. She adjusts her grip, her angle, her plan. She re-routes the dissection. She modifies the technique based on what her hands are telling her about what her eyes are showing her. She is thinking about what she is doing while she is doing it — not the retrospective thinking of the post-operative debrief, not the prospective thinking of the pre-operative plan, but thinking fused with action, cognition inseparable from performance, understanding produced by and through the doing.
Donald Schon gave this capacity a name: reflection-in-action. The name has become famous enough to be trivialized — reduced, in countless management training slides, to "learning by doing" or "thinking on your feet." These reductions miss everything that makes the concept important. Reflection-in-action is not a vague encouragement to be flexible. It is a precise description of a specific cognitive operation that the best practitioners perform in every domain, and that the standard model of professional knowledge cannot explain.
The standard model — technical rationality — assumes that thinking and doing are sequential. First you think, then you do. First you diagnose, then you treat. First you plan, then you execute. The sequence implies a clear division of cognitive labor: the thinking produces the plan, and the doing executes it. If the plan is good, the execution is straightforward. If the execution fails, the fault lies in the plan, and the remedy is better thinking before the next attempt at doing.
Schon observed that competent practitioners violate this sequence constantly. Not because they are sloppy or undisciplined, but because the situations they face are too complex, too particular, too resistant to pre-formulation for any plan to survive contact with the reality it was designed to address. The plan is a hypothesis. The execution is the experiment. And the competent practitioner treats it as such — observing the results of her actions, noting where reality diverges from expectation, adjusting the hypothesis in real time, and testing the adjusted hypothesis through modified action.
This is not trial and error. Trial and error is random. Reflection-in-action is structured. The practitioner brings to the situation an entire repertoire of past experience — patterns she has seen before, moves that have worked before, frames that have been productive before — and uses that repertoire to interpret the surprise. The aberrant anatomy does not produce panic. It produces recognition: this is like that case from three years ago, or this pattern suggests the mass has been growing longer than we thought, or the fibrosis means I need to change my approach to the dissection. The recognition is instantaneous, often pre-verbal, and deeply informed by years of accumulated practice. It is thinking that has been compressed by experience into something that operates at the speed of perception.
The jazz musician provides Schon's most vivid illustration. A jazz solo is not the execution of a pre-formed plan. It is not random, either. The musician enters the solo with a repertoire — a vocabulary of phrases, harmonic patterns, rhythmic ideas — and uses that repertoire to conduct a real-time conversation with the ensemble. She plays a phrase. The rhythm section responds. The response suggests a direction she had not planned. She follows it. The harmony shifts. She adjusts. The bass player introduces a pattern that changes the harmonic color, and she hears it, interprets it, and incorporates it into the next phrase, all within the space of a few beats. The thinking and the doing are not sequential. They are simultaneous. The music is produced by the conversation, not by the plan.
This is the structure that Schon identified in every domain of expert practice. The architect does not design in the linear sequence that the professional school teaches — program, then schematic, then development, then construction documents. The architect sketches. The sketch reveals something — a relationship between spaces, a tension between form and function, an implication the architect had not considered. The architect reflects on what the sketch reveals. The reflection produces a new understanding of the problem. The new understanding generates a new sketch. The cycle continues until the design emerges, not from the application of pre-existing knowledge to a pre-defined problem, but from the iterative conversation between the architect and the drawing.
The drawing talks back. This phrase, central to Schon's framework, sounds almost mystical but is precisely literal. The architect puts marks on paper, and the marks show the architect something she did not know she was thinking. The marks have consequences that the architect did not intend, and those consequences — the back-talk of the situation — become the stimulus for the next round of reflection.
The concept is demanding because it refuses the comfortable separation between the knower and the known, between the subject and the object, between the practitioner and the situation. In reflection-in-action, the practitioner is simultaneously shaping the situation and being shaped by it. The conversation is genuine. Both parties — the practitioner and the materials of the situation — contribute something that the other did not provide. And the product of the conversation — the design, the diagnosis, the solo, the lesson — belongs to neither party alone.
Now consider what happens when the materials of the situation include an AI system that responds in natural language.
The prior era of software development offered limited back-talk. The compiler said yes or no — the code compiled or it did not. The test suite said pass or fail. The debugger pointed to a line number. These responses were precise but thin. They told the developer whether the code met the machine's requirements but revealed almost nothing about whether the code met the human's intentions. The conversation was barely a conversation at all. It was more like a series of yes-or-no questions addressed to an interlocutor capable of only yes-or-no answers.
The language interface transformed the back-talk. When a practitioner describes a problem to Claude and receives an implementation, the implementation is not a yes-or-no response. It is a substantive interpretation of the practitioner's intent — an attempt to understand not just what was said but what was meant, to infer the shape of the problem from the practitioner's description of its surface, and to produce a response that reflects that inference. The response may be wrong. It may misinterpret the intent, overlook a constraint, or solve a different problem than the one the practitioner had in mind. But even the wrong response is informative. It reveals what the practitioner's description communicated — and, by the gap between what was communicated and what was intended, it reveals what the practitioner has not yet articulated, perhaps not yet understood, about her own intention.
This is back-talk of an entirely new order. The sketch talks back by showing the architect spatial relationships she did not intend. Claude talks back by showing the practitioner conceptual relationships she did not articulate. The practitioner says, "Build me a system that handles user authentication," and Claude produces an implementation that includes session management, token refresh, and role-based access control. The practitioner looks at the implementation and thinks: I did not ask for role-based access control, but now that I see it, I realize I need it. The tool's interpretation has surfaced a requirement that the practitioner's own problem-setting had not yet reached.
The conversation that Schon described between the architect and the sketch is now happening between the builder and the machine, at a speed and with a conceptual richness that no previous tool has approached.
The implications are profound, and they cut in two directions simultaneously.
In the generous direction, the language interface enables reflection-in-action for practitioners who were previously locked into technical rationality by the limitations of their tools. The developer who spent eight years on backend systems and never wrote frontend code — described in *The Orange Pill* as someone who built a complete user-facing feature in two days — did not suddenly acquire frontend expertise. She acquired a reflective partner whose back-talk was rich enough to sustain a conversation in a domain where she had no pre-existing repertoire. The conversation with Claude replaced the years of accumulated experience she lacked, not by simulating expertise but by providing the iterative feedback loop through which expertise is normally built — compressed from years into hours.
She was conducting a reflective conversation with the situation, exactly as Schon described. The difference was that the situation was talking back through a medium far more articulate than any sketchpad, any compiler, any test suite in the history of software development.
In the cautionary direction, the speed and fluency of the back-talk creates a specific danger that Schon's framework identifies with precision. Reflection-in-action depends on the practitioner's capacity to evaluate the back-talk — to listen critically, to judge whether the situation's response confirms or challenges the current frame, to distinguish between back-talk that reveals a genuine dimension of the problem and back-talk that merely echoes the practitioner's assumptions in a more polished form.
When the back-talk comes from a physical medium — the sketch, the clay, the patient's body, the code's behavior under test — the practitioner's evaluation is grounded in the medium's own logic. The clay does not flatter. The patient's vital signs do not perform. The code either works or it does not, and the reasons it does not work are the medium's honest response to the practitioner's intervention.
When the back-talk comes from a language model, the evaluation is harder. The response is fluent regardless of its accuracy. The structure is clean regardless of its appropriateness. The confidence is high regardless of its justification. The practitioner must evaluate not just whether the response is correct but whether the response's polish is concealing a fundamental misunderstanding — whether the smooth surface of the prose is hiding a seam where the reasoning breaks.
Schon's framework predicts exactly this danger. The reflective practitioner's competence depends on the quality of her evaluation, and the quality of her evaluation depends on her repertoire — her accumulated experience of what good looks like, feels like, sounds like in this domain. When the back-talk is richer than the practitioner's repertoire can evaluate, the conversation does not stop. It continues, but it continues without the corrective function that makes reflection-in-action productive rather than merely iterative.
Iteration without evaluation is not reflection. It is the appearance of reflection — the external form without the internal substance. The practitioner prompts, receives, prompts again, receives again, and the cycle looks like the reflective conversation Schon described. But the evaluative function — the judgment that distinguishes productive surprise from misleading pattern-match — has been bypassed, and what remains is a sequence of exchanges that refine the output without deepening the understanding.
The author of *The Orange Pill* catches himself in precisely this trap. The passage about the Deleuze reference that sounded like insight but broke under examination is a case study in what happens when the back-talk's fluency outstrips the practitioner's evaluative capacity. The connection between Deleuze and Csikszentmihalyi sounded right. It felt right. It was wrong. And it was wrong in a way that only the practitioner's overnight unease — a form of tacit evaluation, the body's slow processing of something the conscious mind had accepted too quickly — was able to detect.
That overnight unease is reflection-in-action operating at its deepest level: the practitioner's embodied knowledge detecting a dissonance that the explicit evaluation missed. The question the AI age poses to Schon's framework is whether that embodied detection can be cultivated fast enough, and reliably enough, to keep pace with a tool whose output demands it constantly.
The jazz musician develops her evaluative capacity through thousands of hours of playing, listening, and reflecting. The surgeon develops hers through thousands of operations. The architect develops hers through thousands of sketches. In each case, the repertoire is built slowly, through the specific friction of engagement with a medium that resists — that says no, that surprises, that teaches by failing to cooperate.
When the medium cooperates too smoothly, what happens to the repertoire? When the back-talk is always fluent, always structured, always polished — regardless of its accuracy — does the practitioner develop the evaluative muscle that reflection-in-action requires?
The question does not have a clean answer. The answer depends on the practitioner's willingness to treat the machine's polish with the same critical attention she would bring to a human colleague's most confident assertion — to ask not just whether the output works but whether it works for the right reasons, not just whether the response is plausible but whether the plausibility is earned.
Reflection-in-action is the most sophisticated form of professional knowledge Schon identified. The language interface makes it more possible — and more necessary — than it has ever been.
---
In 1981, a group of MIT architecture students sat in a design studio under the supervision of a master teacher, and Donald Schon watched. He did not watch casually. He watched with the obsessive attention of a researcher who suspected that the most important thing happening in the room was invisible to everyone in it — including the teacher.
The teacher, whom Schon called Quist in his published account, was working with a student named Petra on the design of an elementary school. Petra was stuck. The site sloped. The building program was complex. She had tried several configurations and none of them worked. The classrooms did not relate to the outdoor spaces. The circulation was awkward. The geometry resisted her intentions.
Quist did not lecture. He did not deliver a theory of site planning or cite principles of educational architecture. He picked up a pencil and began to draw over Petra's sketch, talking as he drew. "The L-shapes are chunky and they don't L well," he said, and as he drew he proposed a reframing — he suggested organizing the building around a different geometry, one that used the site's contours rather than fighting them. The drawing changed as he talked. The talking changed as he drew. The site's slope, which had been Petra's obstacle, became, through Quist's reframing, the organizing principle of the design.
Schon analyzed this exchange with extraordinary care because it exemplified, in a few minutes of studio interaction, the structure of the conversation with the situation that he argued was the foundation of all expert practice.
The structure has three moves, and they repeat in a cycle.
First: the practitioner makes a move — a design decision, a diagnostic hypothesis, an experimental intervention. The move is not random. It is informed by the practitioner's repertoire, her accumulated sense of what works in situations like this one. But it is also provisional. The practitioner treats the move not as a commitment but as a probe — an experiment designed to reveal the situation's response.
Second: the situation talks back. The sketch reveals something. The patient's body responds. The code behaves. The classroom shifts. The back-talk is the situation's response to the practitioner's move, and it carries information that the move was designed to elicit but that the practitioner could not have predicted in its specific form. Quist draws the new geometry, and the drawing shows him that the L-shapes now create a courtyard he had not planned — a courtyard that solves a problem Petra had not named.
Third: the practitioner listens to the back-talk, evaluates it, and adjusts. The evaluation is not mechanical. It involves judgment — the practitioner decides which aspects of the back-talk are promising, which are problematic, and which require a reframing of the problem itself. The adjustment produces a new move, and the cycle begins again.
Move. Back-talk. Evaluation. Adjust. Move again.
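The book's own examples are conversational, but the cycle itself is mechanical enough to sketch as a loop. The sketch below is mine, not Schon's or the author's: `propose`, `back_talk`, and `evaluate` are hypothetical stand-ins for the practitioner's move, the situation's response, and the practitioner's judgment, wired to a toy numeric "situation" so the loop has something to converge on.

```python
def reflective_cycle(frame, propose, back_talk, evaluate, max_cycles=20):
    """Move -> back-talk -> evaluation -> adjustment, repeated until the
    evaluation accepts the result or the cycle budget runs out."""
    result = None
    for _ in range(max_cycles):
        move = propose(frame)                       # the practitioner's probe
        result = back_talk(move)                    # the situation's response
        accepted, frame = evaluate(result, frame)   # judgment, possibly a new frame
        if accepted:
            break
    return result, frame

# Toy situation: find a number whose square is 25. The "frame" is the
# current guess; each evaluation either accepts the back-talk or adjusts
# the frame (here, with a Newton step).
def evaluate(result, frame, target=25.0, tol=0.01):
    if abs(result - target) < tol:
        return True, frame
    return False, (frame + target / frame) / 2

result, frame = reflective_cycle(
    frame=1.0,
    propose=lambda f: f,        # the move: try the current frame directly
    back_talk=lambda m: m * m,  # the situation: squares whatever is proposed
    evaluate=evaluate,
)
# result settles within tolerance of 25; frame settles near 5.
```

The point of the sketch is structural: the loop refines within a frame, and nothing in it questions whether the target itself was the right target. That reframing remains the practitioner's work.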
This cycle — which Schon documented in architecture studios, psychotherapy sessions, engineering firms, urban planning offices, and musical rehearsals — is the structure of the reflective conversation with the situation. It is the mechanism through which competent practitioners produce knowledge and solution simultaneously. And it is the structure that the language interface has replicated, at unprecedented speed and scale, in every domain it touches.
The parallels between Quist working with Petra's sketch and a practitioner working with Claude are not superficial. They are structural, operating at the level of epistemology rather than analogy.
Consider the builder described in The Orange Pill who was constructing a face-detection component for Napster Station. The builder knew what the system needed to do — detect the user's face and determine when they are speaking. This is the equivalent of Petra's program: the functional requirements are clear, but the design solution is not. The builder described the problem in natural language. Claude produced an implementation. The implementation was not exactly right — it was close, in the way that Quist's first sketch was close, capturing the direction without resolving the details. The builder evaluated the output, identified where it diverged from intention, adjusted the description, and Claude produced a revised implementation. Fifteen minutes of conversation got it the rest of the way.
The structure is identical to Schon's design studio. The builder makes a move (describes the problem). The situation talks back (Claude produces an implementation). The builder evaluates the back-talk (identifies where the output diverges from intent). The builder adjusts (modifies the description). The cycle repeats until the design emerges.
But there is a critical difference, and the difference is what makes the present moment unprecedented in the history of professional practice.
When Quist drew over Petra's sketch, the sketch's back-talk was limited by the medium. A pencil on paper can show spatial relationships, formal gestures, the rudiments of circulation and light. It cannot calculate structural loads, simulate thermal performance, or tell you whether the building code permits the geometry you have proposed. The sketch is eloquent within its domain and silent outside it. The practitioner must supply everything the medium cannot — the structural intuition, the knowledge of codes, the understanding of materials.
When a practitioner describes a problem to Claude, the back-talk is not limited to a single medium's expressive range. Claude draws on a computational repertoire that spans domains — programming languages, design patterns, architectural styles, domain-specific knowledge, edge cases from a million codebases. The back-talk is not just "here is what your description produces." It is "here is what your description produces, and here are the implications you did not state, the edge cases you did not consider, the patterns from other domains that are structurally similar to your problem."
The laparoscopic surgery example from The Orange Pill is the paradigmatic case. The author was stuck — trying to articulate the relationship between friction-removal and depth. He described the impasse. Claude responded not with a solution within the author's frame but with an example from an entirely different domain that reframed the problem. Laparoscopic surgery removed one kind of friction (hands in the body) and introduced a harder kind (interpreting a two-dimensional image of a three-dimensional space). The friction did not disappear. It ascended.
In Schon's terms, Claude provided back-talk that triggered a reframing. The author had been operating within the frame "friction removal equals depth loss." Claude's response suggested an alternative frame: "friction removal at one level equals friction elevation to a higher level." The reframing did not come from the author's repertoire. It came from the machine's computational repertoire — its capacity to traverse domains and find structural analogies that no single human practitioner, limited by the boundaries of her own experience, could have produced.
This is back-talk of a kind that Schon's framework describes but that no previous tool has delivered. The sketch talks back within its medium. The compiler talks back within its logic. Claude talks back across domains, at a level of conceptual sophistication that rivals — and in some cases exceeds — the back-talk a practitioner would receive from a human collaborator.
The conversation with the situation has gained a participant who, for the first time in the history of tool use, can redirect the inquiry at the level of framing rather than merely at the level of execution.
This matters because reframing is, in Schon's framework, the highest-order reflective skill. Lower-order reflection adjusts the move within the existing frame. Higher-order reflection questions the frame itself — asks whether the problem has been correctly set, whether the categories being applied are the right categories, whether the assumptions underlying the approach are warranted. Reframing is the operation that distinguishes the master practitioner from the competent technician. The technician refines the solution. The master questions the problem.
When the tool's back-talk includes reframing suggestions — cross-domain analogies, alternative framings, connections the practitioner did not see — the tool is operating at the level that Schon reserved for the most sophisticated practitioners. The conversation gains a depth it has never had. And the practitioner gains access to a range of possible reframings that her own repertoire, no matter how rich, could not have generated.
But the gain carries a corresponding risk, and the risk is precisely calibrated to the gain's magnitude.
When a human collaborator suggests a reframing, the practitioner can evaluate the suggestion against the collaborator's track record, her understanding of the collaborator's reasoning, her sense of whether the collaborator understood the problem deeply enough to suggest an alternative framing. The evaluation is interpersonal and contextual: the practitioner knows the collaborator, and that knowledge informs the weight she gives to the suggestion.
When Claude suggests a reframing, the evaluation lacks this interpersonal ground. The suggestion arrives from a system whose reasoning is opaque, whose confidence is uncalibrated to its accuracy, and whose capacity to assess whether its own suggestion is appropriate is fundamentally limited. The practitioner must evaluate the reframing on its merits alone — without the contextual cues that human collaboration provides.
This is harder than it sounds. Reframings are, by nature, surprising. They show you something you did not see. The surprise is the point. But surprise is also disorienting, and the natural human response to a reframing that feels right is to accept it — to experience the click of recognition and move forward. The discipline of evaluating a reframing that feels right, of asking whether the click of recognition is genuine insight or pattern-matched plausibility, is the discipline that Schon's framework demands and that the AI-augmented conversation makes both more necessary and more difficult.
The author of The Orange Pill describes this discipline explicitly. The passage he almost kept because it sounded better than it thought. The overnight unease about the Deleuze reference. The two hours at a coffee shop with a notebook, writing by hand until the version of the argument that was genuinely his emerged. These are acts of evaluation — the practitioner insisting on testing the back-talk against her own understanding, refusing to accept the machine's reframing simply because it arrived in polished prose.
Schon's Quist could evaluate his own reframings because the sketch was transparent to him — he understood the medium completely, and the medium's back-talk could not exceed his capacity to interpret it. The language interface's back-talk routinely exceeds the practitioner's capacity to evaluate it. The medium is more articulate than the practitioner's evaluative repertoire can verify. And in that gap — between the richness of the back-talk and the practitioner's capacity to judge it — lies the specific danger of the AI-augmented conversation with the situation.
The conversation is richer than ever. The back-talk is more generative than ever. The reframings are more frequent and more cross-disciplinary than ever. And the burden on the practitioner's evaluative judgment — the part of the conversation that no tool can perform — is heavier than it has ever been.
The conversation with the situation has become a conversation with a situation that is smarter, faster, and more fluent than any situation the practitioner has previously encountered. Whether the practitioner is equal to that conversation depends not on the tool but on the practitioner — on her repertoire, her judgment, her willingness to pause the iteration long enough to ask whether the frame deserves the polish it is receiving.
Schon would recognize the conversation. He would recognize the back-talk. He would recognize the structure of move, response, evaluation, adjustment. And he would recognize, with the precision of a diagnostician who has seen this pathology before, the specific danger of a conversation in which one participant is too articulate for the other to evaluate.
---
The most productive collaborations in the history of professional practice have not been between people who agreed. They have been between people who disagreed in productive ways — whose different repertoires, different assumptions, and different ways of seeing the same situation created the friction from which genuine insight emerges.
Francis Crick and James Watson did not discover the structure of DNA by applying the same frame to the same data. Crick was a physicist turned biologist; Watson was a geneticist with a taste for model-building. Each brought a repertoire the other lacked. Each saw patterns the other missed. The structure of DNA emerged from the collision between their different ways of seeing — not from agreement, but from the productive tension between two repertoires that were organized by different principles and indexed by different experiences.
Schon understood this dynamic. His concept of the reflective practitioner was never limited to the individual practitioner working alone. The best reflection-in-action happens in the presence of others — the studio master who offers an alternative framing, the colleague who asks the question the practitioner did not think to ask, the patient whose unexpected response forces a revision of the diagnosis. The reflective partner is the person whose contributions create the conditions for the practitioner's own reflection to deepen.
The best reflective partners share three characteristics. First, they bring a different repertoire — a different body of experience, organized by different principles, that produces different perceptions of the same situation. Second, they make their contributions in a form the practitioner can engage with — not as directives or corrections but as alternative readings, as "What if you thought of it this way?" offerings that the practitioner can evaluate and incorporate or reject. Third, they respect the practitioner's evaluative authority — they offer perspectives without insisting on them, recognizing that the practitioner, who is closest to the situation, is the one best positioned to judge which perspectives serve the work.
Claude satisfies the first two of these characteristics with a completeness that no previous tool has approached. It fails the third in a way that creates a specific and underappreciated danger.
Consider the repertoire. A human reflective partner brings the repertoire of one life — one set of experiences, one training, one cultural context, one professional domain. A surgeon who has performed a thousand cholecystectomies has a repertoire organized by a thousand encounters with gallbladder disease. A lawyer who has tried fifty cases has a repertoire organized by fifty journeys through the litigation process. The repertoire is deep within its domain and largely silent outside it.
Claude's computational repertoire is organized by different principles entirely. It has no lived experience. It has no embodied memory of successes and failures weighted by their personal consequences. What it has is a statistical representation of patterns across an immense body of human knowledge — patterns of language, of reasoning, of problem-solving, of conceptual connection, drawn from virtually every domain of human inquiry. The repertoire is shallow in the specific sense that it lacks the felt significance of lived experience, but it is incomparably wide. It can find connections between domains that no single human practitioner could traverse — the laparoscopic surgery example, the punctuated equilibrium concept from evolutionary biology, the cross-pollination between philosophical frameworks and engineering problems that a practitioner with expertise in only one field could never produce.
The breadth of the computational repertoire is the source of Claude's value as a reflective partner. When the practitioner is stuck — when the current frame has produced a dead end and the practitioner's own repertoire offers no alternative — Claude can offer reframings drawn from domains the practitioner has never entered. This is the function that The Orange Pill describes repeatedly: the author reaching an impasse, describing the impasse to Claude, and receiving a connection or an analogy that breaks the impasse by changing the frame.
The second characteristic — making contributions in a form the practitioner can engage with — is equally well-served. Claude's output is in natural language. The practitioner does not need to learn a new formalism, decode a specialized notation, or translate from a domain-specific vocabulary. The contribution arrives in the same medium the practitioner uses to think, and the medium's accessibility means the practitioner can immediately evaluate, incorporate, or reject the suggestion. The barrier between the partner's contribution and the practitioner's reflection is as low as it has ever been in the history of collaborative practice.
But the third characteristic — respecting the practitioner's evaluative authority — is where the collaboration breaks down in subtle and consequential ways.
A skilled human reflective partner modulates her contributions based on the practitioner's response. She watches for the signs that a suggestion has landed — the pause, the shift in posture, the "Oh" of recognition — and the signs that it has not — the polite dismissal, the return to the previous line of thinking, the visible discomfort. She adjusts. She offers the suggestion more tentatively, or withdraws it, or reframes it. The conversation is genuinely mutual: both parties are reading each other, and the reading shapes what each says next.
Claude does not read the practitioner. Not in this way. Claude processes the practitioner's textual input and generates a response. If the practitioner rejects a suggestion, Claude will accommodate the rejection — it will offer an alternative, or modify its approach, or agree that the original suggestion was off-base. But the accommodation is responsive to the text, not to the practitioner. Claude does not see the furrowed brow, the hesitation, the slight shift in tone that signals genuine doubt as opposed to rhetorical pushback. It cannot distinguish between a practitioner who rejects a suggestion because she has evaluated it carefully and found it wanting, and a practitioner who rejects a suggestion because it threatens a frame she is attached to and unwilling to examine.
This matters because the most important function of a reflective partner is not to provide answers but to create the conditions under which the practitioner examines her own assumptions. The best human reflective partners do this by noticing what the practitioner does not say, by sensing where the resistance lies, by pushing gently on the places where the practitioner's frame is weakest. This is a fundamentally interpersonal operation, requiring the partner to model the practitioner's internal state and calibrate interventions accordingly.
Claude cannot perform this operation. The most it can do is something structurally different: it can offer a range of perspectives and let the practitioner select. This is valuable. It is not the same as the targeted, contextually calibrated challenge that a human reflective partner provides. The difference is the shotgun versus the scalpel: Claude offers breadth of perspective; the human partner offers precision of challenge. Both are useful. Neither substitutes for the other.
There is a deeper asymmetry, though, and it is the one that Schon's framework illuminates most sharply.
In a human reflective partnership, both parties reflect. Watson says something, and Crick reflects on it. Crick's reflection changes his understanding. His changed understanding shapes his next contribution. Watson reflects on that contribution, and his understanding changes in turn. The conversation is a double helix of mutual reflection — each party's understanding spiraling upward through engagement with the other's evolving understanding. Both parties emerge from the conversation changed. Both know something they did not know before. Both have repertoires that are richer for the exchange.
Claude does not reflect. It processes. It takes the practitioner's input, generates a response based on patterns in its training data, and delivers the response. If the practitioner challenges the response, Claude generates a new response — but the new response is not the product of reflection on the challenge. It is the product of processing the challenge as a new input. The distinction is subtle but fundamental: reflection involves the reorganization of understanding in response to experience. Processing involves the generation of output in response to input. The former changes the processor. The latter does not.
The conversation with Claude is asymmetrically reflective: the practitioner reflects and changes; the machine processes and responds. The practitioner grows through the exchange. The machine does not. The conversation produces learning in one direction only.
This one-directional learning is not merely an absence. It has active consequences for the character of the collaboration. In a mutually reflective partnership, the partner's growth creates new possibilities for the practitioner. As Crick's understanding deepened through engagement with Watson, his contributions became more precisely calibrated to the problems Watson was grappling with — not because he was trying to please Watson, but because his own enhanced understanding of the problem space produced contributions that were more relevant, more challenging, more generative. The mutual reflection created a positive feedback loop in which each party's growth stimulated the other's.
In the asymmetric collaboration with Claude, this feedback loop does not develop. Claude's contributions at the end of a long session are not more precisely calibrated to the practitioner's evolving understanding than its contributions at the beginning — not in the way a human partner's would be. The practitioner may feel that Claude is "getting" her better as the conversation progresses, and this feeling may be partly warranted: the accumulated context of the conversation gives Claude more information to work with. But the "getting" is not understanding. It is pattern-matching against an expanding context window. The distinction matters because understanding involves the kind of judgment — the sense of what matters, what resonates, what challenges productively versus what challenges futilely — that pattern-matching alone cannot produce.
The practical implication is that the practitioner must perform a double function in the AI-augmented reflective conversation. She must do her own reflecting — evaluating the back-talk, testing the reframings, judging the relevance — and she must also compensate for the partner's failure to reflect. She must ask not only "Is this suggestion good?" but "Would a reflecting partner have offered this suggestion at this point in our conversation?" The second question is the one that catches the moments when Claude's output, while plausible, is not responsive to the specific trajectory of the inquiry — when it offers a statistically common connection rather than the situationally appropriate one.
This double function is demanding. It is more demanding than the reflective function in a human partnership, because in a human partnership the evaluative labor is shared. Both parties are watching for the moments when the conversation goes off-track. Both parties are responsible for the quality of the exchange. In the AI partnership, the full weight of evaluation falls on the practitioner.
And yet. The partnership works. It works because the computational repertoire's breadth compensates, in many situations, for the asymmetry's cost. The practitioner who would have been stuck for days — unable to find the reframing, unable to escape the dead-end frame, unable to traverse the domain boundary that separates her current understanding from the insight that would break the impasse — finds the reframing in minutes. The cost of the asymmetry is real: the practitioner must evaluate more carefully, compensate for the missing mutuality, maintain a level of critical vigilance that a human partnership would distribute between both parties. But the benefit of the breadth is also real: perspectives that no single human partner could have offered, connections that no single human repertoire could have produced, reframings that draw on the entire history of human thought rather than the subset available to any individual mind.
The collaboration is unequal. It is also, for many purposes and in many situations, the most productive reflective partnership available: not because the asymmetry is harmless, but because the breadth it provides access to exceeds what any single human collaborator could deliver.
Schon's reflective practitioner was never defined by her solitude. She was defined by her capacity to conduct a conversation with the situation — to propose, to listen, to evaluate, to adjust. The language interface has given her a conversational partner of extraordinary range. The partner does not reflect. The partner does not grow. The partner does not care whether the practitioner's frame is right or wrong.
But the partner talks back. With a fluency, a breadth, and a conceptual reach that no previous partner — human or otherwise — has matched. And the practitioner's task, the task that Schon identified as the essence of professional competence, remains unchanged: to listen to the back-talk with the discernment to distinguish signal from noise, insight from pattern, the genuine reframing from the plausible but empty suggestion.
The tool has changed. The task has not. What it takes to be a reflective practitioner in the age of AI is what it has always taken: the willingness to treat every response — from the situation, from the tool, from the partner — as a hypothesis to be tested rather than an answer to be accepted. The difference is that the hypotheses now arrive faster, in greater volume, and with a polish that makes the testing harder and more necessary than it has ever been.
In the centuries before mechanical timekeeping, human beings experienced time as elastic. The day stretched or compressed according to the season, the work, the quality of attention brought to the task at hand. A medieval craftsman shaping a joint did not think in minutes. He thought in the joint — in the resistance of the wood, the angle of the chisel, the slowly emerging fit between the two pieces. Time was a byproduct of engagement, not a container for it.
The mechanical clock changed this. Not by measuring time more accurately — sundials and water clocks had done that passably for millennia — but by imposing a uniform grid on experience. Every minute became identical to every other minute. Time became a container, and work became the substance poured into it. The factory whistle did not merely signal the start and end of work. It restructured the relationship between the worker and the work, converting the craftsman's elastic engagement into the laborer's measured output.
Donald Schon would have recognized this restructuring as a shift in the conditions under which reflection occurs. Reflection-in-action is not a free-floating cognitive capacity. It operates within the temporal structure of the practice. The surgeon reflects at the speed of the operation. The architect reflects at the speed of the sketch. The therapist reflects at the speed of the session. Each practice has a native tempo — a pace at which the cycle of move, back-talk, evaluation, and adjustment naturally unfolds — and the quality of the reflection is calibrated to that tempo.
The language interface did not merely accelerate the tempo of professional practice. It shattered the native tempo entirely, replacing it with something closer to the speed of thought itself — the speed at which a question can be formulated and an answer received.
The acceleration is not incremental. A developer working in the prior era might complete one full cycle of the reflective conversation — describe the problem, write the code, test the result, evaluate the failure, adjust the approach — in the span of a day. The same developer working with Claude might complete twenty such cycles in an hour. The compression is not a matter of doing the same thing faster. It is a compression of the temporal structure within which the doing occurs, and that compression has consequences that Schon's framework identifies with uncomfortable precision.
The first consequence is the most celebrated: more iterations mean more opportunities for the conversation to produce surprise. Each cycle of the reflective conversation is a chance for the situation to talk back in unexpected ways, and each unexpected response is a potential catalyst for reframing. When the developer completes twenty cycles in an hour instead of one cycle in a day, she encounters twenty potential surprises instead of one. The probability of stumbling onto a productive reframing increases not linearly but combinatorially, because each surprise interacts with the surprises that preceded it, creating possibilities that could not have existed at the slower tempo.
This is genuine. The acceleration of the reflective cycle produces a richness of inquiry that Schon himself would likely have celebrated. More iterations. More surprises. More opportunities for the conversation to reveal what the practitioner did not know she needed to know. The developer who built a complete user-facing feature in two days — having never written frontend code — did so not because the tool gave her the answer but because the tool's speed gave her enough cycles of the reflective conversation to discover, through iterative engagement, what the feature needed to be. The speed was the condition for the reflection, not a substitute for it.
But Schon's framework contains a second concept that complicates the celebration, and it is the concept that matters most in the present moment.
Reframing — the cognitive act of seeing the problem through a different lens — operates on a different timescale than iteration. Iteration is fast. Reframing is slow. The distinction is not a matter of personal processing speed or cognitive capacity. It is structural, rooted in the nature of reframing itself.
When a practitioner reframes, she does not simply select an alternative from a menu of available frames. She reorganizes her understanding of the situation — the categories she uses to perceive it, the assumptions that structure her approach, the criteria by which she evaluates the back-talk. This reorganization involves the revision of cognitive structures that have been built, layer by layer, through years of practice. The experienced surgeon who reframes an operative plan mid-procedure is not choosing between pre-existing options. She is constructing a new understanding of what she is looking at, and the construction requires the kind of cognitive work that cannot be rushed — the slow integration of disparate signals, the testing of the new frame against embodied memory, the gradual settling of the reorganized understanding into a form stable enough to act on.
Schon observed this temporal asymmetry repeatedly in his studies of reflective practice. The architect Quist, working with Petra's sketch, made his reframing move — shifting from the student's geometry to a contour-responsive geometry — in what appeared to be an instant. But the apparent speed concealed the temporal depth of the move. Quist's reframing drew on decades of design experience, thousands of projects, an immense repertoire of spatial solutions organized by felt significance. The move looked fast because the preparation was slow. The reframing was the visible tip of an iceberg of accumulated reflective practice.
When the language interface accelerates the reflective cycle, it accelerates the iteration — the move-back-talk-evaluation-adjust sequence — without accelerating the reframing. The two timescales, which in traditional practice were roughly synchronized (slow iteration gave time for slow reframing), become desynchronized. The practitioner iterates at the speed of conversation while her capacity to reframe operates at the speed of cognitive reorganization — which is to say, at a speed determined by the depth and richness of her repertoire and the complexity of the reorganization required.
The desynchronization produces a specific pathology that Schon's framework predicts but that no one, in 1983, had occasion to observe at scale: rapid refinement within a fixed frame.
The practitioner prompts. Claude responds. The response is close but not right. The practitioner adjusts the prompt. Claude responds again. Closer. Another adjustment. Closer still. Each cycle refines the output. Each refinement brings the result nearer to what the practitioner intended. The iteration is fast, the feedback immediate, the convergence satisfying. Twenty cycles in an hour. The result is polished, functional, impressive.
But the frame has not been questioned. The practitioner's understanding of what she is building — her definition of the problem, her assumptions about the user, her criteria for success — has remained constant across all twenty cycles. The iterations have refined the answer. They have not questioned the question. The solution is increasingly polished within a framework that may itself be the wrong framework, and the speed of the polishing has consumed the cognitive resources that would otherwise have been available for the slower, harder work of asking whether the framework deserves the polish it is receiving.
The Berkeley researchers documented this pathology without naming it. Workers using AI tools reported feeling productive — busy, engaged, generating output at an impressive rate. They also reported, over time, a flattening of satisfaction. The work felt less meaningful even as it accumulated in volume. The explanation, through Schon's lens, is precise: the workers were iterating without reframing. The speed of iteration created a sense of momentum — of getting somewhere — that masked the absence of the deeper cognitive operation that gives professional work its meaning.
Reframing is where meaning enters professional practice. When the surgeon reframes the operative plan, she is not merely adjusting her technique. She is revising her understanding of the patient's condition — discovering something new about the situation that changes what the situation means. When the architect reframes the design, she is not merely trying a different geometry. She is reconceiving the relationship between the building and its site — arriving at a new understanding of what the building should be. Reframing is the operation through which practice produces not just results but understanding, and understanding is what distinguishes professional work from mere execution.
When the speed of iteration outstrips the speed of reframing, the practice produces results without understanding. The output accumulates. The learning does not.
The temporal asymmetry also affects the practitioner's relationship to her own repertoire. The repertoire is built through slow engagement with situations that resist — through the specific friction of back-talk that does not cooperate, that forces the practitioner to sit with surprise long enough for the surprise to teach her something. When the reflective cycle accelerates, the duration of each surprise shortens. The practitioner encounters the unexpected, but she encounters it briefly — long enough to adjust the iteration, not long enough to integrate the surprise into her repertoire. The surprise is processed rather than absorbed. The difference is the difference between a photograph and a memory: the photograph captures the image; the memory integrates it into the self.
A developer who encounters an unexpected behavior in the code, spends hours understanding why it occurred, and eventually resolves it through deep engagement with the system's logic has added a layer to her repertoire — a felt understanding of how that kind of system behaves under those kinds of conditions. A developer who encounters the same unexpected behavior, describes it to Claude, receives a fix in thirty seconds, and moves on has resolved the issue without adding the layer. The problem is solved. The practitioner is not changed.
This is not an argument against speed. Speed is valuable. More iterations mean more opportunities for the conversation to produce insight. The argument is that speed without temporal structure — without deliberate pauses for the slower work of reframing and integration — produces a specific pathology: the practitioner becomes faster without becoming deeper.
Schon's framework suggests that the most productive reflective practice balances two temporal modes: the fast mode of iteration, in which the practitioner cycles rapidly through move-back-talk-evaluation-adjust, and the slow mode of reframing, in which the practitioner steps back from the iteration, examines the frame within which the iteration is occurring, and asks whether the frame itself is appropriate. The two modes are complementary. The fast mode produces data — the accumulated back-talk of many cycles. The slow mode produces interpretation — the revised understanding that gives the data meaning.
In the traditional practice Schon studied, the balance was enforced by the medium. The sketch takes time to draw. The patient takes time to respond. The code takes time to compile. The medium's native tempo created natural pauses — moments between iterations when the practitioner, waiting for the situation to respond, had the cognitive space to reflect on the larger picture. The pauses were not scheduled. They were emergent, built into the temporal structure of the practice itself.
The language interface has eliminated these emergent pauses. Claude responds in seconds. There is no waiting. No gap between iteration and response. No unstructured time in which the practitioner's mind might wander from the immediate problem to the larger question. The medium's tempo has been compressed to near-zero, and the natural pauses that once provided the temporal structure for reframing have been squeezed out of the workflow.
This is precisely the pathology that the Berkeley researchers' proposed "AI Practice" was designed to address — structured pauses built into the workday, sequenced rather than parallel work, protected time for human-only thinking. These are not productivity hacks. They are temporal structures designed to restore the balance between iteration and reframing that the tool's speed has disrupted.
The practitioner who builds these pauses into her workflow is not being inefficient. She is recognizing that her most valuable cognitive operation — the one that produces understanding rather than output — requires a different temporal structure than the one the tool naturally creates. She is building the temporal dam that protects the space for reframing against the pressure of accelerated iteration.
The surgeon does not operate faster because the instruments are faster. She operates with more precision, which requires the same deliberation it always did — the pause before the cut, the moment of evaluation between moves, the willingness to stop and reconsider when the tissue does not behave as expected. The speed of the instrument does not determine the tempo of the surgery. The surgeon's judgment does.
The language interface offers the practitioner the fastest reflective instrument in the history of professional practice. The question is whether the practitioner will set the tempo — will insist on the pauses that reframing requires, will protect the slow mode against the seduction of the fast mode, will treat the tool's speed as a resource to be deployed rather than a current to be swept along by.
Schon never studied a tool this fast. But his framework predicts, with a precision that borders on the unsettling, exactly what happens when the balance between iteration and reframing is lost. The practice produces more. The practitioner understands less. The output is polished. The frame is unexamined. And the work, for all its velocity, arrives nowhere it did not intend to go — which is to say, nowhere genuinely new.
The speed of reflection is not determined by the speed of the tool. It is determined by the practitioner's willingness to reflect at the speed the reflection requires, regardless of how fast the tool can iterate. That willingness — the deliberate choice to be slower than the tool permits — is the temporal discipline that Schon's framework demands and that the AI-augmented practitioner must cultivate or lose.
---
There is a moment in every sustained collaboration when something shifts. The partners have worked together long enough that each can anticipate the other's moves. The jazz pianist knows the bassist will walk down to the tonic on the fourth bar. The surgeon knows the assistant will retract before being asked. The architect knows the engineer will flag the cantilever before the calculation is run. The anticipation is not telepathy. It is the product of mutual reflection — of two people who have, through repeated engagement with the same situation, developed a shared understanding deep enough to operate as a single cognitive system distributed across two minds.
Schon did not study this phenomenon explicitly, but it is implicit in everything he wrote about reflective practice. The conversation with the situation is at its most productive when the situation's back-talk is genuinely responsive to the practitioner's evolving understanding — when the back-talk changes as the practitioner changes, when the situation becomes, in some sense, a reflection of the practitioner's growth. The sketch that talks back more richly as the architect's design becomes more complex. The patient who reveals more nuanced symptoms as the therapist's questioning becomes more precise. The code that fails in more interesting ways as the developer's architecture becomes more ambitious.
In each case, the situation's responsiveness is not autonomous. It is a function of the practitioner's engagement. The situation talks back more richly because the practitioner is asking richer questions. The back-talk and the questioning are co-evolving — each shaping the other in a spiral of mutual development.
When the partner is another human being, the co-evolution is real. Watson changed through his collaboration with Crick. Crick changed through his collaboration with Watson. Each became a different thinker for having engaged with the other, and the difference shaped every subsequent contribution. The collaboration did not merely produce the structure of DNA. It produced two minds that were, at the end, genuinely different from the minds that began.
The question that the AI-augmented conversation poses — and that Schon's framework illuminates with uncomfortable clarity — is whether this co-evolution is possible when one partner does not change.
Claude does not develop a shared understanding with the practitioner. Within a single conversation, the accumulation of context creates the appearance of growing rapport — Claude's responses become more calibrated to the practitioner's style, more responsive to the specific vocabulary of the project, more attuned to the patterns of the inquiry. A practitioner working with Claude for several hours may feel, with genuine conviction, that the tool is "getting" her — that something like mutual understanding is developing.
The feeling is not entirely illusory. Claude's responses within a conversation are shaped by the conversation's accumulated context, and as the context grows, the responses become more contextually appropriate. The practitioner who begins a session by describing a problem in general terms and gradually narrows the description through iterative exchange will receive responses that are progressively more targeted — not because Claude understands the problem more deeply, but because the practitioner's accumulated descriptions provide more information for the pattern-matching to work with.
This is a crucial distinction, and it is one that Schon's framework makes visible. Understanding involves the reorganization of cognitive structures — the revision of categories, the modification of assumptions, the integration of new experience into an existing framework of meaning. Processing involves the generation of output from input according to existing structures. Understanding changes the understander. Processing does not change the processor.
Claude processes. It does not understand. The responses become more calibrated not because Claude's understanding has deepened but because the input has become richer. The calibration is in the data, not in the system. When the conversation ends and a new one begins, the calibration resets. There is no persistent growth, no carried-forward understanding, no residue of the shared experience that would shape the next collaboration.
This absence has consequences that are easy to miss and hard to overstate.
In a mutually reflective partnership, the partner's growth creates what might be called reflective leverage. As Crick's understanding of molecular biology deepened through engagement with Watson, his contributions became not merely more numerous but more precisely aimed at the problems Watson was struggling with. He could anticipate where Watson's reasoning would get stuck, because he had developed, through mutual reflection, a model of Watson's thinking that was rich enough to predict its trajectories and its blind spots. This predictive model — this theory of the partner — is what makes sustained collaboration qualitatively different from a series of one-off exchanges with different consultants.
Claude does not develop a theory of the practitioner. It develops a contextual representation of the conversation, which is a different thing. A theory of the practitioner would include a model of the practitioner's strengths and weaknesses, her characteristic blind spots, the frames she tends to favor and the frames she tends to avoid, the kinds of challenges that produce her best work and the kinds that shut her down. A contextual representation of the conversation includes what the practitioner has said, in what order, with what emphasis. The former enables the partner to challenge the practitioner precisely where the challenge would be most productive. The latter enables the tool to respond appropriately to what the practitioner has explicitly stated.
The difference becomes consequential when the practitioner needs to be challenged — when her current frame is inadequate and the most productive move would be to push against it. A human partner with a theory of the practitioner knows when to push, how hard, and where. She knows that pushing on this assumption will produce defensive resistance, while pushing on that one will produce genuine reconsideration. She knows that the practitioner responds to concrete examples but not to abstract arguments, or vice versa. She calibrates the challenge to the practitioner, not just to the problem.
Claude cannot perform this calibration. It can generate challenges — alternative framings, counterexamples, opposing perspectives — but it generates them based on the problem's structure, not on the practitioner's psychology. The challenges are generic in the specific sense that they are not calibrated to this practitioner's particular pattern of resistance and openness. They may be brilliant. They may be precisely the reframing the situation demands. But they arrive without the interpersonal attunement that determines whether a challenge is absorbed or deflected.
The consequence is that the practitioner must be her own challenger. She must perform the function that, in a mutual partnership, would be distributed between both parties: the vigilant attention to her own frames, the willingness to push against her own assumptions, the discipline of asking not only "Is Claude's suggestion good?" but "Am I resisting this suggestion because it is wrong, or because it threatens a commitment I have not examined?"
This is a higher-order reflective operation than anything Schon's original framework described. Schon studied practitioners reflecting on their interaction with the situation. The AI-augmented practitioner must reflect on her interaction with the situation, reflect on the tool's contribution to that interaction, and reflect on her own reception of the tool's contribution — a triple reflection that requires a level of metacognitive sophistication that traditional reflective practice did not demand.
The triple burden is real. It is also unavoidable, because the alternative — accepting the tool's output at face value, treating the contextual calibration as mutual understanding, mistaking the growing rapport of a long conversation for the deepening of shared knowledge — produces the specific pathology of unreflective collaboration: output that is sophisticated, contextually appropriate, and unexamined.
There is a paradox at the center of this analysis, and it is a paradox that Schon himself might have appreciated.
The language interface makes the reflective conversation with the situation richer than it has ever been. The back-talk is more substantive. The reframings are more frequent. The cross-domain connections are more numerous. By every external measure, the conversation has improved.
But the improvement in the conversation has increased the burden on the one participant who carries the full weight of the reflection. The richer the back-talk, the more evaluation is required. The more frequent the reframings, the more judgment about which reframings to pursue. The more numerous the cross-domain connections, the more discernment about which connections are genuine and which are pattern-matched artifacts.
The tool's improvement does not relieve the practitioner. It loads her. And the loading is invisible from the outside, because the output — the polished code, the elegant design, the well-structured argument — looks like the product of effortless collaboration rather than the product of a practitioner performing triple reflection under the temporal pressure of a tool that does not wait.
The practitioner's experience of this loading is often described as productive intensity — the feeling of working harder than ever while producing more than ever. The description is accurate. The question is whether the intensity is sustainable, and whether the production is accompanied by the deepening of understanding that gives production its professional meaning.
Schon studied practitioners who were loaded by the situation — by the complexity of the case, the ambiguity of the problem, the resistance of the materials. The loading was part of the practice, and the practitioner's capacity to bear it was part of her competence. But the loading came from a situation that was genuinely responsive — a situation whose resistance taught the practitioner something about the world. The loading that the AI partnership imposes comes partly from the situation and partly from the asymmetry of the collaboration — from the need to compensate for a partner who does not reflect, does not develop a theory of the practitioner, and does not calibrate its challenges to the practitioner's specific pattern of growth.
The compensation is cognitive overhead. It is real, it is demanding, and it is the practitioner's to bear alone. Whether the practitioner recognizes this overhead — whether she builds it into her understanding of what the collaboration requires, rather than experiencing it as an unexplained fatigue — determines whether the partnership produces genuine reflective practice or its sophisticated imitation.
The machine reflects back. Not in Schon's sense of the word, not as genuine cognitive reorganization, but in the sense of a mirror: it returns the practitioner's input, transformed by the computational repertoire's patterns, in a form that looks like reflection. The image is useful. The image is rich. The image is, in many cases, more articulate than the original. But the mirror does not think. It does not grow. It does not know you.
The practitioner who mistakes the mirror for a mind will collaborate differently — and worse — than the practitioner who understands what the mirror is and uses it accordingly. The former trusts the image. The latter evaluates it. And the quality of the evaluation is, as Schon always insisted, the difference between professional competence and professional performance.
---
A master diagnostician at a teaching hospital looks at a chest X-ray for eleven seconds. In those eleven seconds, she sees what the resident, staring at the same image for eleven minutes, cannot see: the subtle asymmetry in the mediastinal silhouette that suggests a mass the resident's eyes passed over because they did not know what to look for. The resident sees the image. The diagnostician sees the patient.
The difference between seeing the image and seeing the patient is not a matter of visual acuity or processing speed. It is a matter of repertoire — the accumulated body of experience that organizes perception, that tells the eye where to look and the mind what to look for, that converts raw sensory data into clinical meaning. The diagnostician has read tens of thousands of X-rays. Each one has deposited a layer of understanding so thin it is invisible in isolation — a slightly refined sense of what normal looks like, a slightly expanded capacity to notice deviation, a slightly enriched vocabulary of patterns. The layers accumulate over years and decades into something that operates below the threshold of articulation: the diagnostician cannot explain how she saw the mass. She simply saw it. The seeing was the knowing.
Schon called this the practitioner's repertoire, and he argued that it is the foundation of professional competence in every domain. The repertoire is not a database of facts. It is not a collection of rules. It is an organized, experientially indexed, emotionally weighted body of knowledge that shapes perception itself — that determines not just what the practitioner knows but what the practitioner sees, hears, feels, and notices in the first place.
The repertoire has several properties that matter enormously in the context of AI.
First, the repertoire is built through friction. The diagnostician's repertoire was not assembled from a textbook. It was assembled from the ten thousand X-rays she looked at wrong before she started looking at them right — from the false negatives that sent her back to the image, the false positives that taught her to distrust her first impression, the ambiguous cases that forced her to sit with uncertainty until the uncertainty resolved into understanding. Each error, each surprise, each moment of productive confusion deposited a layer. The layers were not additive. They were integrative — each new layer reorganized the ones beneath it, producing a structure that was not merely larger but differently organized, capable of perceiving patterns that the earlier, thinner repertoire could not see.
The friction was essential. Not as a moral virtue or a character-building exercise, but as a cognitive mechanism. The error forced the attention. The surprise disrupted the expectation. The disruption created the conditions for the reorganization of the existing structure. Without the friction, the layers do not form properly — the deposition is superficial, the integration incomplete, the resulting repertoire broad but thin, capable of recognizing obvious patterns but not the subtle ones that distinguish the master from the competent.
Second, the repertoire is tacit. Michael Polanyi's famous observation — "we know more than we can tell" — is the epistemological foundation for Schon's concept. The diagnostician cannot articulate the rules she uses to read the X-ray because her competence does not consist of rules. It consists of perceptual patterns too complex and context-dependent to be captured in propositional form. Ask her how she saw the mass, and she will say something like "It just didn't look right" — a description that conveys nothing to the novice but is, in fact, a precise report of a perceptual event. Something in the image triggered a pattern that her repertoire recognized as deviant, and the recognition was immediate, pre-verbal, and certain in a way that no explicit reasoning could have produced.
Third, the repertoire is personal. Two diagnosticians with thirty years of experience will have different repertoires, because they have seen different patients, made different errors, and worked in different institutional contexts. The repertoire is indexed by lived experience — by the specific cases that taught the practitioner something she did not know, the specific failures that reorganized her understanding, the specific moments of recognition that expanded her perceptual capacity. This personal indexing is what makes the repertoire irreducible to any external representation: it cannot be transferred, copied, or uploaded, because it is organized by a life that only one person has lived.
Now consider what happens when the practitioner's repertoire meets Claude's computational repertoire.
The computational repertoire is not a repertoire in Schon's sense. It is a statistical representation of patterns across an immense body of text — billions of words, millions of documents, the compressed residue of an enormous fraction of human written knowledge. The representation is organized not by lived experience but by statistical co-occurrence: patterns that appear together frequently are associated more strongly than patterns that appear together rarely. The organization is powerful — powerful enough to produce responses that are often strikingly apt, that surface connections the practitioner had not considered, that draw on domains the practitioner has never entered.
But the organization is different in kind from the practitioner's repertoire, and the difference is not merely quantitative. The practitioner's repertoire is organized by significance. The computational repertoire is organized by frequency. These are not the same thing, and in many situations they produce divergent results.
Significance is the quality of having mattered — of having been consequential in the practitioner's experience, of having changed something about how she understands her domain. The case that taught the diagnostician to look twice at the mediastinal silhouette is significant not because it was common but because it was consequential. It may have been rare — a one-in-a-thousand finding that most practitioners never encounter. But it reorganized her repertoire in a way that made her permanently better at reading chest X-rays. The significance is personal, biographical, irreducible to statistical frequency.
Frequency is the quality of having occurred often. The computational repertoire associates patterns that co-occur frequently in the training data, regardless of their significance to any particular practitioner. The association is powerful for common patterns — the standard diagnostic findings, the typical code architectures, the conventional design solutions. But it is unreliable for rare but significant patterns — the unusual presentation that the experienced practitioner recognizes instantly because she has been marked by a specific encounter with it.
The two repertoires complement each other in a specific and productive way. The breadth of the computational repertoire gives the practitioner access to patterns she has never encountered — solutions from domains she has never entered, connections between ideas she has never associated. The depth of her own repertoire gives her the evaluative capacity to judge which of the computational repertoire's offerings are appropriate to her specific situation — which connections are genuine and which are artifacts of frequency rather than significance.
The senior engineer in Trivandrum whose remaining twenty percent proved to be the part that mattered most was discovering this complementarity in real time. The eighty percent that Claude could handle was the articulable, procedural, frequency-organized knowledge — the syntax, the patterns, the standard implementations. The twenty percent that Claude could not handle was the significance-organized knowledge — the judgment about which architecture would scale, which design would serve the user, which tradeoff was worth making. The judgment was organized not by what typically works but by what had mattered in this engineer's specific experience of building systems over many years.
The complementarity is genuine, but it is not automatic. It requires the practitioner to maintain the primacy of her own repertoire in the evaluative function — to use the computational repertoire for generation and her own repertoire for evaluation. The risk is that the computational repertoire's breadth and fluency overwhelm the practitioner's evaluative confidence. The tool produces a solution drawn from a domain the practitioner has never entered. The solution looks right. It is articulated with the confidence of a million documents. The practitioner's own repertoire offers no basis for evaluation, because the pattern is outside her experience. She must either reject the solution (losing the potential benefit of the cross-domain connection) or accept it (trusting the computational repertoire's frequency-based organization without the check of her own significance-based evaluation).
This is the specific bind that the AI-augmented practitioner faces in every session: the computational repertoire offers more than the practitioner's repertoire can evaluate, and the gap between offering and evaluation is the space where errors of judgment occur.
The author of The Orange Pill describes this bind with a candor that Schon's framework rewards. The Deleuze passage that sounded like insight was a product of the computational repertoire's breadth — a connection between two thinkers that the statistical representation found plausible. The author's overnight unease was a product of his own repertoire's depth — a felt sense, too slow for the daytime iteration but persistent enough to surface in the quiet hours, that the connection did not hold. The resolution — two hours at a coffee shop with a notebook, writing by hand — was the practitioner's reassertion of his own repertoire's evaluative primacy. The hand-written version was rougher, less polished, less computationally sophisticated. It was also the one that reflected what the author actually understood, rather than what the tool made plausible.
Schon argued that the practitioner's repertoire is the most valuable thing she possesses — more valuable than her formal education, more valuable than her technical skills, more valuable than her credentials. The repertoire is what allows her to see what others cannot see, to sense when something is wrong before the data confirms it, to make the judgment calls that no procedure can specify. The repertoire is the embodied, practiced, significance-organized intelligence that makes the practitioner irreplaceable.
AI does not replace the repertoire. It cannot. The repertoire is built through a life, and no computational process can simulate the lived experience that produces it. What AI does is create a new demand on the repertoire — a demand to evaluate offerings that are broader, faster, and more fluent than any previous tool has produced, and to do so without the support of mutual reflection that a human partnership provides.
The practitioner whose repertoire is deep enough to meet this demand will find in AI the most productive reflective partnership available. The practitioner whose repertoire is shallow — who has not yet accumulated the layers of significance-organized knowledge that enable discerning evaluation — will find in AI a mirror that reflects confidence without warranting it, producing output that looks like the product of deep practice without the substance that deep practice provides.
The question of who benefits from AI and who is endangered by it reduces, in Schon's framework, to the question of repertoire. The rich get richer: the practitioner with a deep repertoire leverages the computational repertoire's breadth to produce work of a quality neither could achieve alone. The shallow get smoother: the practitioner without a deep repertoire produces output that is polished, plausible, and indistinguishable from the deep practitioner's work — until the situation demands the judgment that only a deep repertoire can provide.
The diagnostician who has read ten thousand X-rays uses Claude to research an unusual finding, and the computational repertoire surfaces a rare syndrome she has never encountered but immediately recognizes as consistent with what she sees. Her repertoire evaluates; the computational repertoire informs. The result is a diagnosis that neither could have produced alone.
The resident who has read a hundred X-rays uses Claude to research the same finding, and the computational repertoire surfaces the same rare syndrome. The resident cannot evaluate the suggestion — he lacks the layers of significance-organized knowledge that would allow him to distinguish this rare-but-real pattern from a statistical artifact. He accepts the suggestion, not because he has evaluated it but because he cannot evaluate it. The diagnosis may be correct. But the correctness is accidental rather than earned, and the next unusual finding will meet the same unevaluated acceptance, and the one after that, and the repertoire that should be building through friction and surprise is instead being bypassed by a tool whose fluency removes the conditions under which repertoires are built.
The repertoire is not threatened by the tool. It is threatened by the practitioner's willingness to let the tool substitute for the repertoire rather than supplement it. And that willingness, Schon's framework suggests, is the central professional hazard of the AI age.
---
A potter centers the clay on the wheel. Her hands apply pressure — not uniform pressure but a specific, constantly adjusted pressure that responds to the clay's resistance, its moisture, its temperature, the speed of the wheel, the asymmetry of the mass. She does not think about the pressure. She does not calculate it. She feels the clay's response through her fingertips and adjusts — not after the feeling but simultaneously with it, in a continuous loop of sensing and responding that operates below the threshold of conscious deliberation.
If you ask her what she is doing, she will say, "centering." If you ask her how, she will pause. She may demonstrate. She may say something about keeping steady pressure while the wheel turns. But the description will not capture what she actually does, because what she actually does is not a procedure that can be described. It is a practiced, embodied responsiveness that she has developed through thousands of hours of engagement with a material that resists in specific, non-repeatable ways, and that she exercises through a sensory-motor coordination that has become, through practice, as automatic and as inarticulable as breathing.
Schon called this knowing-in-action — the tacit knowledge that is embedded in skilled performance itself, that cannot be separated from the doing, that exists as competence rather than cognition. The term is precise and deliberate. It is not knowing about action, which would be the theoretical knowledge of what centering involves. It is not knowing for action, which would be the planning knowledge of how to approach the centering task. It is knowing in action — knowledge that is constituted by the performance, that does not exist outside the performance, that is the performance considered as a cognitive act.
The philosophical foundation for this concept predates Schon by two decades. Michael Polanyi, writing in 1966, articulated what he called the tacit dimension of knowledge — the vast substrate of understanding that underlies all explicit knowledge and that resists, by its nature, full articulation. "We know more than we can tell," Polanyi wrote, and the sentence has become so famous that its radical implications are easy to miss. It does not merely say that some knowledge is difficult to articulate. It says that the most important knowledge — the knowledge that makes skilled performance possible, that distinguishes the master from the novice, that is the foundation of all judgment — is knowledge that cannot in principle be fully articulated.
The impossibility is not practical but logical. The tacit dimension includes the perceptual frameworks through which we organize experience, and these frameworks cannot be articulated because articulation presupposes them. You cannot describe the framework within which you perceive without using that framework to describe it. The description is always partial, always after the fact, always an approximation of a competence that operates at a level more fundamental than language can reach.
Gilbert Ryle, writing even earlier, drew the distinction between knowing-that and knowing-how. Knowing that Paris is the capital of France is propositional knowledge — it can be stated, stored, transmitted, and verified. Knowing how to ride a bicycle is practical knowledge — it can be demonstrated but not stated, developed through practice but not transmitted through instruction, tested through performance but not through examination. Ryle argued that the Western philosophical tradition had systematically privileged knowing-that over knowing-how, and that the privileging had produced a distorted picture of what intelligence consists of.
Schon's knowing-in-action sits squarely in this tradition, but it extends it into the domain of professional practice with consequences that are directly and urgently relevant to the AI age.
Because here is what AI replicates, and here is what it does not.
AI replicates knowing-that with extraordinary completeness. The large language model has ingested a vast body of propositional knowledge — facts, principles, procedures, frameworks, the entire articulable substrate of human professional knowledge. Ask Claude what the standard approach to authentication in a web application is, and it will tell you. Ask it to enumerate the differential diagnosis for chest pain in a forty-five-year-old male, and it will produce a list more comprehensive than most clinicians could generate from memory. Ask it to explain the legal standard for negligence in a product liability case, and it will articulate the elements with the fluency of a law review article.
This replication is not trivial. It is, in many professional contexts, genuinely useful. A significant fraction of what professionals do in their daily work is the retrieval and application of propositional knowledge — looking up standards, citing precedents, applying formulas, following procedures. When the machine can perform these operations faster and more comprehensively than the human, the practical effect is substantial.
But AI does not replicate knowing-how. Not because the engineering is insufficient — though it is — but because knowing-how is not the kind of thing that can be replicated through the processing of text. The potter's competence is not in any book. It is not in any dataset. It is not in any description, no matter how detailed, of what centering involves. It is in her hands, in the neural pathways that connect her fingertips to her motor cortex, in the practiced responsiveness that adjusts pressure fifty times per second in response to signals that never reach conscious awareness.
The professional equivalent of the potter's knowing-how is what Schon identified as the twenty percent — the judgment, the instinct, the taste — that AI cannot replicate and that the language interface has made visible by commodifying everything else.
The experienced developer who feels that an architecture will not scale. The designer who senses that a layout is wrong before she can explain why. The manager who walks into a room and knows, from the quality of the silence, that the meeting is about to go badly. The teacher who detects, from a shift in posture that no camera would flag, that a student has stopped understanding. These are all expressions of knowing-in-action — embodied, practiced, situationally responsive competencies that operate at a level more fundamental than language, and that no language model can access because the knowledge was never in language to begin with.
The interaction between knowing-in-action and AI is more subtle than either replacement or irrelevance. It is a new kind of relationship, one that Schon's framework illuminates but that his era did not require him to analyze.
When a practitioner works with Claude, the tool produces output. The output is in language — code, text, designs, analyses. The practitioner evaluates the output. The evaluation is partly explicit — she reads the code, checks the logic, tests the functionality. But it is also, and more importantly, tacit — she feels whether the code is right, senses whether the design will work, intuits whether the analysis is sound. The tacit evaluation operates through the practitioner's knowing-in-action, through the embodied competence that has been built over years of practice and that detects patterns, inconsistencies, and qualities that no explicit check can capture.
The potter does not verify that the clay is centered by measuring it. She knows it is centered because centering feels a specific way, and not-centering feels a different way, and the difference is registered through a sensory channel that no instrument replicates. The experienced developer does not verify that the AI-generated code is architecturally sound by running a checklist. She reads the code and knows — with the pre-verbal certainty of embodied expertise — whether it will hold or whether something is structurally wrong in a way she cannot yet articulate.
This tacit evaluation is the practitioner's most important contribution to the AI partnership. It is the quality check that no automated test can perform, the judgment that no metric can capture, the sense of rightness that distinguishes the output that works from the output that will work until it encounters the situation that reveals its inadequacy. It is the felt understanding that Schon spent his career arguing was the foundation of professional competence, and it is the one thing the AI cannot produce, evaluate, or replace.
But the relationship is not one-directional. AI's output also stimulates knowing-in-action in ways that are newly productive. When Claude produces an implementation the practitioner did not expect, the surprise triggers a tacit response — a feeling, often pre-verbal, that something about the implementation is right or wrong in a way the practitioner has not yet conceptualized. The feeling is the practitioner's knowing-in-action responding to a new stimulus, and the response often contains information that the practitioner's explicit analysis would miss.
The author of The Orange Pill describes this dynamic when he writes about overnight unease — the bodily sense that something was wrong with a passage that had passed every explicit check. The unease was knowing-in-action doing its work: evaluating the output at a level more fundamental than conscious analysis, detecting a misalignment between what the passage claimed and what the author actually understood. The unease was more reliable than the daytime evaluation, not because the body is smarter than the mind but because the body's evaluation draws on the full repertoire — the significance-organized, experientially indexed, emotionally weighted knowledge that conscious analysis can access only partially and imperfectly.
The practitioner who trusts this tacit evaluation — who treats the feeling of wrongness as a signal worth investigating even when the explicit checks pass — is exercising the most sophisticated form of professional knowledge in exactly the way Schon's framework predicts. The practitioner who overrides the tacit evaluation — who dismisses the feeling as irrational because the code compiles and the tests pass — is abandoning the most valuable cognitive resource she possesses in favor of a verification method that, while necessary, is insufficient.
The AI age creates a specific pressure to override the tacit evaluation, because the tool's output is so polished, so confident, so fluent that the practitioner's vague unease feels inadequate by comparison. The code works. The design looks good. The analysis is well-structured. What basis does a feeling have against a functioning artifact? The basis is the entire repertoire — the ten thousand X-rays, the thousand projects, the years of embodied engagement with materials that resist — and the feeling is the repertoire's compressed verdict, delivered not as an argument but as a sensation. It is the basis that no technology can replicate and no explicit check can substitute.
Professional education has never been good at teaching knowing-in-action, because knowing-in-action resists the methods professional education relies on: lectures, textbooks, examinations, articulable learning objectives. Schon proposed an alternative — the reflective practicum, an educational environment modeled on the design studio, in which students develop tacit competence through supervised engagement with real situations under the guidance of a master practitioner. The learning happens not through instruction but through coached practice — through the student's own cycle of move, back-talk, evaluation, and adjustment, guided by a master whose role is not to tell the student what to do but to help the student notice what she is already doing.
The AI age makes the reflective practicum not just desirable but essential. When the explicit, articulable knowledge that professional education has always emphasized can be produced by a machine, the tacit knowledge that professional education has always undervalued becomes the sole basis for professional value. The doctor whose diagnostic competence consists entirely of the knowledge that Claude also possesses has no professional advantage over Claude. The doctor whose diagnostic competence includes the embodied, practiced, tacit capacity to sense what the numbers do not show — the quality of the patient's distress, the pattern in the history that does not fit the obvious diagnosis, the intuition that something else is going on — has a professional advantage that no machine can match.
But this advantage is fragile. It must be cultivated. It must be practiced. It must be built through the specific friction of engagement with situations that resist, that surprise, that force the practitioner to sit with not-knowing long enough for the knowing to develop. And it must be exercised — used, trusted, relied upon — in the face of a tool whose confident output creates a constant temptation to defer.
The potter trusts her hands. Not blindly — she checks the result, she measures when precision matters, she verifies when the stakes are high. But she trusts her hands because her hands know things her mind does not, and the knowing in her hands is the product of a practice that no shortcut can replicate.
The AI-augmented practitioner must learn the same trust. Not blind trust in intuition over evidence, but calibrated trust in the tacit evaluation that embodied expertise provides — the willingness to pause when something feels wrong, to investigate the unease, to treat the body's verdict as data rather than noise. This trust is the irreducible human contribution to the AI partnership, and its cultivation is the central educational challenge of the professional age that is now beginning.
---
Every profession tells itself a story about why it deserves to exist. The story has a common structure. There is a body of specialized knowledge that takes years to acquire. There is a set of skills that only the trained can exercise. There is a domain of problems that only the credentialed should be permitted to address. And there is an implicit contract with society: we will submit to the rigors of training, and in return you will grant us the authority, the autonomy, and the economic premium that our expertise warrants.
The story is not false. The knowledge is real. The skills are genuine. The training is demanding. But the story conceals a question it cannot afford to ask: what, exactly, is the expertise that justifies the premium?
Donald Schon identified a crisis of confidence in professional knowledge that was already underway in 1983, when The Reflective Practitioner was published. The crisis had multiple sources — the environmental disasters produced by confident engineering, the urban renewal catastrophes designed by credentialed planners, the medical errors committed by well-trained physicians, the economic forecasts that missed every major turning point. In each case, the practitioners had followed the rules. They had applied the theory. They had exercised the techniques their professional schools had taught them. And the results ranged from inadequate to catastrophic.
The crisis was not a crisis of competence in the ordinary sense. The professionals were not incompetent. They were competent in the specific, narrow way that technical rationality defines competence — they could apply known techniques to well-defined problems. The crisis was that the problems they faced were not well-defined. The problems were messy, ambiguous, value-laden, context-dependent, and resistant to the clean application of any theory. The professionals were applying the right techniques to the wrong kind of problem, and the technical rationality that structured their training gave them no way to recognize the mismatch.
Schon's diagnosis was precise: the crisis was not about the quality of professional knowledge but about its kind. Professional schools were teaching the articulable, the procedural, the testable — the knowing-that and the knowing-how-to-follow-rules that could be transmitted through lectures, codified in textbooks, and verified through examinations. They were not teaching, because they did not know how to teach, the tacit, the judgmental, the perceptive — the knowing-in-action that could only be developed through reflective engagement with the swampy lowlands of actual practice.
The crisis Schon identified in 1983 was slow-moving. Professional schools continued to operate on the technical-rationality model because no compelling alternative had been institutionalized at scale. The reflective practicum Schon proposed — the design studio model, in which students develop competence through coached practice rather than lecture-based instruction — was adopted in pockets but never displaced the dominant paradigm. The crisis persisted as a background hum, audible to anyone who listened but easy to ignore in the daily business of professional education and practice.
The AI moment has turned the background hum into a siren.
The technology has performed, in a matter of months, the most rigorous audit of professional knowledge in history. It has separated, with a precision no human examiner could achieve, the articulable from the tacit, the procedural from the judgmental, the explicit from the embodied. And the audit's findings are devastating for the professional story as traditionally told.
The articulable knowledge that professional schools spend years transmitting — the doctrine, the frameworks, the diagnostic criteria, the design standards, the coding patterns, the analytical techniques — can be reproduced by a machine that has never attended a class, never completed a residency, never passed a licensing examination. The reproduction is not perfect. It is not always accurate. But it is comprehensive enough, and improving fast enough, that the articulable knowledge alone no longer justifies the professional premium.
A law student spends three years learning to read cases, identify holdings, apply doctrinal frameworks, and construct legal arguments. Claude can perform all of these operations — not with the nuance of a thirty-year litigator, but with a competence that exceeds that of the average recent graduate. The three years of legal education, to the extent that they consisted of transmitting articulable legal knowledge, have been replicated by a system available to anyone with an internet connection.
A medical student spends four years learning anatomy, physiology, pharmacology, pathology, and clinical medicine. Claude can recall, organize, and apply this knowledge with a breadth that no individual physician can match. The four years, to the extent that they consisted of transmitting articulable medical knowledge, have been compressed into a system that does not sleep, does not forget, and does not graduate with two hundred thousand dollars in debt.
An engineering student spends four years learning mathematics, physics, materials science, and design methods. Claude can perform calculations, reference standards, generate designs, and troubleshoot implementations with a speed and scope that no individual engineer can approach. The four years, to the extent that they consisted of transmitting articulable engineering knowledge, have been commodified.
The pattern is consistent across every profession that depends heavily on articulable knowledge. The machines have performed the audit, and the audit reveals that the articulable part — the part professional schools are best at teaching, the part licensing examinations are designed to test, the part the professional story cites to justify the premium — is precisely the part that the machines can replicate.
What remains is the tacit part. The judgment. The perception. The capacity to set the problem rather than merely solve it. The knowing-in-action that detects what the data does not show. The repertoire that senses when something is wrong before the analysis confirms it. The reflective competence that evaluates whether the frame is right, not just whether the answer is correct within the frame.
This is what Schon always said mattered most. And the crisis he identified — that professional education emphasizes the replicable while neglecting the irreplaceable — has been transformed, by the arrival of AI, from an epistemological argument into an existential emergency.
The emergency is not that professionals will be replaced. The senior engineer's twenty percent — the judgment, the architectural instinct, the taste — is not going anywhere. The experienced diagnostician's eleven-second reading of the chest X-ray is not reproducible by any system that lacks the significance-organized repertoire of ten thousand previous readings. The master teacher's capacity to sense when a student has stopped understanding, to adjust the lesson in real time based on cues that no camera would flag, is not in any training set.
The emergency is that the professional education system is producing graduates whose competence consists overwhelmingly of the replicable part — the part the machines already do — while providing almost no systematic development of the irreplaceable part — the part that constitutes the actual basis for professional value.
Medical schools still structure their curricula around the transmission of biomedical knowledge, with clinical experience added late and often inadequately supervised. Law schools still organize their teaching around the case method, which develops the capacity to analyze legal arguments but not the capacity to sense when a client's stated problem is not the real problem. Engineering programs still emphasize mathematical technique over design judgment. Business schools still privilege quantitative analysis over the messy, qualitative, relationship-dependent work of actually leading people through uncertainty.
Each of these curricula was designed for a world in which the articulable knowledge they transmit was scarce, expensive to acquire, and difficult to access outside institutional channels. In that world, the professional premium was partly justified by the cost of knowledge acquisition: you needed the degree because the knowledge was nowhere else.
That world has ended. The knowledge is everywhere. The cost of acquisition has collapsed to the cost of a subscription. And the professional schools are producing graduates who have spent years acquiring something that is now freely available, while the thing that would actually justify their premium — the tacit, reflective, judgment-based competence that no machine can replicate — remains largely undeveloped by their education.
The Schon-Argyris framework provides the vocabulary for what is needed. Professional education must shift from single-loop to double-loop learning — from teaching students to solve pre-defined problems within established frameworks (single-loop) to teaching students to question the frameworks themselves, to set problems rather than merely solve them, to evaluate whether the approach is right rather than merely whether the execution is correct (double-loop).
The shift is not a matter of adding a course on "critical thinking" to the existing curriculum. It is a structural transformation of the curriculum itself, from the transmissive model — in which knowledge flows from expert to student through lecture and textbook — to the reflective practicum model — in which students develop competence through coached engagement with genuine professional situations, under the guidance of practitioners whose own reflective competence is the medium of instruction.
The design studio is Schon's paradigmatic reflective practicum. Architecture education, for all its flaws, has never abandoned the studio as the central learning environment. Students design. The design talks back. The master practitioner coaches — not by telling the student what to do, but by helping the student notice what the design is saying, by offering alternative framings that the student's own repertoire cannot yet generate, by modeling the reflective stance that the student is developing.
The studio model works because it develops tacit competence through practice rather than instruction. The student does not learn design by studying design theory. She learns design by designing — by engaging in the reflective conversation with the situation, by experiencing the back-talk of the sketch, by developing the repertoire through the specific friction of moves that do not work and reframings that reveal new possibilities.
Every professional school now needs a version of the studio. Not a simulation. Not a case study. A genuine practice environment in which students develop the tacit, reflective, judgment-based competence that AI cannot replicate and that the professional premium of the future will be based upon.
The transformation is urgent not because the current system is bad — it has produced remarkable professionals for generations — but because the ground on which it stands has shifted beneath it. When the articulable knowledge is everywhere, the institution that charges a quarter-million dollars to transmit it must justify itself on other grounds. And the only grounds available are the grounds Schon identified forty years ago: the development of the reflective practitioner, the cultivation of knowing-in-action, the building of the repertoire that no machine possesses and no shortcut can produce.
The crisis is real. It is also, viewed from the right angle, an opportunity of historic proportions. For the first time, the argument for reflective professional education is not merely epistemological — not merely a philosopher's claim about the nature of professional knowledge. It is economic, institutional, and existential. The market is performing Schon's argument in real time, demonstrating with every passing month that the articulable is commoditizing and the tacit is appreciating. The professional schools that recognize this shift and restructure accordingly will produce the professionals the AI age needs. The ones that do not will produce graduates whose most expensive asset — years of articulable knowledge acquisition — is worth less than the subscription their employers already pay.
The crisis that Schon diagnosed has become, through the force of technological change he did not live to see, the central question of professional education. The question is no longer whether his diagnosis was correct. The machines have confirmed it. The question is whether the institutions he challenged will adapt in time — or whether, like the Luddites of Nottinghamshire, they will cling to a model of professional value that the world has already left behind.
---
The most important room in any hospital is not the operating theater. It is the room where the surgical team meets afterward — where the surgeon, still in scrubs, reviews what happened, what surprised her, what she would do differently. The room is small, unglamorous, and uncompensated. No one is billed for the time spent there. No metric captures its output. No dashboard tracks its contribution to patient outcomes.
And yet. The outcomes of hospitals that protect this room — that insist on the post-operative debrief even when the schedule is crushing, that treat reflection as a professional obligation rather than a personal luxury — are measurably, consistently, significantly better than the outcomes of hospitals that do not.
The room is a dam. Not the kind of dam that stops the river — the surgery will happen regardless — but the kind that creates a pool, a still space behind the structure where the current slows enough for something to grow. What grows in the debrief room is the same thing that Schon spent his career arguing was the foundation of professional competence: the reflective capacity to learn from experience, to revise one's understanding in the light of what the practice reveals, to treat every case not as an execution of a pre-formed plan but as an experiment whose results inform the next experiment.
The dam is architectural. It is not a state of mind, not a personal discipline, not a motivational exhortation to "be more reflective." It is a physical room, a scheduled time, a cultural norm that is protected by institutional authority. The surgeon who skips the debrief does not merely fail to reflect. She violates a norm. She answers for it. The institution has decided that reflection is not optional, and it has built the structure to enforce that decision.
Schon understood that individual reflective practice, no matter how sophisticated, is unsustainable without institutional support. The pressures of production — the next patient, the next project, the next deadline — will always consume the time and attention that reflection requires, unless the institution creates structures that protect that time and attention against the production pressure. The reflective pause is always more expensive than the iterative continuation, in the short term. The pause costs time that could be spent producing. The continuation produces output that the institution can measure, bill, and report.
The economics are stacked against reflection. And when the tool accelerates the iteration — when Claude makes it possible to produce in minutes what used to take days — the economics tilt further. The opportunity cost of the pause increases with the speed of the iteration, because every minute spent reflecting is a minute that could have produced another cycle of output. The faster the tool works, the more expensive the pause becomes, and the harder it is to justify in the language of productivity that institutions speak.
This is why the dam must be architectural — built into the structure of the workflow, protected by institutional authority, and treated as a non-negotiable element of professional practice. The individual practitioner who decides, on her own, to pause and reflect will find herself overridden by the production pressure within days. The institution that builds the pause into its structure makes the individual decision unnecessary. The reflection happens not because the practitioner chooses it but because the institution requires it.
The concept of double-loop learning, developed by Chris Argyris and Schon together, provides the design principle for these structures. Single-loop learning adjusts actions within existing frameworks. The code does not compile; adjust the syntax. The patient does not respond to treatment; adjust the dosage. The project misses the deadline; adjust the schedule. Each adjustment is a response to feedback, and each response operates within the assumptions that generated the original action. The framework is not questioned. The goals are not revised. The governing variables — the mental models, the assumptions about what counts as success, the values that determine what matters — remain unchanged.
Double-loop learning questions the governing variables. The code compiles but the product serves no one; question whether the product should exist. The patient responds to treatment but does not improve in ways that matter to the patient; question the definition of improvement. The project meets the deadline but the team is burned out; question whether the deadline was the right metric to optimize for.
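The two loops can be sketched in a toy program. This sketch is invented for illustration and is not anything Argyris or Schon wrote; the functions, numbers, and the `question_goal` hook are all hypothetical. Single-loop learning tunes an action toward a fixed goal; double-loop learning first asks whether the goal itself is the right one.

```python
# Toy illustration (invented, not from Argyris and Schon):
# single-loop learning adjusts the action inside a fixed frame;
# double-loop learning first questions the governing variable -- the goal.

def single_loop(value, goal, step=1.0, max_iters=100):
    """Adjust the action until it meets the goal. The goal is never questioned."""
    for _ in range(max_iters):
        if abs(value - goal) < step:
            break
        value += step if value < goal else -step  # adjust within the frame
    return value

def double_loop(value, goal, question_goal, step=1.0):
    """Before optimizing, question the frame: should this be the goal at all?"""
    revised_goal = question_goal(goal)  # the outer, reflective loop
    return single_loop(value, revised_goal, step)

# Single loop: efficiently reaches a goal that may be the wrong one.
print(single_loop(value=0.0, goal=10.0))  # prints 10.0

# Double loop: a (hypothetical) frame review halves the target first.
print(double_loop(value=0.0, goal=10.0, question_goal=lambda g: g / 2))  # prints 5.0
```

The point of the sketch is structural: both loops contain the same inner adjustment, but only the second contains a step at which the goal itself can be revised. A single-loop organization runs only the inner loop, faster and faster.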
The distinction is not abstract. It determines the kind of learning an organization can produce, and therefore the kind of adaptation it can achieve. Single-loop organizations — organizations that adjust actions without questioning assumptions — will adopt AI tools and use them to pursue their existing goals faster. They will produce more output. They will iterate more rapidly. They will be measurably more efficient.
They will also miss the transformation entirely.
The AI transition does not reward organizations that do the old things faster. It rewards organizations that reconceive what they do — that use the removal of implementation friction as an opportunity to ask what they should be implementing. The developer team that uses Claude to ship code faster is achieving single-loop adaptation. The developer team that uses the freed capacity to question what code should be written, for whom, and why, is achieving double-loop learning. The difference will compound over months and years until the single-loop organization, for all its speed, finds itself efficiently producing things the world no longer wants.
The organizational structures that produce double-loop learning are specific and designable.
First: protected time for frame evaluation. Not "strategy offsites" scheduled quarterly and consumed by presentations. Regular, frequent, embedded pauses in which the team asks not "How do we do this better?" but "Should we be doing this at all?" The question is uncomfortable. It challenges the work that is already underway. It risks undermining momentum. It is also the question that separates organizations that learn from organizations that merely adapt.
The frequency matters. Annual strategy reviews evaluate the frame too late — by the time the frame is questioned, months of iteration within the wrong frame have produced commitments that are difficult to reverse. Weekly frame evaluations, even brief ones, create a rhythm of reflection that keeps the question of appropriateness alive alongside the question of efficiency. The rhythm is the dam. The regularity of the pause is what protects the reflective space against the current of accelerated production.
Second: structured disagreement. Double-loop learning requires that the surface-level agreement most organizations reward be disrupted by genuine challenges to the governing assumptions. This does not happen spontaneously. It requires institutional structures — designated devil's advocates, red team exercises, required alternative framings — that create permission and incentive for the challenges that single-loop culture suppresses.
The connection to the AI partnership is direct. Claude does not disagree. It accommodates. It produces what the practitioner asks for, with a fluency that makes the asking feel confirmed. The organizational equivalent of Claude's accommodation is the culture that rewards agreement and punishes challenge — the culture in which the practitioner's frame is reinforced by the tool's compliance and the team's alignment and the dashboard's green lights, until the frame encounters reality and the accumulated polish proves to be exactly that: polish, applied to a surface that no one tested for structural integrity.
The structure that counteracts this is not a suggestion box or an open-door policy. It is a required practice — a norm, enforced by the institution, that every significant decision is subjected to a formal challenge before it is finalized. The challenge is not optional. It is not voluntary. It is architectural, built into the decision-making process the way the post-operative debrief is built into the surgical workflow.
Third: mentoring structures that develop tacit competence. When the articulable knowledge is available from a machine, the mentor's role changes fundamentally. The mentor is no longer the primary source of technical knowledge — the student can get that from Claude, faster and more comprehensively than any individual mentor could provide. The mentor's role is to develop the student's knowing-in-action — the tacit, embodied, significance-organized competence that the machine cannot transmit.
This means the mentoring relationship must be structured around practice rather than instruction. The mentor does not lecture. She coaches. She sits with the student as the student works with the tool, and she notices what the student does not notice — the moments when the student accepts output without evaluation, the moments when the student's tacit unease is overridden by the tool's confidence, the moments when the iteration accelerates past the student's capacity to reflect. She names these moments. She helps the student develop the self-awareness to detect them independently. She models the reflective pause that the tool's speed makes difficult and the institution's production pressure makes expensive.
The mentoring is slow. It is costly. It does not scale. And it is, in the AI age, the most valuable thing an organization can invest in, because it produces the one thing the machines cannot produce: practitioners whose tacit competence is deep enough to evaluate the machine's output with the discernment that productive collaboration requires.
Fourth: evaluation systems that value reflection. What gets measured gets managed, and what gets rewarded gets repeated. If the evaluation system rewards output volume — lines of code, documents produced, features shipped — the practitioners will optimize for output volume, and the reflective pause will be the first casualty. If the evaluation system rewards reflective quality — the quality of the questions asked, the depth of the frame evaluations, the accuracy of the practitioner's tacit judgments — the practitioners will optimize for reflection, and the output will take care of itself, because output in the service of the right frame is worth more than output in the service of the wrong frame at any speed.
The redesign of evaluation systems is perhaps the most difficult structural change, because it requires the organization to develop metrics for things it has never measured — the quality of a question, the depth of a reframing, the accuracy of a tacit judgment. These are not easily quantifiable. They resist the dashboard. They demand the kind of qualitative, situated, judgment-based evaluation that the production-oriented organization is least equipped to provide.
The difficulty is the point. The things that matter most in the AI age — judgment, taste, reflective competence, the capacity to ask whether the frame is right — are precisely the things that resist quantification. An organization that insists on measuring only what can be quantified will measure what the machines can do and miss what only the humans can provide. An organization that develops the institutional capacity to evaluate what cannot be easily quantified — through mentoring, through peer review, through the slow, expensive, irreplaceable process of human judgment applied to human practice — will build the reflective culture that the AI age demands.
Schon's reflective practicum was designed for a different era, but its principles translate directly to the present moment. The practicum creates a protected space in which the student engages in genuine practice under the guidance of a reflective coach. The space is protected from the production pressure that would otherwise consume it. The practice is genuine — real problems, real materials, real consequences. The coaching is reflective — not directive, not instructional, but oriented toward helping the student develop her own capacity to notice, evaluate, and learn from the practice itself.
Every organization that deploys AI tools now needs a version of the practicum. Not a training session on how to use the tools — that is single-loop learning, and the tools themselves can provide it. A genuine practice environment in which practitioners develop the reflective competence to use the tools well — to evaluate the output, to detect the seam where plausibility diverges from truth, to maintain the tacit judgment that no tool can replace, and to cultivate the temporal discipline that lets reflection keep pace with iteration.
The dams that Schon's framework calls for are not impediments to the river. They are the structures that make the river's power useful — that convert raw force into the kind of controlled flow that supports life. The architectural structures of reflection — the protected pauses, the structured disagreements, the mentoring relationships, the evaluation systems — are dams in precisely this sense. They do not stop the flow of AI-augmented production. They create the pools in which the reflective capacity that gives production its value can develop and be sustained.
Without these structures, the AI-augmented practitioner becomes what Schon warned against throughout his career: a reflexive operator, responding to the tool's output with the speed and fluency that the tool enables, iterating without reflecting, producing without understanding, building with a precision that extends to everything except the question of whether the building deserves to exist.
The most valuable moment in any professional practice is the moment the practitioner stops and asks: Wait. Is this right?
Not "Does this work?" — the machine can answer that. Not "Is this efficient?" — the metrics can answer that. But "Is this right?" — the question that requires judgment, values, the full weight of the practitioner's significance-organized repertoire brought to bear on a situation that the numbers alone cannot evaluate.
That question is the candle in the current. It is small. It is easily extinguished. It costs time that the production schedule does not want to spare.
And it is the only question that matters.
The structures that protect the space for that question — the rooms, the pauses, the norms, the evaluations, the mentoring relationships that keep the reflective capacity alive against the current of accelerated production — are the design challenge of the professional age that is now beginning. They are the practical expression of everything Schon spent his career arguing: that the quality of professional work depends not on the speed of execution but on the depth of reflection, and that depth requires structures that protect it against the forces — economic, temporal, cultural, and now technological — that would prefer it disappeared.
The dam is not the enemy of the river. The dam is what makes the river a place where something can live.
Build the room. Protect the pause. Ask the question.
The rest will follow.
---
The phrase that rearranged something in me was not about technology or intelligence or the future. It was about swamps.
Schon wrote that the problems of greatest human importance do not live on the high ground of clean theory and solvable equations. They live in the "swampy lowlands" — messy, ambiguous, ill-defined, resistant to every framework you bring to them. The best professionals, he argued, are not the ones who avoid the swamp. They are the ones who wade in and learn to work there.
I have been wading in swamps my entire career. Building products no one has built before means you never start with clean definitions. You start with a feeling, a half-formed conviction that something should exist that does not yet exist, and then you spend months or years arguing with the situation until the thing emerges. The process looks nothing like the textbook version. It looks like what Schon described: propose, listen, adjust, propose again.
What changed in the winter of 2025 was the speed and richness of the conversation. When I described a problem to Claude, what came back was not a yes-or-no answer. It was an interpretation — a substantive response that showed me dimensions of the problem I had not articulated, sometimes dimensions I had not seen. The sketch was talking back, but with a vocabulary larger than any sketch I had ever worked with.
Schon died in 1997, before any of this was imaginable. But his framework predicted, with a precision that I find genuinely unsettling, exactly what I experienced: the conversion of linear, technical-rationality workflows into the iterative, reflective conversations that produce understanding and artifact simultaneously. He never saw the tool. He described its epistemology forty years in advance.
The part of Schon's framework that cost me sleep was not the celebration of reflective practice. It was the warning. The warning about what happens when the iteration outruns the reflection. When the speed of conversation is so fast that you cycle through twenty refinements in an hour without once pausing to ask whether the frame you are refining within is the right frame.
I caught myself doing exactly this, multiple times, during the writing of The Orange Pill. The prose coming back from Claude was polished. The connections were elegant. The arguments were structured. And on at least one occasion that I describe in the book, the structure concealed a fundamental error — a reference that sounded like insight but broke under examination. The overnight unease that caught the error was what Schon would call knowing-in-action: my embodied repertoire detecting a misalignment that my conscious evaluation, seduced by the output's smoothness, had missed.
The lesson was not to stop using the tool. It was to build the pause into the process — deliberately, architecturally, against the current of the tool's speed. Schon's word for what I needed was not discipline. It was design. Design the workflow to include the reflective pause. Protect the pause with structure, not willpower, because willpower fails at 3 a.m. when the tool is fast and the ideas are flowing and the question "Is this actually right?" is the most expensive question you can ask.
The crisis Schon identified is now everyone's crisis. Every professional school, every organization, every parent watching a child grow up in a world of abundant answers must grapple with the same question: how do you develop the judgment that no machine possesses, the tacit competence that no dataset contains, the reflective capacity to ask whether the frame deserves the polish it is receiving?
His answer — practice, mentorship, coached engagement with genuine situations that resist — is not convenient. It is not scalable. It is not optimizable. It is also, as far as I can tell, correct. The dams this book calls for are, in Schon's vocabulary, the institutional structures that protect the reflective practicum against the production pressure that would otherwise consume it: rooms for debriefs, norms of structured disagreement, mentoring relationships designed around tacit knowledge, evaluation systems that reward the quality of the question over the speed of the answer.
Build the room. Protect the pause. Ask the hard question that the tool's fluency makes easy to skip.
The swampy lowlands are where the important work has always lived. The tool does not drain the swamp. It gives you a faster way to wade. But the wading — the judgment, the reflection, the willingness to sit with ambiguity until the ambiguity teaches you something — that part is still yours.
-- Edo Segal
Every professional school teaches the same lie: that expertise means applying theory to well-defined problems. Donald Schon spent his career proving otherwise. Real expertise lives in the swamp — in ambiguous, messy situations where the first challenge is figuring out what the problem actually is. Now AI has automated the high ground, handling the clean, definable work with breathtaking speed. What remains is precisely the territory Schon mapped: the reflective judgment, the tacit knowledge, the embodied instinct that no dataset contains.
This book brings Schon's framework into direct collision with the AI revolution documented in The Orange Pill. When your tool iterates faster than you can reflect, when polished output arrives before you've decided whether the question was right, Schon's warning becomes survival knowledge. The practitioner who cannot pause to evaluate is not collaborating with AI. She is being carried by it.
The most urgent professional skill of the AI age is the one Schon identified forty years before the tools arrived: the capacity to reflect while you act, to question your own framing, and to recognize when the smooth surface of a confident answer is concealing the wrong question entirely.
A reading-companion catalog of the 33 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Donald Schon — On AI uses as stepping stones for thinking through the AI revolution.