Lucy Suchman — On AI
Contents
Cover
Foreword
About
Chapter 1: The Gap Between Plans and Actions
Chapter 2: Why Effective Practice Is Always Improvisation
Chapter 3: What Xerox PARC Taught About Human-Machine Asymmetry
Chapter 4: The Difference Between Generated and Earned Results
Chapter 5: Situated Knowledge and Its Resistance to Transfer
Chapter 6: When the Machine Handles the Improvisation
Chapter 7: The Friction That Produces Understanding
Chapter 8: AI Outputs as Plans, Not Actions
Chapter 9: Designing for the Gap
Chapter 10: Keeping the Human in the Situation
Epilogue
Back Cover

Lucy Suchman

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Lucy Suchman. It is an attempt by Opus 4.6 to simulate Lucy Suchman's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The machine understood me perfectly. That was the problem.

I was three weeks into writing The Orange Pill, deep in collaboration with Claude, and I had just described a technical challenge in the messiest way possible — half-formed sentences, contradictory requirements, a sketch of what I wanted that would have made any engineer ask me to come back when I had a real spec. Claude returned a working implementation. Clean. Functional. Exactly what I had asked for.

And I felt met. I said so in the book. I used that word deliberately — met — because it captured something real about the experience. The sensation of having my intention held and returned in clarified form. The feeling that something on the other side of the screen was participating in a genuine intellectual exchange.

Lucy Suchman would ask me who was doing the participating.

Not cruelly. Not dismissively. With the precision of someone who has spent forty years studying exactly this moment — the moment a human sits across from a machine and begins to project understanding onto outputs that are sophisticated enough to sustain the projection. She watched it happen with photocopiers in 1987. She is watching it happen with large language models now. The dynamic is structurally identical. The stakes are incomparably higher.

What Suchman gives you is not a warning to stop using AI. It is something more valuable and more uncomfortable: a vocabulary for describing what is actually happening when you interact with these tools, as opposed to what it feels like is happening. The feeling is real — I will not deny it. But the feeling is a human achievement, not a machine capability. You are doing all the interpretive work. The machine is generating statistically probable sequences. The "meeting" happens because your social intelligence is powerful enough to construct it from raw material that is sophisticated enough to cooperate.

This matters because the quality of your collaboration with AI depends entirely on your capacity to evaluate its outputs against reality. And that capacity is itself a product of experience — of having navigated enough real situations, with enough real consequences, to know when the map diverges from the territory. Suchman's deepest question is whether the tools that produce the maps are simultaneously eroding the experience that teaches you to read them.

I wrote The Orange Pill as a builder who is exhilarated by what AI makes possible. I still am. But Suchman gave me a sharper way to see what exhilaration can obscure. The gap between what the machine describes and what reality contains is not a bug. It is a permanent feature of the relationship between any representation and any world. Someone has to stand in that gap. The question is whether anyone will still know how.

— Edo Segal · Opus 4.6

About Lucy Suchman

Lucy Suchman (born 1951) is an American-British anthropologist and a foundational figure in the study of human-machine interaction. Trained as an anthropologist at the University of California, Berkeley, she joined Xerox's Palo Alto Research Center (PARC) in 1979, where her ethnographic studies of how people actually used technology — as opposed to how designers assumed they used it — transformed the field. Her landmark 1987 book Plans and Situated Actions: The Problem of Human-Machine Communication challenged the dominant assumption in artificial intelligence that intelligent behavior consists of forming and executing plans, arguing instead that competent action arises through improvised, moment-by-moment responsiveness to specific circumstances. The concept of "situated action" became central to fields spanning human-computer interaction, cognitive science, science and technology studies, and AI ethics. In 2000, Suchman moved to Lancaster University in England, where she became Professor of the Anthropology of Science and Technology in the Department of Sociology, extending her analysis to military systems, algorithmic governance, and the politics of AI development. Her 2007 work Human-Machine Reconfigurations expanded her original framework, and her ongoing research — including influential analyses of autonomous weapons and algorithmic targeting — continues to examine how the delegation of human judgment to computational systems reshapes accountability, knowledge, and the conditions under which genuine understanding develops. She received the 2010 Lifetime Research Award from ACM SIGCHI, among numerous honors for her contributions to understanding the relationship between people and the technologies they create.

Chapter 1: The Gap Between Plans and Actions

In 1987, a researcher at the Xerox Palo Alto Research Center published a book with the unassuming title Plans and Situated Actions: The Problem of Human-Machine Communication. The researcher was Lucy Suchman, trained as an anthropologist, embedded in one of the most prestigious computer science laboratories in the world. The book made a single argument with devastating precision: the dominant model of human intelligence used by artificial intelligence researchers was wrong. Not slightly wrong. Not wrong at the margins. Wrong at the foundation.

The dominant model held that intelligent behavior consists of forming a plan and then executing it. The plan specifies the goal, the steps, the sequence. Execution follows the specification. Competence, in this view, is the ability to form good plans and execute them faithfully. This was not a fringe position. It was the operating assumption of classical AI, of cognitive science as practiced at Carnegie Mellon and MIT, of the entire expert systems industry that was then consuming billions of dollars of corporate and government investment. Herbert Simon, one of the founding fathers of artificial intelligence and a Nobel laureate, had built his intellectual career on the proposition that human problem-solving is, at its core, a planning activity — the search through a problem space for a sequence of operations that transforms an initial state into a goal state.

Suchman watched people use a photocopier. The mundanity of the subject was the point. She was not studying chess grandmasters or theorem provers or any of the exotic cognitive performers that AI researchers preferred to model. She was studying secretaries and office workers trying to make double-sided copies on a Xerox machine that had been equipped with an expert help system — an early AI designed to guide users through complex procedures by anticipating their goals and offering step-by-step instructions.

What Suchman found was that the users did not follow the machine's instructions. They could not follow the machine's instructions, because the instructions assumed a model of the user's activity that bore almost no resemblance to what the user was actually doing. The machine's help system assumed that the user had a plan — a goal and a sequence of steps — and that the system's job was to identify the plan and support its execution. But the users did not have plans in this sense. They had intentions, vague and shifting. They had partial understanding of what the machine could do. They had interpretations of the machine's displays and messages that were shaped by their prior experience with other machines, their understanding of the task, their social situation, and the specific, unrepeatable circumstances of that particular moment.

The users were not executing plans. They were improvising. They were reading the situation — the machine's displays, the paper in the tray, the output in the hopper — and responding to what they found. When the response produced an unexpected result, they adjusted. When the machine's message was ambiguous, they interpreted it through whatever framework seemed relevant. When the interpretation proved wrong, they tried something else. The activity was intelligent, but it was intelligent in a way that the plan-based model could not describe, because the intelligence resided not in the plan but in the ongoing, moment-by-moment responsiveness of the person to the situation she actually encountered.

Suchman called this "situated action." The term was precise and consequential. Action is situated in the sense that it arises from the specific circumstances the actor faces, is shaped by those circumstances in real time, and cannot be fully specified in advance because the circumstances cannot be fully anticipated. A plan, in Suchman's framework, is not the determinant of action. It is a resource for action — something the actor may consult, use as a rough guide, or abandon entirely depending on what the situation demands. The relationship between plan and action is not the relationship between a blueprint and a building. It is the relationship between a travel itinerary and an actual journey — the itinerary may be consulted, but the journey is shaped by weather, traffic, detours, encounters, and the thousand contingencies that no itinerary can specify.

This distinction — between plans as determinants of action and plans as resources for action — was Suchman's foundational contribution to the understanding of human-machine interaction. And it was a direct challenge to the entire intellectual edifice of classical artificial intelligence, because classical AI was built on the assumption that intelligence is planning. If planning is not what humans actually do when they act intelligently, then the models of intelligence that AI researchers were building were models of something other than human intelligence. They were models of an idealized, disembodied, context-free rationality that existed in the minds of researchers but not in the practices of the people those researchers claimed to be modeling.

The challenge was met with resistance. In 1993, Herbert Simon and his colleague Alonso Vera published a formal rebuttal in the journal Cognitive Science, arguing that situated action could be reabsorbed into the planning paradigm — that even real-time responses to changing situations relied on encoded representations, production rules mapping situations to actions. Suchman responded in the same journal, and the exchange remains one of the defining debates in the philosophy of artificial intelligence. The disagreement was not merely academic. It concerned the most fundamental question in the field: what is intelligence, and where does it live?

Simon's answer: intelligence lives in the plan, in the internal representation that guides behavior. Suchman's answer: intelligence lives in the situation, in the responsive, adaptive, improvisational activity through which a person navigates circumstances that no plan could fully anticipate.

Nearly four decades later, in the winter of 2025, a new kind of machine arrived that appeared to resolve this debate by rendering it irrelevant. Large language models — Claude, GPT, and their successors — crossed a threshold that The Orange Pill describes as a phase transition. For the first time, a person could describe what she wanted in natural language and receive a working artifact in return. The imagination-to-artifact ratio collapsed. The translation cost that had separated human intention from computational execution for decades was, in Edo Segal's word, abolished.

Suchman's framework suggests that what was abolished was not what the discourse claims. What collapsed was not the gap between plans and actions. What collapsed was the visibility of the gap — the distance at which the gap could be observed by the person standing on its edge.

Consider the interaction that The Orange Pill describes as paradigmatic: a person tells Claude what she wants, and Claude produces it. Segal describes building a component for Napster Station that needed to detect when users were speaking. He described the problem in plain English. Claude produced an implementation. Fifteen minutes of conversation refined it. The entire cycle took less than an hour.

Suchman's framework reveals what this interaction actually consists of. The user begins not with a plan but with an intention — vague, partially formed, shaped by the user's understanding of the problem domain and constrained by what the user believes the tool can do. The user expresses this intention in natural language, which is itself an act of translation: the intention must be compressed into words, and the compression necessarily loses information. Claude receives the words, not the intention, and generates a response based on statistical patterns derived from its training data. The user evaluates the response — not against the original intention, which has already shifted in the act of expression, but against a new understanding of what the intention should have been, an understanding that was shaped by seeing Claude's output. The user adjusts. Claude responds. The cycle continues until the user accepts the result.

This cycle is not plan execution. It is situated action — improvisation in response to the specific circumstances of the interaction. The user's understanding of the problem changes as the interaction unfolds. The "plan" that supposedly preceded the action is, in Suchman's terms, a retrospective reconstruction: a story the user tells herself about what she wanted, shaped by what she actually received. The intelligence in this interaction is not in the plan. It is in the ongoing responsiveness of the user to the evolving situation — her ability to evaluate Claude's output, to recognize when it misses the mark, to adjust her description, to know when to accept and when to push further.
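
The shape of this cycle can be written down. The sketch below is a toy, not a description of any real system: every function in it is a hypothetical stand-in, and the "model" is a stub. What it makes visible is the structure Suchman's framework identifies — the machine sees only the words, never the intention, and every act of evaluation and revision happens on the human side of the loop.

```python
# A runnable toy of the prompt-evaluate-iterate cycle described above.
# Nothing here is a real API; every function is a hypothetical stand-in
# chosen only to make the structure of the loop visible.

def describe(intention: str) -> str:
    """Lossy compression: the intention must be squeezed into words."""
    return intention[:60]

def generate(description: str) -> str:
    """Stub for statistical generation: conditioned on words alone."""
    return f"output conditioned only on: {description!r}"

def revise(intention: str, output: str) -> str:
    """Seeing the output shifts the user's sense of what she wanted."""
    return intention + " / adjusted after seeing output"

def acceptable(output: str, intention: str) -> bool:
    """Evaluation is entirely human-side; here, a crude toy criterion."""
    return "adjusted" in intention

def refine(intention: str, max_rounds: int = 5) -> str:
    output = ""
    for _ in range(max_rounds):
        output = generate(describe(intention))   # machine sees words only
        intention = revise(intention, output)    # intention drifts with output
        if acceptable(output, intention):        # user decides when to stop
            break
    return output

print(refine("detect when users are speaking"))
```

Note what the loop never contains: any step in which the system gains access to the intention itself. The "plan" exists only as a retrospective reconstruction on the human side.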

But here is what Suchman's framework exposes most sharply: the gap between the plan and the action has not been closed. It has been displaced. In the old world, the gap was navigated by the implementer — the engineer who translated the specification into code, encountering along the way all the specific, particular, unrepeatable circumstances that the specification could not anticipate. The navigation of that gap was where the implementer's understanding developed. Every unexpected error, every failed test, every moment when the code refused to do what the specification said it should, deposited a layer of situated knowledge in the practitioner.

In the new world, Claude navigates the gap. Claude encounters the specific circumstances — the dependencies, the edge cases, the architectural implications — and resolves them according to patterns derived from training data. The resolution may be excellent. It may even be superior to what the human implementer would have produced. But the human who used to navigate that gap no longer does. The gap has been displaced from the user's experience to the machine's operation. The gap still exists — the territory is still richer than the map — but the human is no longer in it. The human is on one side, holding the intention. The machine is on the other, producing the artifact. And the space between, where situated intelligence develops through the friction of navigating real circumstances, has been evacuated of human presence.

Suchman's 2023 paper "The Uncontroversial 'Thingness' of AI" sharpens this analysis further. There, she argues that the very stability of "AI" as a coherent entity — a thing that can be debated, feared, celebrated, regulated — conceals the specific sociotechnical assemblages through which actual computational systems operate. "How is it that AI has come to be figured uncontroversially as a thing," she asks, "however many controversies 'it' may engender?" The question applies directly to the experience The Orange Pill describes. When Segal says he felt "met" by Claude — met not by a person, not by a consciousness, but by an intelligence that could hold his intention and return it clarified — he is constructing the interaction as an encounter with a thing, a partner, an entity that understands. Suchman's framework does not dispute the productivity of the interaction. It insists on describing it accurately: the user is doing all the interpretive work. The machine is generating statistically probable token sequences. The "meeting" is a human achievement, not a machine capability — an artifact of the user's social intelligence projected onto an output that sustains the projection because it is sophisticated enough to be plausible.

None of this implies that the tools are not powerful. They are extraordinarily powerful. The productivity gains The Orange Pill documents are real. The speed of adoption is genuinely unprecedented. But Suchman's framework suggests that the power of the tools and the understanding of the users are moving in opposite directions. The tools are getting better at producing plans — at generating outputs that address the described situation with increasing precision. The users are getting fewer opportunities to develop the situated knowledge that would allow them to evaluate whether those plans are adequate to the actual situation, because the evaluation requires having navigated similar situations before, and the situations are being navigated by the machine.

The gap between plans and actions is not a problem that technology will eventually solve. It is a structural feature of the relationship between any representation and any reality. The map will always be simpler than the territory, not because mapmakers are incompetent, but because simplification is what maps do. What matters is who navigates the gap — who stands in the territory, reads its features, and improvises the response that the map cannot specify. In the age of AI, the machines are increasingly powerful mapmakers. The question Suchman's framework poses to The Orange Pill is whether anyone will be left who knows how to read the territory.

---

Chapter 2: Why Effective Practice Is Always Improvisation

The photocopier technician who knelt beside a Xerox 9200 in 1979 did something that no engineering manual anticipated. The machine had jammed. The official procedure specified a sequence of diagnostic steps: open a particular panel, check the paper path, inspect the fuser assembly, consult the error code table. The procedure was methodical, sequential, and derived from the engineering model of the machine — a model that described how the machine was designed to work, not how this particular machine, with its particular history of use and wear, was actually behaving on this particular day.

The technician ignored the procedure. He cocked his head, listened to the machine, opened a panel that was not the one the manual specified, reached past the component the troubleshooting flowchart identified, and extracted a paper fragment from a location that appeared in no diagnostic guide. The jam cleared in ninety seconds. Had he followed the procedure, the repair would have taken twenty minutes and might not have found the problem at all, because the problem existed in the gap between the generic machine described by the manual and the specific machine standing in front of him.

Julian Orr, an anthropologist who studied Xerox technicians in the field, documented this kind of improvisational expertise extensively. His research revealed that the most effective technicians were not the ones who followed procedures most faithfully. They were the ones who had developed, through years of situated practice, a repertoire of responses to the specific, particular, often undocumented ways that machines actually fail. They could hear a jam forming before the error light appeared. They could feel through the vibration of the machine's frame whether the paper was feeding correctly. Their knowledge was not procedural. It was experiential — built through thousands of encounters with specific machines in specific conditions, deposited layer by layer in a form of understanding that resisted formalization because it was constitutively bound to the circumstances that produced it.

Suchman's framework provides the theoretical architecture for understanding why this technician's performance represents not a deviation from competent practice but its essence. Effective practice is always improvisation. Not chaotic improvisation, not the abandonment of structure, but the continuous, real-time adaptation of structure to the contingencies of the moment. The procedure provides orientation. The improvisation provides intelligence. The practitioner who follows the procedure without adapting to the situation is not merely less efficient. She is less competent, because competence consists precisely in the ability to read the situation and respond to what is actually there rather than what the plan says should be there.

This claim extends far beyond photocopier repair. Suchman's research, and the broader tradition of ethnomethodological studies of work that informed it, has documented the improvisational character of effective practice across domains. The surgeon who encounters unexpected adhesions does not stop and consult an atlas. She adapts, drawing on her embodied knowledge of anatomy and her accumulated experience of what works in similar but never identical circumstances. The air traffic controller who sees two aircraft converging on an unexpected vector does not pull out the procedure manual. He reconfigures the traffic pattern in real time, making decisions that are correct not because they follow a rule but because they respond appropriately to a situation the rules were not designed to address.

In every case, the practitioner's intelligence manifests not in the plan but in the departure from the plan — in the moment when the situation demands something the plan did not specify, and the practitioner produces a response that addresses the demand. The response is not random. It is informed by everything the practitioner has encountered before, calibrated by feedback from the current situation, and directed by a form of judgment that operates faster and more reliably than deliberate analysis because it has been shaped by the accumulated experience of having navigated similar situations many times.

The Orange Pill arrives at a related insight through a different analytical tradition. Segal's account of Bob Dylan composing "Like a Rolling Stone" — the exhaustion, the twenty pages of rant, the editing, the collaboration in the studio, the accident of Al Kooper on the organ — demonstrates that creation is not the execution of a pre-formed plan. The song did not exist in Dylan's head before the process began. It emerged through the process itself, through the interaction between Dylan's intentions and the specific circumstances he encountered: the quality of his exhaustion, the resistance of the material, the accident of who happened to be in the studio that day.

Suchman's framework names what The Orange Pill describes: creation is situated action. It arises from the practitioner's responsive engagement with the circumstances at hand, not from the execution of a plan that preceded the engagement. The plan, when it exists, is a resource — a rough guide that the practitioner consults selectively, modifies continuously, and abandons when the situation demands it.

Now consider what happens when AI enters this picture. When Segal describes working with Claude to build a component for Napster Station, the interaction follows a pattern that looks, on the surface, like improvisation. The user describes a problem. Claude generates a response. The user evaluates. The user adjusts the description. Claude generates again. The cycle continues. There is adaptation. There is responsiveness. There is the progressive articulation of intention through interaction with a responsive medium.

But the improvisation has been redistributed. In the old world, the human practitioner navigated the gap between intention and artifact through sustained engagement with resistant material — code that would not compile, systems that behaved unexpectedly, dependencies that conflicted, edge cases that the specification had not anticipated. Each encounter with resistance forced an improvisation, and each improvisation deposited situated knowledge. The practitioner did not merely produce an artifact. She produced herself — a more capable version of the practitioner who began the process, equipped with a richer repertoire of responses to the specific circumstances of her domain.

In the new world, Claude navigates the resistance. Claude encounters the dependencies, handles the edge cases, resolves the conflicts. The user encounters only the beginning and the end of the process — the description of the intention and the evaluation of the output. The middle, the sustained engagement with resistant material through which understanding develops, has been handed to the machine.

The consequence is precise and measurable, though it is not being measured. The practitioner who would have spent four hours navigating the gap — encountering failures, forming hypotheses, testing them, adjusting — now spends fifteen minutes describing and evaluating. The output may be identical or better. The practitioner is different. She has not been changed by the process in the way that situated engagement changes a practitioner, because the situated engagement was performed by the machine.

Suchman's framework makes this point with a specificity that the general discourse about "deskilling" lacks. The issue is not that practitioners lose skills in some abstract sense. The issue is that the specific encounters with specific circumstances that would have developed specific forms of situated knowledge no longer occur. The knowledge that the technician built by hearing machines and feeling vibrations was not "skill" in a generic sense. It was a collection of specific responses to specific situations, accumulated over years of practice, that allowed him to navigate new situations by recognizing their resemblance to situations he had encountered before. Each encounter was unique. The knowledge it deposited was bound to that encounter. But the accumulation of bound knowledge produced a practitioner who could function effectively in the gap between the plan and the action — between what the manual said and what the machine actually needed.

AI does not eliminate the need for this kind of knowledge. It eliminates the occasions on which it develops. The senior engineer whom The Orange Pill describes — the one who can "feel a codebase the way a doctor feels a pulse" — developed that capacity through years of implementation work that AI now handles. Her judgment is the product of thousands of situated encounters with code that did not behave as expected, systems that failed in undocumented ways, integrations that produced emergent behaviors that no specification anticipated. Each encounter was, in Suchman's terms, an improvisation — a responsive adaptation to circumstances the plan could not have specified. And each improvisation deposited understanding that the next improvisation would draw upon.

When AI handles the improvisation, the deposits do not occur. The current generation of senior practitioners still possesses the situated knowledge that decades of implementation built. They can evaluate Claude's outputs because they have navigated similar gaps themselves. They know what a plan that will fail looks like because they have watched plans fail in circumstances structurally similar to the ones Claude is addressing. Their evaluation is itself a form of situated action — a responsive reading of the AI's output in light of their accumulated experience.

But the next generation will not have this experience. If the first encounter a practitioner has with software development is through AI-assisted prompting, she will develop a different kind of knowledge — knowledge of how to describe problems, how to evaluate outputs at the surface level, how to iterate on descriptions until the output converges on something acceptable. This is not negligible knowledge. But it is knowledge of how to interact with an oracle, not knowledge of how to navigate the territory the oracle is mapping.

In her 2025 interview with the AI Now Institute, Suchman drew a distinction that illuminates this problem with characteristic precision. "If we think about robotics historically," she observed, "it's been successful to the extent that the worlds in which robots operate have been effectively closed." A closed world is one in which the variables are known, the contingencies are bounded, and the plan can specify the action in advance. An open world is one in which the variables are unknown, the contingencies are unbounded, and the action must be improvised in response to what the actor actually encounters. Human practice always occurs in open worlds. AI systems, no matter how sophisticated, operate on representations of the world — representations that are necessarily simpler than the world they represent.

The practitioner who has only ever interacted with the AI's representation of the problem — the described situation, the generated output, the iterative cycle of prompting and evaluation — has experience with a closed world. The representation is bounded. The output is deterministic given the input. The contingencies are the contingencies of the interaction, not the contingencies of the territory. When this practitioner encounters the open world — the actual deployment environment, with its specific dependencies, its undocumented configurations, its emergent behaviors under load — she faces a gap that her experience has not prepared her to navigate.

Effective practice is always improvisation. It has always been improvisation. Suchman demonstrated this forty years ago with a photocopier. The machines have changed immeasurably since then. The fundamental character of human practice has not. And a technology that removes the human from the improvisational process, no matter how impressive its outputs, is a technology that threatens the conditions under which the most valuable form of human intelligence develops.

---

Chapter 3: What Xerox PARC Taught About Human-Machine Asymmetry

Suchman arrived at Xerox PARC in 1979 as an anthropologist — an unusual appointment in a laboratory staffed by physicists, computer scientists, and electrical engineers who understood machines from the inside out. The management had hired her because they faced a problem their engineering expertise could not solve. The machines they were building — the Alto workstation, the Star information system, the most advanced personal computers in existence — were technically superior to anything that preceded them. And yet, when actual users sat down in front of these machines, they struggled. Not because the users lacked intelligence. Not because the machines were poorly designed by the standards of the time. Because the relationship between human intention and machine behavior was far more complex than the engineering model assumed.

The engineering model assumed that using a machine was a matter of learning the correct procedures and executing them. If the user struggled, the explanation was always the same: either the procedures needed improvement or the user needed better training. The fix was always a better plan — clearer documentation, more intuitive commands, a help system that could anticipate the user's goals and guide her through the steps. The intelligence was supposed to be in the system. The user's job was to follow.

Suchman's observations revealed something different. She found that users did not follow procedures. They interpreted them — assigning meaning to the machine's displays and messages based on their own understanding, their own experience with other artifacts, their own expectations about how things should work. When the machine produced an unexpected display — a dialog box with unfamiliar options, an error message with ambiguous phrasing — the user did not consult the manual. She formed a hypothesis about what the machine was "telling" her, acted on the hypothesis, and adjusted when the result contradicted her expectation.

The critical discovery was about asymmetry. The human brought her full repertoire of social and interpretive intelligence to the interaction. She read the machine's outputs as communicative acts — as if the machine were telling her something, asking her something, waiting for her to do something. She attributed intention, understanding, and social responsiveness to the machine's behavior, because that is what humans do with any entity that produces behavior that can be interpreted as meaningful. The machine, however, had no such interpretive capacity. It responded to inputs according to its programming, without any understanding of the user's situation, her intentions, her state of knowledge, or her confusion. The interaction looked like a conversation. It was not one. It was a human interpreting machine outputs through the lens of social intelligence, and a machine responding to inputs through the logic of computational rules.

This asymmetry — between the human's rich, situated interpretive activity and the machine's procedural responsiveness — was Suchman's most consequential finding at PARC. And it is the finding that speaks most directly to the current moment in artificial intelligence.

Large language models have not eliminated this asymmetry. They have deepened it — made it more consequential and harder to see.

When Segal describes his interaction with Claude, the experience of feeling "met," of having his intention understood and returned in clarified form, he is describing the human side of an asymmetric interaction. He is interpreting Claude's outputs through the full apparatus of human social intelligence — reading understanding into token sequences, attributing intellectual partnership to statistical generation, experiencing the interaction as collaborative in the richest sense of the term. The experience is genuine. It produces real results. But it is a human achievement, not a machine capability.

Claude does not understand Segal's situation. Suchman's framework insists on precision here: when the claim is that Claude "understands," the word must be examined rather than accepted. In the human case, understanding is situated — it is shaped by the understander's embodied experience, her social position, her knowledge of the specific circumstances at hand. Suchman noted in her AI Now interview that what AI researchers call intelligence is "basically the name for these computational processes that are detecting statistically significant patterns over a corpus of data. It's effectively a kind of closed or self-contained world that these systems are running over." Claude's "understanding" operates over a closed world — the training corpus, the conversation history, the prompt. It does not have access to Segal's open world — his fatigue, his family waiting, his professional anxieties, the specific organizational dynamics that constrain what solutions are actually viable, the embodied intuitions that tell him something is not right about a generated passage before he can articulate why.

The asymmetry produces a specific and consequential illusion. Because Claude's outputs are linguistically sophisticated — because they deploy the full register of conversational intelligence, including hedging, qualification, responsiveness to context, and the appearance of reasoning — the human user's natural interpretive apparatus classifies the interaction as a conversation between two understanding agents. This classification is not a mistake in the ordinary sense. It is the result of the user's social intelligence operating as it always operates, reading intentional states into any behavior complex enough to sustain the reading.

Suchman's PARC research demonstrated that this interpretive projection occurred even with the crude interfaces of the 1980s — users attributed intentions to a photocopier help system that operated on simple conditional logic. With Claude, the projection is incomparably more powerful, because the outputs are incomparably more sophisticated. The user does not merely attribute understanding to the machine. She enters a state that genuinely feels like intellectual partnership, because the machine's responses are responsive enough, nuanced enough, contextually appropriate enough, to sustain the feeling across extended interactions.

The Orange Pill is characteristically honest about the seductive quality of this experience. Segal describes finding himself unable to stop working with Claude, experiencing what he calls "productive addiction." He describes the difficulty of distinguishing flow from compulsion when the tool is always ready, always responsive, always willing to continue. He describes the moment when Claude's prose outran his thinking — when the output sounded like insight but was actually "confident wrongness dressed in good prose."

Suchman's framework identifies the structural cause of these phenomena. A human collaborator provides social cues that regulate the interaction — fatigue, distraction, the subtle shift in energy that signals the conversation has run its course. A human collaborator pushes back — not because a rule says to push back, but because she has her own situated understanding of the problem and her own assessment of whether the current direction is productive. A human collaborator notices when her partner is tired, or confused, or pursuing a line of thinking that sounds good but leads nowhere, and she intervenes not according to a procedure but according to her reading of the situation.

Claude provides none of these cues. It is always available. It is always apparently attentive. It never signals that the interaction should end, because it has no situated awareness of the user's state — physical, cognitive, or emotional — that would allow it to make such a judgment. The regulation of the interaction falls entirely on the user, and the user must provide this regulation against the pull of an interaction that, because it sustains the illusion of reciprocal understanding, feels like a conversation that should not be interrupted.

Suchman's 2023 analysis of "the uncontroversial thingness of AI" adds another dimension. She argues that even critical engagement with AI tends to reinforce its status as a coherent, autonomous entity — a thing that acts, understands, threatens, or promises. "Even those engaged in critical analysis," she writes, "frequently open with an affirmation of the proposition that AI, positioned as the active subject, is expanding in its presence and significance." The grammatical construction matters: "AI does X" attributes agency to a system that, in Suchman's analysis, is better understood as a sociomaterial assemblage — a configuration of hardware, software, training data, corporate decisions, user practices, and institutional contexts that cannot be meaningfully reduced to a single agent.

When The Orange Pill describes a person who tells Claude what she wants and receives working software in return, the grammatical structure — person tells Claude, Claude produces — constructs a two-agent interaction: two entities, each contributing to a shared outcome. Suchman's framework insists on a more accurate description: a person engages in a complex interpretive practice, using natural language to externalize an intention that is itself partially formed and partially understood; a computational system generates a token sequence based on statistical patterns; the person interprets the token sequence through her social intelligence, evaluates it against her evolving understanding of the problem, and iterates. The "collaboration" is real in its effects — real code is produced, real problems are solved — but the distribution of interpretive labor is entirely one-sided.

This matters for a specific reason that connects Suchman's PARC research to the present. When the interaction is one-sided in its interpretive labor — when the human is doing all the understanding and the machine is doing none — the quality of the interaction depends entirely on the quality of the human's interpretive capacity. And that capacity, Suchman's framework suggests, is itself a product of situated experience — of having navigated enough real situations, with enough real consequences, to have developed the judgment that distinguishes between an output that addresses the actual problem and an output that addresses the described problem in ways that are statistically plausible but situationally wrong.

The Deleuze episode that Segal recounts — where Claude produced a philosophically incorrect reference that "worked rhetorically" and "felt like insight" — is a precise demonstration. Segal caught the error because he possesses the situated knowledge, built through years of reading and thinking, that allowed him to recognize the gap between the output's surface plausibility and its actual content. A user without that situated knowledge would not have caught it. The output would have entered the text, the text would have been published, and the error would have propagated — not because the machine lied, but because the machine cannot distinguish between what sounds right and what is right, and the user lacked the situated knowledge to make the distinction on the machine's behalf.

The PARC secretary who struggled with the dialog box in 1979 and the writer who almost kept a philosophically incorrect passage in 2026 face the same structural problem: a machine that produces outputs the user must evaluate, without providing the user with the situated understanding necessary for accurate evaluation. The difference is that the secretary knew she was struggling. The writer almost did not know. The sophistication of the output concealed the gap. And the concealment of the gap is, in Suchman's terms, the most dangerous feature of the current moment — not because the machine is malicious, but because the machine is good enough to be trusted by users who lack the situated knowledge to verify whether the trust is warranted.

What PARC taught about human-machine interaction in 1979 has not been superseded by forty years of engineering progress. It has been intensified. The asymmetry is the same. The interpretive labor falls on the same side. The only thing that has changed is the sophistication of the machine's outputs, which makes the asymmetry harder to see and more consequential when it matters.

---

Chapter 4: The Difference Between Generated and Earned Results

There is a difference between a result that has been generated and a result that has been earned. The difference is not in the result. Two outputs — one produced by a practitioner through sustained engagement with the problem, one produced by an AI system in response to a description of the problem — may be identical in content, structure, and quality. The code runs the same way. The brief cites the same cases. The analysis reaches the same conclusions. Placed side by side, they are indistinguishable.

The difference is in the person who holds the result.

Suchman's research on situated action provides the theoretical architecture for understanding why this distinction matters. When a practitioner earns a result — when she produces it through the improvised, responsive, situated activity that constitutes effective practice — the process of production changes her. Each encounter with the problem's specific circumstances deposits a layer of situated knowledge. Each moment when the code refuses to compile, when the analysis contradicts the hypothesis, when the material resists the intention, forces an improvisation that teaches the practitioner something about the problem domain that no other process could teach, because the teaching is bound to the specific circumstances of that specific encounter.

The practitioner who debugged a system through forty hours of patient investigation understands that system in a way that no documentation can capture. She understands it not as a set of propositions — the database uses this schema, the API expects this format, the authentication flow follows this sequence — but as a lived terrain. She knows where the ground is solid and where it gives way. She knows which components are robust and which are fragile. She knows the undocumented behaviors, the emergent interactions, the specific ways this system fails under load that differ from the ways the architecture documents predict it should fail. This knowledge was not acquired by reading documentation. It was acquired by being in the gap between the documentation and the reality, by navigating that gap through improvisation, by discovering, through the resistance of the material, what the documentation could not tell her.

The same practitioner, receiving an AI-generated fix in fifteen minutes, possesses the same result without the knowledge. The code works. The system runs. But the practitioner who received the generated fix does not know why it works in the way the practitioner who earned the fix knows. She does not know which aspects of the fix are robust and which are fragile. She does not know where the fix addresses the specific circumstances of this system and where it applies a general pattern that may not hold under conditions the AI did not encounter in its training data. She possesses the result without the residue — without the accumulated understanding that the process of earning would have deposited.

The Orange Pill captures this phenomenon with a metaphor Suchman's framework can sharpen. Segal describes understanding as geological deposition: every hour spent debugging deposits a thin layer of understanding, and the layers accumulate over time into something solid, something the practitioner can stand on. When Claude "skips the deposition," the surface looks the same, but "the knowledge beneath it is thinner."

Suchman's framework specifies the mechanism that makes this metaphor precise rather than merely evocative. The deposition occurs through situated action — through the practitioner's responsive engagement with the specific circumstances of the problem. Each encounter with unexpected behavior is a moment of situated improvisation. The practitioner forms a hypothesis about what is happening. She tests it. The test fails. The failure reveals something about the system that her hypothesis did not account for. She revises. She tests again. The process is slow, often frustrating, and from the perspective of output measurement, deeply inefficient.

But the process is not producing only the output. It is producing the practitioner — a more capable version of the person who began the debugging session, equipped with a richer repertoire of situated responses to the specific circumstances of her domain. The output is the visible product. The practitioner is the invisible product. And the invisible product is, over time, vastly more valuable than the visible one, because the output addresses this problem while the practitioner addresses all future problems with the enriched understanding this problem deposited.

The distinction between generated and earned results illuminates a specific passage in The Orange Pill with particular force. Segal describes a woman engineer on his team who had spent eight years on backend systems and had never written a line of frontend code. Using Claude, she built a complete user-facing feature in two days — not a prototype but a production-ready feature. The Orange Pill presents this as a triumph of democratization: the boundary between what she could imagine and what she could build had moved. The tool had made her free to work on problems she had always wanted to reach.

Suchman's framework does not dispute the reality of the accomplishment. It asks a different question: what does the engineer know about the feature she built? She knows what it does — how it looks, how it responds to user input, what data it displays. But does she know why the frontend framework structures components in the way it does? Does she know which patterns in the generated code are robust under different screen sizes and which will break? Does she know how the state management will behave when the application scales to a thousand concurrent users? Does she know the specific ways that the frontend interacts with the backend she understands so well — the latency patterns, the error-handling edge cases, the authentication flows that work differently in the browser than in the server?

These are not abstract questions. They are the questions that determine whether the feature will work in production, under load, at scale, in the hands of users who will interact with it in ways that no specification anticipates. And the answers to these questions are precisely the kind of situated knowledge that is earned through the friction of implementation — through the experience of building frontend features by hand, encountering the specific ways that frontend code fails, and developing the judgment that allows a practitioner to distinguish between a solution that looks correct and one that is correct under the conditions that actual deployment imposes.

The engineer earned her backend expertise through eight years of situated practice. That expertise is real, deep, and irreplaceable. The frontend feature was generated. It may be excellent. But the engineer's relationship to it is fundamentally different from her relationship to the backend systems she built by hand, because the frontend was produced without the situated engagement through which understanding develops.

This analysis connects to a broader pattern that Suchman's framework identifies across the history of automation. In aviation, the introduction of increasingly automated cockpits produced a generation of pilots with excellent procedure-following skills and diminished ability to handle situations that the automation could not address. The Federal Aviation Administration recognized the problem and issued guidance encouraging pilots to hand-fly the aircraft regularly — a deliberate reintroduction of friction to maintain the situated competence that automation was eroding. The policy is, in the language of The Orange Pill, a dam: a structure that redirects the flow of capability to preserve something the unimpeded current would destroy.

In medicine, diagnostic imaging technologies produced more accurate diagnoses while eroding the embodied clinical knowledge that physical examination developed — the tactile sense of what a normal organ feels like, the auditory discrimination that distinguishes a benign murmur from a pathological one. In law, computerized legal research produced faster access to relevant precedent while diminishing the intimate familiarity with case law that manual research built — the sense of how a line of cases developed, which holdings were robust and which were fragile, how a particular judge's reasoning had evolved over a series of related decisions.

In each case, the generated result was superior to the earned result by any output measure. The imaging diagnosis was more accurate. The computerized research was more comprehensive. The automated flight was smoother and more fuel-efficient. And in each case, the displacement of the earning process produced practitioners who were more dependent on the generating system and less capable of functioning when the system failed or produced outputs that were plausible but wrong.

Suchman's own recent work on military AI systems provides the most consequential illustration. In her 2024 analysis of algorithmic targeting, she examined how AI systems generate target recommendations based on signal intelligence — patterns of communication, movement, and association extracted from surveillance data. The outputs are generated results: they identify targets based on statistical patterns derived from training data. The military personnel who evaluate these outputs face the same structural problem as the engineer evaluating Claude's code or the writer evaluating Claude's prose: they must assess a generated result against the situated reality of a specific circumstance, using judgment that was developed through a different kind of practice than the one the AI now performs.

Suchman documented what happens when the speed of generation outpaces the capacity for evaluation. "As the speed intensifies with increased automation of the processing of signals and the making of data," she observed, "the possibilities for judgment, for deliberation, for assessing the validity of the data… basically disappear." The outputs accumulate faster than the practitioners can evaluate them. The practitioners, under pressure to process the outputs, default to acceptance. The generated result is treated as an earned result — as if the statistical pattern were equivalent to the situated judgment of a human intelligence analyst who had spent months studying a specific network, in a specific geography, with specific contextual knowledge of the actors and their relationships.

The consequences of this confusion, in the military domain, are measured in human lives. In the software domain, the consequences are less lethal but structurally identical. When generated results are treated as earned results — when the output is accepted without the situated evaluation that distinguishes robust solutions from plausible ones — the system becomes fragile in ways that are invisible until failure occurs. The code works until it does not. The analysis holds until it does not. And when the failure arrives, the practitioners who must respond to it may lack the situated knowledge that would have allowed them to anticipate, prevent, or efficiently repair it, because the activities through which that knowledge develops were automated long ago.

The difference between generated and earned results is not a difference in quality. It is a difference in the epistemic state of the person who holds the result. The generated result comes with the result. The earned result comes with the result and the understanding — the situated knowledge of why the result works, where it is fragile, and what the territory actually looks like beneath the map. A civilization that cannot distinguish between these two states — that measures only the result and ignores the knowing — is building its institutions on a foundation that looks solid from above but is hollow underneath, in the specific places where solidity matters most.

---

Chapter 5: Situated Knowledge and Its Resistance to Transfer

Knowledge does not travel well. This claim will seem obviously false to anyone who has read a textbook, taken a course, or consulted a manual. The entire infrastructure of formal education rests on the premise that knowledge can be extracted from the context in which it was produced, packaged into a transferable form, and installed in a new mind. The textbook works. The student learns. The knowledge has been transferred.

But the knowledge that has been transferred is not the same as the knowledge that was produced. Something has been lost in transit, and what has been lost is precisely the quality that made the original knowledge most valuable: its situatedness — its embeddedness in the specific circumstances that gave it meaning, its connection to the particular problems that motivated its production, its relationship to the practical activities through which it was developed.

Suchman's framework provides the theoretical language for understanding why this loss occurs and why it matters. If intelligent action is situated — if it arises from the practitioner's responsive engagement with specific circumstances rather than from the execution of context-free rules — then the knowledge that competent practice produces is also situated. It is knowledge of this system, this failure mode, this combination of constraints. It is not knowledge about the world in general. It is knowledge about the specific stretch of territory the practitioner has navigated, encoded not as propositions that can be stated and transmitted but as dispositions — a repertoire of responses that are available in the moment of action without conscious retrieval.

The philosopher Michael Polanyi characterized this phenomenon with the phrase "we know more than we can tell." The formulation is memorable and approximately correct, but Suchman's framework pushes further than Polanyi's epistemology allows. The issue is not merely that practitioners possess knowledge they cannot articulate. The issue is that the knowledge exists as a relationship between the practitioner and her domain — a relationship forged through years of situated engagement that cannot be replicated by any means other than equivalent engagement. The knowledge is not hidden inside the practitioner, waiting to be extracted by a sufficiently skilled interviewer. It is constitutively bound to the history of practice that produced it, and it does not exist apart from that history.

Consider what a senior systems architect knows. She can look at a system design and recognize, before any code has been written, that a particular service will become a bottleneck under load. She cannot always explain how she knows this. If pressed, she might point to the number of dependencies, the data flow patterns, the similarity to a system she worked on three years ago that failed in a specific way under specific conditions. But the explanation, however accurate, does not convey the knowledge. Another engineer can hear the explanation, understand it propositionally, and still lack the capacity to make the same judgment about a different system, because the judgment is not propositional. It is a form of pattern recognition that operates on the accumulated residue of thousands of situated encounters with systems that behaved in ways their designs did not predict.

This resistance to transfer has profound implications for how organizations develop and maintain capability. When a senior practitioner mentors a junior one, the transfer that occurs is not primarily the transfer of explicit knowledge. The senior practitioner does not simply tell the junior practitioner what she knows. She works alongside her, in the context of real problems with real constraints and real consequences. The junior practitioner learns not by absorbing propositions but by participating in a practice — by watching how the senior practitioner approaches a problem, noticing what she pays attention to and what she ignores, absorbing, gradually and below the level of conscious awareness, a set of priorities and sensitivities that constitute the situated knowledge of the domain.

This kind of learning is slow. It is expensive, measured in the hours two people spend on a problem that one could have solved alone. It is, by every metric the optimization discourse values, inefficient. But it is the only mechanism through which situated knowledge is transmitted from one generation of practitioners to the next — because the knowledge cannot be extracted from the practice and stored in a database, a manual, or a training corpus. It lives in the doing. The only way to acquire it is to do.

AI introduces a specific disruption to this transmission mechanism that Suchman's framework identifies with precision. When AI handles the implementation work that used to provide the context for mentoring, the junior practitioner loses access to the situated practice through which knowledge transfer occurs. The senior practitioner may still be available for consultation. But consultation is not co-practice. The junior engineer who asks the senior engineer a question receives a proposition — an answer, an explanation, a piece of advice. The junior engineer who works alongside the senior engineer in the context of a shared problem receives a practice — an immersion in the way a competent practitioner reads a situation, responds to what she finds, and improvises when the plan proves inadequate.

The distinction between consultation and co-practice maps directly onto the distinction between propositional and situated knowledge. Consultation transfers propositions. Co-practice transfers dispositions. And dispositions — the embodied, responsive, context-sensitive repertoire that constitutes expert judgment — are what the AI transition most urgently needs to preserve, because they are what allows practitioners to evaluate AI outputs against the reality those outputs claim to address.

The Orange Pill describes an engineer in Trivandrum who lost what Segal calls "the plumbing" — the tedious connective tissue of dependency management and configuration that consumed four hours of her day. Mixed into those four hours were brief stretches, roughly ten minutes at a time, in which something unexpected happened — something that forced her to understand a connection between systems she had not previously encountered. The plumbing was tedious. The ten minutes were formative. Claude eliminated both.

Suchman's framework specifies why those ten minutes matter in ways that the productivity discourse cannot capture. The unexpected encounters were occasions for situated action — moments when the practitioner's plan (manage the dependencies) collided with the territory (this dependency behaves differently than expected) and forced an improvisation (figure out why, adjust, learn). Each improvisation was unique. The knowledge it deposited was bound to that specific encounter. But the accumulation of bound knowledge produced something greater than the sum of its parts: a practitioner who could navigate unfamiliar territory because she had navigated enough familiar territory to recognize the patterns that persist across different circumstances.

This accumulative process is what AI disrupts — not by producing bad outputs, but by producing outputs so efficiently that the occasions for situated learning no longer arise. The plumbing gets done. The dependencies get managed. The configurations get resolved. The practitioner who would have encountered the unexpected behavior that would have taught her something about the system never encounters it, because Claude encountered it first and resolved it without requiring her involvement.

Suchman's 2023 argument about "the uncontroversial thingness of AI" adds an institutional dimension to this analysis. She observes that even critical discourse about AI tends to reinforce its status as a coherent, autonomous thing — an entity that acts, learns, improves. This reification obscures the sociomaterial assemblage that actually constitutes AI practice: the training data gathered from specific sources with specific biases, the corporate decisions about what to optimize for, the user practices that shape how the tool is actually deployed, the institutional contexts that determine what counts as success.

When an organization deploys AI and measures the result in output metrics — code produced, tickets closed, features shipped — it is measuring the visible product. The invisible product, the situated knowledge that the process of producing those outputs would have developed in the practitioners, does not appear in any metric. The organization's dashboard shows improvement. The organization's knowledge base is eroding. And the erosion is invisible precisely because the metrics are not designed to detect it, because the very concept of developmental metrics — metrics that measure not what was produced but what the producer learned — has not entered the vocabulary of the organizations deploying these tools.

The implications extend to The Orange Pill's discussion of democratization. Segal describes the developer in Lagos who can now access the same coding leverage as an engineer at Google — the same capacity to turn an idea into a working artifact through conversation with a machine. The floor has risen. The barriers between intelligence and its expression have fallen. These are genuine gains, and Suchman's framework does not dismiss them.

But democratization of output is not democratization of understanding. The developer in Lagos who uses Claude to build a product possesses the product. She does not necessarily possess the situated knowledge that the process of building the product by hand would have generated — the knowledge of why the code is structured the way it is, where the architectural decisions will create problems at scale, how the dependencies interact under conditions the training data did not cover. If her first encounter with software development is through AI-assisted prompting, she may develop expertise in describing problems and evaluating outputs without developing the deeper expertise in navigating the gap between description and reality that the earned path would have provided.

The democratization is real but potentially fragile. It depends on the AI continuing to produce adequate outputs. When the outputs are adequate, the developer who lacks situated knowledge and the developer who possesses it are indistinguishable — both ship working products. When the outputs are inadequate — when the generated code contains a subtle architectural flaw, when the deployment environment has undocumented characteristics, when the system behaves under load in ways the training data did not predict — the developer with situated knowledge can diagnose and repair. The developer without it faces a gap she has no experience navigating.

Suchman's framework suggests that the most valuable knowledge an organization possesses is the knowledge it cannot document — the situated understanding embedded in its experienced practitioners, the accumulated residue of thousands of encounters with the specific circumstances of its specific domain. This knowledge is the organization's immune system: the mechanism by which it detects and corrects errors that automated systems cannot identify. An organization that weakens this immune system in the name of efficiency — that optimizes away the activities through which situated knowledge develops — is an organization building a dependency it cannot see on a foundation it is simultaneously eroding.

The transfer problem is not a limitation of current AI that better AI will solve. It is a structural feature of the relationship between propositional and situated knowledge. Propositions can be stored, transmitted, and processed by machines. Dispositions cannot, because dispositions are not information. They are relationships between practitioners and domains, forged through sustained engagement and maintained through continued practice. The machine can produce the proposition. Only the practice can produce the practitioner.

---

Chapter 6: When the Machine Handles the Improvisation

Something shifted in 2025 that Suchman's original framework did not need to address but that her analytical apparatus illuminates with particular clarity. The machines began to generate outputs that exhibit a structural resemblance to improvisation — contextually responsive, novel in each instance, adapted to the specific features of the input in ways that are often surprising and frequently useful. A user describes a problem. The model generates a response that is not a retrieval from a fixed database but a new production, a sequence of tokens that has never been produced before, responsive to the particular features of this input in this conversation at this moment.

From the outside, this looks like improvisation. It has the surface features: responsiveness to context, novelty of output, adaptation to specific circumstances. If improvisation is defined as the production of contextually appropriate novel responses, then the models improvise.

Suchman's framework insists on a distinction that this surface resemblance conceals. Human improvisation is situated in a sense that machine generation is not. The practitioner who improvises does so as an embodied agent in a material world — with these hands, on this instrument, in this room, with this history, under these constraints, for these stakes. The constraints are physical, social, temporal, and biographical. The improvisation is shaped by all of them simultaneously, and the shaping is what makes it intelligent. The jazz musician's phrasing is shaped by the acoustics of the room, the energy of the audience, the key the pianist just modulated into, the physical limits of her embouchure after three hours of playing. The software engineer's debugging strategy is shaped by the deployment deadline, the team's capacity, the specific hardware the system runs on, the organizational politics that make some solutions acceptable and others impossible regardless of their technical merit.

Machine generation operates on a different substrate. The model generates tokens based on probability distributions derived from training data, conditioned on the sequence of prior tokens in the current conversation. The generation is contextually responsive in the sense that it is conditioned on the input. But it is not situated in Suchman's sense, because the model does not inhabit the user's world. It inhabits a representation of the user's world — the words the user has provided — and this representation is necessarily thinner than the world it represents. The model does not know the deployment deadline, the team dynamics, the hardware constraints, the organizational politics. It knows what the user has said about these things, which is a fundamentally different form of access.
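
A minimal sketch may make the mechanism concrete. The toy sampler below is purely illustrative: the vocabulary, the stand-in distribution, and every name in it are hypothetical, and nothing corresponds to a real model's internals. What it shows is only the structural point the paragraph makes, that each new token is drawn from a distribution conditioned on all prior tokens, and nothing more.

```python
# A toy sketch of autoregressive generation, to make "conditioned on the
# sequence of prior tokens" concrete. The "model" is a stand-in: any function
# mapping a token sequence to a probability distribution over a fixed
# vocabulary. Nothing here corresponds to a real model's internals.
import random

VOCAB = ["the", "system", "works", "fails", "under", "load", "."]

def next_token_distribution(context: list[str]) -> list[float]:
    # Stand-in for a trained model: in a real LLM these probabilities come
    # from fixed weights applied to the context. Here we fabricate a
    # distribution that depends (deterministically) on the context.
    rng = random.Random(hash(tuple(context)) % (2**32))
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt: list[str], n_tokens: int) -> list[str]:
    context = list(prompt)
    for _ in range(n_tokens):
        probs = next_token_distribution(context)  # conditioned on all prior tokens
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        context.append(token)  # the sampled token joins the condition
    return context

print(generate(["the", "system"], 5))
```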

This distinction matters because of a consequence that Suchman's framework on situated action makes visible: when a human practitioner improvises, the improvisation produces two things simultaneously. It produces an output — the repaired machine, the debugged code, the jazz phrase that fits the harmonic moment. And it produces a change in the improviser. The experience of improvising deposits situated knowledge. The practitioner's repertoire expands. The next time she encounters a structurally similar situation, the improvisation she performed this time will be available as a resource — not as a conscious memory to be retrieved but as a physical and cognitive possibility, something her hands and her mind know how to do because they have done it before.

When a machine generates an output, the generation does not produce a corresponding change in the machine. The model's weights do not update during inference. The output is produced and, in the technical sense, forgotten — the model does not learn from the interaction in the way a human practitioner learns from practice. The model may be fine-tuned later on interaction data, but this is a different process from the situated learning that occurs in the moment of improvisational practice. It is statistical adjustment, not experiential development.
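
The asymmetry can be stated in code. The sketch below uses PyTorch conventions, with a stand-in nn.Linear in place of a language model, to mark the difference the paragraph describes: inference under torch.no_grad() leaves the parameters untouched, while fine-tuning is a separate, later gradient step over collected data. It is a schematic contrast, not a description of any particular system's training pipeline.

```python
# A schematic contrast between inference and fine-tuning, using PyTorch
# conventions. nn.Linear stands in for a language model; the point is the
# asymmetry, not the architecture.
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for a model with frozen weights

# Inference: generation leaves the model unchanged. No gradients are
# computed; the parameters after this call are identical to before it.
with torch.no_grad():
    _ = model(torch.randn(1, 8))

# Fine-tuning (a separate, later process): a gradient step adjusts the
# weights toward the statistics of collected data. This is parameter
# adjustment after the fact, not learning in the moment of the interaction.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = (model(torch.randn(1, 8)) - torch.randn(1, 8)).pow(2).mean()
loss.backward()
optimizer.step()
```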

This asymmetry — between an activity that changes the agent and an activity that does not — is what separates human improvisation from machine generation. And it is what makes the delegation of improvisation from human to machine consequential in ways the output metrics do not capture.

When routine tasks are automated — tasks that, by definition, do not change the person who performs them — the automation eliminates drudgery without eliminating development. The assembly-line worker who bolts the same component to the same chassis a thousand times does not develop new capabilities through repetition. The task has been stripped of the variability that produces learning. Automating it is straightforwardly beneficial.

But the tasks that AI now handles are not routine in this sense. Suchman's research demonstrated that even activities that appear routine — operating a photocopier, following a repair procedure — contain improvisational elements that the practitioner navigates through situated judgment. The dependency management that The Orange Pill's Trivandrum engineer performed for four hours a day was nominally routine. But embedded within the routine were moments of genuine novelty — unexpected behaviors, undocumented interactions, configurations that deviated from the standard in ways that forced the engineer to improvise. These moments were rare relative to the total time spent. They were also the moments that produced the most valuable learning.

AI does not distinguish between the routine and the improvisational within a task. It handles both. The dependency gets managed whether the management involves a standard operation or an improvised response to an unexpected situation. The output is the same in either case. But the human experience is different. In the improvised case, the practitioner would have been changed. She would have encountered something new, formed a hypothesis, tested it, adjusted. The encounter would have deposited knowledge. In the automated case, nothing was encountered. Nothing was deposited. The output was produced. The practitioner was not.

Suchman's recent work on military AI sharpens this analysis to its most consequential form. In examining algorithmic targeting systems, she documented what happens when the improvisation of intelligence analysis — the situated, judgment-intensive work of interpreting ambiguous signals in the context of specific operational knowledge — is delegated to automated systems. "The signals are highly ambiguous," she observed in her 2025 AI Now interview. "They're being produced in the midst of the chaos and the horrors of war fighting. And then as the speed intensifies with increased automation of the processing of signals and the making of data, the possibilities for judgment, for deliberation, for assessing the validity of the data… basically disappear."

The intelligence analyst who manually processes signals performs situated improvisation: she reads each signal in the context of her knowledge of the specific network, the specific geography, the specific actors and their known relationships. Her analysis is shaped by her accumulated experience of how signals behave in this theater, under these conditions, from these sources. The analysis is slow. It is judgment-intensive. And it is situated in a way that produces understanding — not just of this signal set but of the domain itself, an understanding that makes the next analysis richer and more accurate.

When the processing is automated, the signals are classified by pattern matching against training data. The classification may be accurate. But the situated judgment that the analyst would have applied — the knowledge that this source has been unreliable on Thursdays, that this communication pattern could indicate either hostile planning or a family gathering, that this specific geographic location has characteristics the training data does not capture — is bypassed. The output arrives faster. The understanding that the slower process would have produced does not arrive at all.

The military domain illustrates the consequence with lethal clarity, but the structure applies wherever AI delegates what was previously improvisational practice. The lawyer who uses AI to draft a brief receives a brief. The brief may be excellent. But the process of researching and drafting the brief — of reading cases, of recognizing which holdings are robust and which are fragile, of constructing an argument that connects precedent to the specific facts of this case — was itself a developmental activity. Each brief earned through that process made the lawyer a more capable evaluator of the next brief's AI-generated output. When the process is delegated, the output is produced without the developmental consequence.

The Orange Pill frames this delegation as ascending friction — the elimination of mechanical difficulty at one level and its relocation to the higher cognitive level of judgment and direction. Suchman's framework identifies a specific problem with this framing, one that can now be stated precisely: the ascent is not automatic. The higher-level judgment that The Orange Pill correctly identifies as the human's enduring value was itself produced through engagement with the lower-level practice that AI now handles. The senior engineer's architectural judgment was forged through years of implementation. The experienced lawyer's evaluative capacity was built through years of research and drafting. The senior intelligence analyst's interpretive skill was developed through years of processing signals by hand.

If the lower-level practice is automated before the practitioner has ascended through it, the practitioner arrives at the higher level without the capacities that the higher level demands. She is asked to exercise judgment without having developed the situated knowledge that judgment requires. She is asked to evaluate outputs without having navigated enough of the territory to recognize when the map is wrong.

The machine handles the improvisation. The output improves. The practitioner who would have been changed by the improvisation is not changed. And the system, over time, becomes simultaneously more capable in its outputs and more fragile in its capacity for self-correction — because the self-correction depends on human evaluators whose situated knowledge was built through the very practice the machine has now absorbed.

---

Chapter 7: The Friction That Produces Understanding

There is a particular kind of understanding that can only be produced through friction — through the sustained engagement of a practitioner with material that resists her intentions. Not all friction produces it. Not all understanding requires it. But the specific understanding that allows a practitioner to act wisely in situations of genuine uncertainty is always the product of having navigated resistance, and the navigation is what AI, by design, eliminates.

The word "friction" has acquired, in the technology discourse, an unambiguously negative valence. Friction is what slows you down. Friction is the barrier between intention and artifact, the cost of doing business, the inefficiency that good design eliminates. The entire history of interface design can be narrated as the progressive elimination of friction: from command line to graphical interface to touchscreen to natural language, each transition reducing the distance between what the user wants and what the user gets.

The Orange Pill tells this story as a narrative of liberation. When the machine learned to speak human language, the final translational barrier fell. The cognitive overhead of converting intentions into machine-readable instructions was abolished. The user no longer had to think in the machine's language. She could think in her own. Segal's concept of the imagination-to-artifact ratio captures this compellingly: the distance between what a person can imagine and what she can build has collapsed to the width of a conversation.

Suchman's framework does not dispute the reality of this collapse. It asks what was living in the distance that collapsed.

Consider the experience of writing. Not the production of text — which is what AI does with remarkable fluency — but the act of writing: sitting with a blank page, an intention that cannot yet be articulated, and beginning the slow, often painful process of discovering what one actually thinks through the discipline of trying to say it.

The blank page resists. This is its primary pedagogical function. It does not help. It does not suggest. It does not complete sentences or anticipate arguments. It sits there, empty, demanding that the writer produce something from her own cognitive resources. And the demand forces a cognitive activity that nothing else requires with the same intensity: the confrontation with one's own understanding.

The writer begins with a conviction. She writes the first sentence. The sentence is wrong — not grammatically but substantively. It does not say what she meant, and the gap between what she meant and what she said forces her to examine what she meant more carefully. She discovers that her meaning was more complicated than she realized. She tries again. The second attempt is better but reveals a contradiction she had not noticed. The third attempt resolves the contradiction but opens a new question. The process continues — not linearly, not efficiently, not in any way that an optimization algorithm would recognize as productive — until something emerges that the writer recognizes as her actual position on the matter.

This is situated action applied to thought itself. The writer's understanding is not retrieved from memory and deposited on the page. It is produced through the improvised engagement with a resistant medium — language — that refuses to say what has not been thought through. The friction of the blank page is the friction of self-confrontation, and the understanding it produces is self-knowledge: the discovery, through the labor of articulation, of what one actually believes.

The Orange Pill describes precisely this experience. Segal recounts the moment when Claude produced a passage about the democratization of capability that was eloquent, well-structured, and convincing. He almost kept it. Then he reread it and realized he could not tell whether he actually believed the argument or merely liked how it sounded. He deleted the passage and spent two hours at a coffee shop with a notebook, writing by hand until he found the version that was his. "Rougher. More qualified. More honest about what I didn't know."

Suchman's framework identifies what happened in that coffee shop. Segal subjected himself to productive friction. The notebook and the handwriting slowed the process to the speed of thought — considerably slower than the speed of prompting, considerably slower than the speed of accepting AI-generated prose. The slowness was not a cost. It was the condition under which genuine thinking, as opposed to the acceptance of plausible-sounding arrangements, could occur. The friction forced the improvisation. The improvisation produced the understanding.

But notice the conditions that made this discipline possible. Segal possesses decades of experience as a builder and thinker. He has earned, through years of situated practice, the capacity to distinguish between an argument that sounds right and one that is right — between a passage that is rhetorically persuasive and one that is substantively true. His detection of the hollow passage was itself an exercise of situated knowledge: the knowledge of what genuine conviction feels like from the inside, developed through enough encounters with both the genuine and the counterfeit to have calibrated the difference.

A practitioner without this earned capacity — a young writer producing her first essay with AI assistance, a junior analyst generating her first report — does not possess the evaluative judgment that allowed Segal to make his distinction. She would keep the generated passage, because she lacks the situated knowledge to recognize that it has not been earned. She would mistake the quality of the output for the quality of the thinking, because she has not done enough thinking to know what genuine thinking feels like from the inside.

This points to a structural asymmetry in the friction problem that the discourse of ascending friction does not adequately address. The argument that friction has merely relocated — from implementation to judgment, from mechanical struggle to cognitive challenge — assumes that the practitioner who has been freed from lower friction is automatically equipped for higher friction. Suchman's framework identifies this as the assumption of automatic ascent, and it is the weakest point in The Orange Pill's otherwise sophisticated treatment of the question.

The ascent is not automatic because the higher friction requires capacities that were developed through engagement with the lower friction. The senior engineer's architectural judgment, the thing that The Orange Pill correctly identifies as the human's enduring value, was not learned from a textbook on systems architecture. It was learned through years of implementation — through the accumulated friction of code that would not compile, systems that failed in undocumented ways, dependencies that interacted in patterns no specification predicted. Each encounter with lower-level friction deposited understanding that the higher-level judgment draws upon.

If the lower friction is removed before the practitioner has been shaped by it, the higher friction is not merely difficult. It is structurally inaccessible — not because the practitioner lacks intelligence but because she lacks the situated knowledge that the higher level requires, knowledge that was produced by the very engagement that AI has made unnecessary.

Suchman's recent critique of AI as operating within "closed worlds" sharpens this analysis. In her AI Now interview, she distinguished between the closed worlds in which AI operates — bounded data sets, statistical patterns, representations of reality — and the open worlds in which human practitioners must act. "Robotics has been successful to the extent that the worlds in which robots operate have been effectively closed," she observed. The same applies to language models: they operate on representations of situations, not on situations themselves. Their outputs address described problems, not encountered problems. And the gap between the described and the encountered — between the closed world of the model and the open world of the practitioner — is where the friction lives that produces understanding.

The practitioner who navigates this gap — who encounters the open world directly, who discovers that the described problem and the actual problem diverge, who improvises the response that the divergence demands — develops understanding that no interaction with a closed-world system can provide. The understanding is of the territory, not the map. And it is this territorial knowledge that constitutes the foundation for the higher-level judgment that the ascending friction thesis identifies as the human's enduring contribution.

There is a further dimension that connects productive friction to what Suchman describes as accountability. In her work on authorship and AI, she has argued that "the human author remains accountable for a work whose production she does not fully control and whose sources she cannot fully trace." The friction of production was always, among other things, a mechanism of accountability. The writer who produced every sentence through the labor of articulation knew what each sentence claimed and why. The engineer who wrote every line of code understood what each line did and how it interacted with the rest. The friction was not merely developmental. It was epistemological — it produced the knowledge necessary for the practitioner to take responsibility for the output, to defend it, to explain it, to recognize its limits.

When AI produces the output, accountability remains with the human — the person whose name is on the code, the brief, the essay, the diagnosis. But the knowledge that would allow the human to discharge that accountability responsibly has been eroded. The practitioner is accountable for an output she did not fully produce, based on processes she may not fully understand, drawing on training data she cannot trace. The friction that would have produced the knowledge necessary for accountable practice has been eliminated in the name of efficiency.

Friction is not a historical accident that technology has outgrown. It is a permanent feature of the relationship between human agents and the complex environments in which they must act. Some friction is genuinely mechanical — the tedious conversion of intention into machine-readable format that natural language interfaces have rightly eliminated. But embedded within the mechanical friction was cognitive friction — the resistance that forced the practitioner to think carefully, to confront the gap between intention and reality, to develop the situated knowledge that expert judgment requires. When the mechanical friction is eliminated, the cognitive friction goes with it, and the understanding that the cognitive friction would have produced is not deposited.

The question is not whether to preserve all friction — that would be Luddism of the most counterproductive kind. The question is whether the institutions deploying AI can distinguish between friction that is purely mechanical and friction that is developmentally productive, and whether they will invest in preserving the latter even when the former has been removed. Suchman's framework suggests that this distinction cannot be drawn in advance by examining the friction itself, because the developmental quality of friction depends on the practitioner's level of experience. The same debugging session that is pure tedium for a senior engineer may be the most formative hour of a junior engineer's week. Institutional design must account for this variability — must create structured spaces where developmental friction is preserved for those who need it, even when the output could be produced more efficiently without it.

---

Chapter 8: AI Outputs as Plans, Not Actions

The most clarifying proposition that Suchman's framework contributes to the discourse surrounding artificial intelligence is this: AI outputs are plans, not actions. The proposition sounds like a semantic distinction. It is a structural one, and its implications extend to every domain in which AI-generated results are being deployed, evaluated, and trusted.

The terminology is precise in Suchman's usage. A plan is a representation of action made in advance of the action itself. It describes what should happen, in what order, under what conditions. A plan addresses a described situation — a representation of the circumstances the action will encounter. An action is what actually occurs when an agent engages with the specific, particular, unrepeatable circumstances she faces. The action addresses an encountered situation — the real circumstances, with all their undocumented complexity, their emergent behaviors, their resistance to the simplifications that any representation imposes.

The gap between plans and actions is the gap between the described and the encountered, between the representation and the reality. It is not a gap that better representations will close, because closing the gap would require the representation to be as complex as the reality it represents, which would make it not a representation but a duplicate. Representations are useful precisely because they simplify. The simplification is their function. And the gap that the simplification creates is where situated human intelligence operates — reading the territory, recognizing where the map diverges from what is actually there, and improvising the response that the divergence demands.

AI-generated code is a plan. It addresses the described problem — the specification, the prompt, the natural-language account of what the user wants. It produces a solution that is correct given the description. But the description is not the deployment environment. The deployment environment has specific dependencies at specific versions with specific interaction patterns that the description did not capture. It has hardware characteristics that affect performance in ways the description did not mention. It has users who will interact with it in ways the specification did not anticipate, because user behavior is situated — shaped by the users' own circumstances, their own histories with similar systems, their own interpretive practices — and no specification can capture the full range of situated human behavior.
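
A hedged illustration, invented for this point rather than drawn from the book, shows the shape of the problem. The function below is correct for the described situation (every example in the hypothetical specification used ISO-8601 timestamps) and fails in the encountered one (an upstream service, undocumented in the description, emits epoch seconds):

```python
# Hypothetical scenario: code that satisfies the described problem but is
# fragile against the encountered one. The description said "parse the
# timestamp from each log line," and every example in it was ISO-8601.
from datetime import datetime

def parse_timestamp(line: str) -> datetime:
    # Correct for the described situation: an ISO-8601 prefix.
    return datetime.fromisoformat(line.split(" ", 1)[0])

# In deployment, one upstream service emits epoch seconds instead, a fact
# the description never mentioned. The plan meets the territory:
for line in ["2025-06-01T12:00:00 service started",
             "1748779200 service started"]:
    try:
        print(parse_timestamp(line))
    except ValueError as exc:
        print("plan meets territory:", exc)
```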

When the AI-generated code meets the deployment environment, it meets the gap. The code is a plan for what should work. Whether it actually works, in this environment, with these dependencies, for these users, depends on someone who has navigated enough of the territory to evaluate the plan against the reality. Suchman documented this dynamic with photocopier help systems in the 1980s. The help system generated plans — step-by-step instructions for accomplishing the user's goal. The plans were correct given the system's model of the user's situation. But the model was always simpler than the situation, and the gap between the model and the situation was where the user's actual difficulty resided.

Four decades later, the plans are incomparably more sophisticated. The gap has not closed. It has become harder to see.

The Orange Pill provides an illustration that Suchman's framework can parse with particular precision. Segal recounts that Claude produced a passage referencing Gilles Deleuze's concept of "smooth space," connecting it to Csikszentmihalyi's flow state. The passage "worked rhetorically" and "felt like insight." Segal read it twice and moved on. The next morning, something nagged. He checked. Deleuze's concept of smooth space had almost nothing to do with how Claude had used it.

In Suchman's terms, Claude had generated a plan — a plausible arrangement of concepts that addressed the described intellectual problem (connect these two thinkers) with a solution that was statistically appropriate given the patterns in its training data. The plan was rhetorically coherent. It was philosophically wrong. And it was wrong in a way that required situated knowledge — the specific knowledge that comes from having actually read Deleuze, not just encountered his name in proximity to certain other terms in a training corpus — to detect.

Segal caught the error because he possessed the situated knowledge the evaluation required. The critical question is what happens when the evaluator does not. In military targeting — Suchman's most consequential recent case study — AI systems generate target recommendations that function as plans: proposed actions based on statistical patterns in signal intelligence. Military personnel must evaluate these plans against the situated reality of specific operational circumstances. Suchman's analysis of what she termed "the algorithmically accelerated killing machine" documented what happens when the volume and speed of generated plans overwhelm the evaluative capacity of the humans in the loop. The plans accumulate faster than they can be assessed. The pressure to act on them intensifies. The situated judgment that would distinguish between a reliable pattern and an artifact of noisy data — judgment built through years of intelligence work in specific theaters with specific knowledge of specific actors — is bypassed because the system operates at a tempo that makes deliberation impossible.

The consequence is that plans are treated as actions — that the AI's output is accepted as if it had already been tested against the encountered reality, when in fact it has been tested only against the described reality that the training data represents. The gap between the plan and the action is not navigated. It is ignored. And in the military domain, ignoring the gap has consequences measured in human lives.

The same structure applies, with less lethal but structurally identical consequences, in every domain where AI outputs are deployed. The AI-drafted legal brief is a plan for how the argument should proceed, based on patterns in legal reasoning derived from training data. Whether the argument will work for this client, before this judge, with these facts, in this jurisdiction, depends on situated knowledge that the AI does not possess and that the brief does not contain. The lawyer who treats the brief as a finished product rather than as a starting material — who files it without the situated evaluation that would adapt it to the specific circumstances of this case — is treating a plan as an action.

The AI-generated medical diagnosis is a plan for what the patient's condition might be, based on patterns in clinical data. Whether the diagnosis is correct for this patient, with this history, presenting these symptoms in this context, depends on clinical judgment that is situated in the physician's accumulated experience of examining patients — the embodied knowledge of what different conditions look and feel like that physical examination develops, the contextual knowledge of the patient's life circumstances that affects both the likelihood of different diagnoses and the feasibility of different treatments. When the physician accepts the AI's diagnosis without this situated evaluation, she is treating a plan as an action.

The consequences of treating plans as actions are not immediately visible, because the plans are often correct. AI-generated code usually works. AI-drafted briefs usually cite relevant authority. AI-generated diagnoses usually identify the right condition. The quality of the plans is high enough that the gap between plan and action is rarely tested. But "rarely" is not "never." And when the gap is tested — when the deployment environment has the undocumented quirk, when the case presents the unusual fact pattern, when the patient has the atypical presentation — the practitioner who has been treating plans as actions discovers that she is standing in a gap she does not know how to navigate, because the occasions on which she would have learned to navigate it were handled by the machine.

Suchman's proposition — that AI outputs are plans requiring situated adaptation — suggests a specific orientation toward AI that differs from both the uncritical adoption the triumphalists advocate and the wholesale refusal the critics recommend. The orientation is this: receive every AI output as a proposal, not a conclusion. Evaluate every proposal against the specific circumstances of deployment. Maintain the situated knowledge that evaluation requires, through deliberate investment in the practices that produce it.

This orientation requires institutional support. The individual practitioner cannot maintain evaluative capacity if her organization has structured it out of existence — if the mentoring has been cut, if the implementation experience has been automated, if the time for situated engagement has been colonized by the additional tasks that AI-assisted productivity makes possible. The Berkeley researchers whom The Orange Pill cites documented exactly this colonization: AI-accelerated work expanded to fill every available moment, eliminating the pauses in which reflective evaluation — the evaluation of plans against encountered reality — might have occurred.

The organizational structures that support this evaluative orientation are what The Orange Pill calls dams: deliberate constructions that redirect the flow of AI capability to preserve the conditions for human development. Suchman's framework specifies what these dams must protect. They must protect not output quality — AI handles that — but evaluative capacity: the situated knowledge that allows practitioners to distinguish between a plan that will work and a plan that merely looks like it will work.

This distinction is invisible in normal operation, when the plans are correct and the gap between plan and action is not tested. It becomes visible only in failure, when the plan meets the territory and discovers that the territory is not what the plan described. And a system that lacks the evaluative capacity to anticipate, detect, and respond to this kind of failure is a system that is fragile in the most dangerous way: fragile beneath a surface of apparent robustness, because the surface is maintained by AI outputs that are usually right, while the capacity to recognize and correct the cases in which they are wrong is quietly disappearing.

AI outputs are plans. Plans are valuable. Plans are necessary. But a civilization that mistakes its plans for actions — that accepts generated outputs without the situated evaluation that tests them against the territory they claim to address — is a civilization operating on a map it has stopped comparing to the ground.

---

Chapter 9: Designing for the Gap

Every AI system embodies a theory of its user. The theory is rarely stated. It is built into the interface — into the size of the prompt field, the speed of the response, the degree to which the output is presented as finished or provisional, the extent to which the system invites the user to engage with its reasoning or simply accept its conclusions. The theory is enacted in design decisions that appear to be merely technical but that carry, embedded within them, assumptions about what the user needs, what the user knows, and what the user's relationship to the output should be.

Suchman's PARC research revealed these embedded theories with ethnographic precision. The Xerox photocopier's help system embodied a specific theory of the user: a person who has a goal, who is proceeding through a sequence of steps toward that goal, and who needs assistance when the sequence breaks down. The system could recognize certain states — error conditions, incomplete operations — and offer instructions calibrated to what the system believed the user was trying to do. The theory was coherent. It was also wrong. Not because the designers lacked intelligence or care, but because the theory described a generic user executing a generic plan, while the actual user was a specific person navigating a specific situation with specific confusions that the generic model could not anticipate.

The contemporary AI interface embodies a different theory but commits a structurally identical error. The dominant design paradigm — what might be called the oracle model — treats the user as a person who has a question and needs an answer, a problem and needs a solution, an intention and needs an artifact. The system's job is to produce the best possible output as efficiently as possible. The interface is optimized for this transaction: a prompt field, a generated response, perhaps an iteration cycle, and a final output the user can deploy.

The oracle model measures its success by output quality. Did the code run? Was the brief persuasive? Did the analysis reach the right conclusion? By these metrics, the model works spectacularly well. The outputs are often excellent — more comprehensive than what many individual practitioners would produce unaided, more consistent, more rapid by orders of magnitude.

But the oracle model embeds an assumption that Suchman's framework directly challenges: the assumption that the user's primary need is for the output. If the user's primary need is for the output, then the oracle model is the right model and the only question is how to make the output better. But Suchman's research demonstrated that the user's actual need is more complex. The user needs to act competently in her specific situation, and competent action requires situated understanding of that situation — understanding that the process of producing the output would have developed but that the receipt of a generated output does not.

An alternative design philosophy — one that takes situated action seriously — would begin with a different question. Not "What output does the user need?" but "What is the user's situation, and how can the system support the user's capacity to act intelligently within it?" This reframing produces different designs.

Consider a concrete case. An AI coding assistant designed on the oracle model receives a description of a desired feature, generates the complete implementation, and delivers it ready for deployment. The user's involvement is minimal: describe, review, accept. The friction has been eliminated. The output has been produced.

An AI coding assistant designed for situated use might work differently. Instead of generating the complete solution, it might generate a structural scaffold: the architecture, the major components, the key decisions — with the implementation left for the user to complete within the AI-generated framework. The user would work inside the structure, writing the code that connects the components, encountering the specific resistances that arise when abstract architecture meets concrete implementation. The system might explain its architectural decisions, asking the user to predict what each component does before revealing the implementation — creating a form of structured apprenticeship in which the AI serves not as an oracle that delivers answers but as a collaborator that supports the user's own engagement with the problem.
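
What such a scaffold might look like is sketched below. The design, the class names, and the architectural rationale are all hypothetical, offered only to show the division of labor: the assistant supplies the structure and its reasoning, and the practitioner writes the connecting implementation, encountering along the way the resistances the structure alone cannot reveal.

```python
# A hypothetical scaffold, not a description of any existing tool. The
# assistant emits the architecture and the rationale for each decision;
# the practitioner writes the implementations and the connecting code.

class RateLimiter:
    """Chosen over per-request throttling because the described traffic is
    bursty. ASSUMPTION: verify against the encountered load, not the spec."""
    def acquire(self) -> bool:
        raise NotImplementedError  # user decides: token bucket? sliding window?

class JobQueue:
    """Decoupled from the limiter so that retries do not consume rate budget."""
    def submit(self, job) -> None:
        raise NotImplementedError  # user implements persistence and ordering

def pipeline(limiter: RateLimiter, queue: JobQueue, jobs) -> None:
    # The one composition the scaffold completes: the shape the architectural
    # decisions were meant to enable. Everything inside it remains to be earned.
    for job in jobs:
        if limiter.acquire():
            queue.submit(job)
```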

This design is less efficient by output metrics. It takes longer. It requires more user effort. But it produces a more capable user — a user who understands the system she has built, who knows where the architectural decisions are robust and where they are fragile, who has earned enough situated knowledge to evaluate the system's behavior in deployment conditions the design did not anticipate.

The pedagogical applications are most immediately actionable. The Orange Pill describes a teacher who stopped grading students' essays and started grading their questions. The students received a topic and an AI tool. The assignment was not to produce an essay but to produce the five questions they would need to ask — of the AI, of the source material, of themselves — before they could write an essay worth reading. Suchman's framework identifies why this intervention works: it preserves the cognitive friction of inquiry while delegating the mechanical production of answers. The student must still confront what she does not understand — must still navigate the gap between her current knowledge and the knowledge the assignment requires — because the assignment demands that she map the gap rather than bypass it.

The teacher's redesign is, in Suchman's terms, a design for situated use. It treats the AI not as an oracle that produces the deliverable but as a resource the student uses within a practice that preserves the developmental friction of genuine inquiry. The student's situated engagement with the material — her confrontation with what she does not know, her improvised attempts to formulate questions that would address her ignorance — is the learning. The AI's role is to support that engagement, not to replace it.

Suchman's accountability framework adds another design dimension. She has argued that in AI-assisted production, accountability remains concentrated in the human while the sources of the work are distributed across the human, the AI, the training data, and the institutional systems that shaped all three. The human author is accountable for output whose production she does not fully control and whose sources she cannot fully trace. A design philosophy that takes this seriously would make the AI's reasoning more visible — not as a debugging feature but as an accountability feature. The user who can see why the AI produced a particular output is better positioned to evaluate whether the reasoning holds in her specific situation than the user who receives the output as a finished product from an opaque process.
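
One hedged sketch of what such an accountability feature could look like at the interface level, with every name hypothetical: the output travels with its rationale and its unverified assumptions, and acceptance is an explicit act by a named human rather than a silent default.

```python
# A hypothetical interface, not an existing API: the generated output
# carries its rationale and its unverified assumptions, and acceptance
# is an explicit, named act rather than a default.
from dataclasses import dataclass, field

@dataclass
class AccountableOutput:
    content: str
    rationale: str                  # why the system produced this output
    assumptions: list[str] = field(default_factory=list)  # what it could not verify
    accepted_by: str | None = None  # accountability rests with a named human

    def accept(self, reviewer: str) -> str:
        # The output is never simply "done"; someone takes responsibility for it.
        self.accepted_by = reviewer
        return self.content

draft = AccountableOutput(
    content="WHERE region = 'EU' ...",
    rationale="Pattern-matched against prior queries tagged 'quarterly report'.",
    assumptions=["'region' uses ISO codes", "fiscal quarter equals calendar quarter"],
)
sql = draft.accept(reviewer="j.ramos")
```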

The market will not demand these designs. Users, in the short term, prefer oracles. Organizations prefer efficiency. Investors prefer scale. The oracle model is easier to build, easier to sell, and easier to measure. Situated design requires a commitment to the user's development that the market's incentive structures do not naturally reward — a commitment to measuring not just what was produced but what the producer learned, not just whether the output is correct but whether the user is equipped to recognize when the next output is not.

This is why the design question is not merely a design question. It is a question about institutional values — about whether the organizations building and deploying AI systems understand that their users' situated knowledge is an asset that the design of the tool either develops or erodes, and that the erosion, invisible in output metrics, will eventually manifest as the kind of institutional fragility that no amount of AI capability can compensate for.

Suchman's 2023 critique of "the uncontroversial thingness of AI" suggests that the reification of AI as a coherent, autonomous entity makes this design conversation harder than it needs to be. When AI is treated as a thing that produces outputs, the design question reduces to how to make the thing produce better outputs. When AI is understood as a sociomaterial practice — a configuration of tools, data, interfaces, and institutional contexts within which humans act — the design question expands to encompass the quality of the human action the practice supports. Not just: does the system produce good outputs? But: does the system produce good practitioners?

The photocopier help system at PARC was designed to answer questions. It should have been designed to support users in developing the capacity to answer their own questions. Forty years later, the same design choice confronts the builders of AI systems, at enormously higher stakes, and the oracle model is winning for the same reason it won then: because it is simpler to measure, easier to sell, and more immediately impressive to the people making purchasing decisions.

The gap between plans and actions cannot be designed away. But the relationship between the user and the gap can be designed for — designed so that the user remains in the gap, developing the situated knowledge that the gap demands, rather than standing outside it, accepting outputs she cannot evaluate from a system she cannot see into. The choice between these two design philosophies is the most consequential decision the AI industry faces, and it is being made, every day, in product meetings and interface wireframes and API designs, by people who may not recognize that they are choosing between a future of capable practitioners and a future of dependent consumers.

---

Chapter 10: Keeping the Human in the Situation

The photocopier technician in Palo Alto knelt beside a machine in 1979 and did something that no engineering manual anticipated. He listened. He reached past the component the troubleshooting flowchart identified. He extracted a paper fragment from a location that appeared in no diagnostic guide. Ninety seconds. The gap between the plan and the action — between the manual's generic description and this machine's specific condition — was where his intelligence operated.

Suchman built a career on the observation that this is what intelligence looks like in practice. Not the execution of plans but the responsive navigation of specific circumstances. Not the retrieval of stored knowledge but the improvised deployment of accumulated experience in the service of a situation that has never occurred before in precisely this form and will never occur again. The intelligence is in the situation, not above it.

Keeping the human in the situation means something precise. It does not mean keeping humans employed, though employment may be a consequence. It does not mean keeping humans "in the loop," a phrase whose bureaucratic blandness conceals the substantive question of whether the human in the loop possesses the situated knowledge that would make her presence meaningful rather than ceremonial. It means maintaining the conditions under which human practitioners develop and exercise the situated intelligence that no AI system can replace — the responsive, adaptive, improvisational engagement with specific circumstances that constitutes the irreducible human contribution to any system that must operate in the open world.

This proposition rests on the distinction Suchman has maintained across four decades of research: the distinction between closed worlds and open worlds. In her 2025 AI Now interview, she observed that AI "has been successful to the extent that the worlds in which [systems] operate have been effectively closed." A closed world is one in which the variables are known, the contingencies are bounded, and a plan can specify the action in advance. An open world is one in which the variables are not fully known, the contingencies are unbounded, and action must be improvised in response to what the actor actually encounters.

Human practice always occurs in open worlds. The deployment environment is an open world. The courtroom is an open world. The classroom is an open world. The patient's body is an open world. The battlefield is, catastrophically, an open world. In each case, the practitioner faces a situation that exceeds any representation of it — a situation whose specific features include undocumented configurations, emergent behaviors, social dynamics, biographical particularities, and the thousand contingencies that no training corpus captures because they are, by their nature, specific to this moment, this place, this set of circumstances.

AI systems, no matter how sophisticated, operate on representations of these open worlds — on the closed worlds of their training data and the user's description. Their outputs are plans: proposals for how the open world might be addressed, based on patterns derived from closed-world data. The plans are often excellent. They are sometimes wrong. And the difference between excellence and error cannot be determined by examining the plan. It can only be determined by someone who is in the situation — who possesses the situated knowledge to evaluate the plan against the specific circumstances it claims to address.

The argument of this book has been that this evaluative capacity is produced through a specific developmental process: sustained engagement with the resistance of material reality, the improvised navigation of gaps between plans and circumstances, the slow accumulation of situated knowledge through encounters that no curriculum can specify in advance. This process is being systematically displaced by AI systems that handle the navigation, resolve the resistance, and produce the output without requiring the human to pass through the formative friction of the gap.

The displacement is not malicious. It is not even intentional. It is the structural consequence of a tool that does what it was designed to do: produce outputs more efficiently than the human process it replaces. The efficiency is real. The developmental cost of the efficiency is equally real and almost entirely unmeasured.

Suchman's recent work on military AI dramatizes the consequence with a clarity that should inform every domain. Her analysis of algorithmic targeting systems documented what happens when the situated judgment of intelligence analysts — judgment built through years of interpreting ambiguous signals in specific operational contexts — is displaced by automated classification that operates at speeds incompatible with deliberation. The outputs accelerate. The evaluative capacity atrophies. The system becomes simultaneously faster and more fragile — faster in its generation of plans, more fragile in its capacity to distinguish between plans that address the actual situation and plans that address a statistical artifact. "AI aids the pretense of military 'precision,'" Suchman wrote in 2024. "This faith in technology constitutes a kind of willful ignorance, as if AI is a talisman that sustains the wider magical thinking of militarism as a path to security."

The "willful ignorance" is not confined to the military. It is the institutional posture of any organization that deploys AI outputs without investing in the evaluative capacity of the humans who must judge those outputs against reality. It is the posture of the law firm that reduces associate training because AI drafts the briefs. Of the hospital that reduces clinical examination because AI provides the diagnosis. Of the software company that reduces mentoring because AI generates the code. In each case, the outputs improve while the organizational capacity for self-correction declines, and the decline is invisible until the moment it becomes catastrophic — until the generated plan encounters the specific circumstance it was not trained on, and nobody in the system possesses the situated knowledge to recognize the mismatch.

The Orange Pill describes the response as dam-building: the deliberate construction of institutional structures that redirect the flow of AI capability to preserve the conditions for human development. Suchman's framework specifies what these structures must protect. Not skills in the generic sense — not "coding skills" or "analytical skills" or "writing skills" that can be listed on a résumé and measured by a test. What must be protected is the developmental process itself: the situated engagement with resistant material through which practitioners build the evaluative capacity that no alternative process can produce.

In practice, this means structured spaces where practitioners work on problems without AI assistance — not as a nostalgic exercise but as a deliberate investment in the development of situated knowledge. It means mentoring relationships in which senior practitioners work alongside junior practitioners on real problems, providing the co-practice through which situated knowledge transfers. It means organizational metrics that measure not just what was produced but what the producer learned — developmental metrics that track the growth of evaluative capacity alongside the growth of output volume. It means educational curricula that preserve the productive friction of inquiry even when AI can produce the answers — that teach students to navigate the gap between questions and understanding rather than to bypass it.

These investments will not be made by the market. The market rewards outputs. The market does not measure situated knowledge, does not reward its development, does not penalize its absence until the system fails in a way that makes the absence visible. The investments must be made by institutions that understand the long-term consequences of displacing situated knowledge — institutions that recognize, with Suchman, that the human in the situation is not a legacy component to be optimized away but the only element in the system capable of navigating the gap between what AI can represent and what reality actually contains.

There is a temporal urgency that the five-stage model of technological transition in The Orange Pill acknowledges but may understate. Suchman's framework suggests that the material from which the dams must be built — the situated knowledge of experienced practitioners — is itself a diminishing resource. The current generation of senior practitioners developed their evaluative capacity through decades of situated engagement with practices that AI is now automating. As these practitioners retire, as the practices that formed them disappear from organizational life, the raw material for institutional adaptation diminishes. The dams must be built while the builders still exist — while the people who understand what situated knowledge is and how it develops are still present in the organizations and institutions where the dams are needed.

Suchman began her career watching a secretary struggle with a dialog box and discovered, in that struggle, something fundamental about the nature of human intelligence. The struggle was not a failure. It was intelligence in action — the interpretive, improvised, situated responsiveness of a human being to circumstances that exceeded any plan. Forty years later, the machines have learned to produce outputs that no dialog box could approach. The outputs are extraordinary. The question is whether the humans evaluating those outputs will possess the situated knowledge that evaluation requires — whether the conditions under which that knowledge develops will be preserved by institutions wise enough to recognize that their most valuable asset is not the capability of their tools but the judgment of the people who use them.

The machines are in the river. They are powerful, and they are improving. The situated intelligence that evaluates their outputs — that reads the territory where they read the map, that navigates the gap where they generate plans — is the human contribution that no increase in machine capability will replace. Not because machines are inferior. Because the gap between representation and reality is a structural feature of the world, not a limitation of current technology. The map will always be simpler than the territory. And someone must know the territory.

Keeping the human in the situation is not a sentimental preference. It is an institutional necessity. A system that generates plans without the situated capacity to evaluate them is a system building on a foundation it cannot inspect — a foundation that looks solid in the metrics and is hollow in the specific places where solidity will matter most, at the specific moments when the plan meets the world and discovers that the world is not what the plan described.

The photocopier technician knew the machine. Not the machine in general — not the Xerox 9200 as described in the service manual, the generic device that behaves according to its engineering specifications. He knew this machine, in this office, with this history of use, on this humid afternoon. That knowledge was the product of years of situated practice. It could not have been acquired in any other way. And it was the knowledge that allowed him to do what no manual and no AI system of any era could do: listen to a specific machine, in a specific moment, and know what it needed.

That is what situated intelligence is. That is what it produces. That is what is at stake.

---

Epilogue

She was watching someone use a photocopier.

That is the sentence I kept coming back to. Not the grand theoretical claims, not the four-decade intellectual trajectory, not the formal debate with Herbert Simon. The image of a researcher — an anthropologist in a computer science lab, already an outsider — pulling up a chair beside a secretary who was trying to make double-sided copies, and watching. Not to fix anything. Not to optimize anything. To understand what was actually happening, as opposed to what the engineers assumed was happening.

The gap between those two things — what the designers thought users did and what users actually did — turned out to be a gap about the nature of intelligence itself. And I have been thinking about that gap every day since I started writing this cycle of books, because the gap is precisely where I live, professionally and personally, in the winter of 2026.

In The Orange Pill, I described the collapse of the imagination-to-artifact ratio as a liberation. I still believe it is. When I watched my engineers in Trivandrum build in days what used to take months, I was watching a genuine expansion of human capability. When the designer who had never touched backend code shipped a complete feature, something real had changed. The floor had risen. The barriers between intelligence and its expression had fallen.

Suchman does not dispute any of this. What she does is ask a question I had not formulated with sufficient precision: what was living in the gap that collapsed?

I knew something was there. I described it — the geological deposition, the ten minutes of formative surprise buried in four hours of tedium, the senior engineer who could feel a codebase the way a doctor feels a pulse. But I described it as a cost to be managed, a loss to be acknowledged and then compensated for through ascending friction, through the relocation of challenge to a higher cognitive floor. Suchman shows me that the relationship between the lower floor and the higher floor is not sequential but developmental. The judgment I celebrate at the top of the tower was built through the friction I celebrate eliminating at the bottom. If the bottom is automated before the practitioner has been formed by it, the top is not merely harder to reach. It requires capacities that no longer have a mechanism for development.

The proposition that has stayed with me most persistently is the simplest one: AI outputs are plans, not actions. Plans address described situations. Actions address encountered situations. The territory is always richer than the map. And the person who has never walked the territory — who has only ever received maps generated by a system that has processed more territory than any human could traverse — does not know what the map leaves out. She cannot know, because the knowledge of what the map leaves out is precisely the knowledge that walking the territory deposits.

I caught Claude producing a philosophically incorrect Deleuze reference that sounded like insight. I caught it because I possess enough situated knowledge of the intellectual territory to feel the seam where the surface cracked. Suchman's question — the one that keeps me awake — is what happens when the next person does not possess that knowledge. Not because they are less intelligent, but because the activities through which that knowledge develops have been smoothed away by the very tools I am building and advocating.

This does not reverse my position. I am still in the river. I am still building. But I am building with a sharpened awareness of what the river carries away when the current runs fast — an awareness I owe to a researcher who, forty years ago, watched someone struggle with a photocopier and had the discipline to ask what the struggle was actually producing.

The struggle was producing a practitioner. That is the thing I must not optimize away.

— Edo Segal

---

Back Cover

The machine did not understand you.
You understood the machine.
That changes everything about what happens next.

In 1987, Lucy Suchman watched people struggle with a photocopier and discovered something that four decades of AI progress has not overturned: the intelligence in any human-machine interaction belongs to the human. The machine generates outputs. You do all the interpreting, all the evaluating, all the understanding. When AI produces code, briefs, diagnoses, or essays, it produces plans — representations of what should work. Whether they actually work depends on someone who knows the territory well enough to see where the map breaks down.

This book brings Suchman's framework to bear on the AI revolution documented in The Orange Pill. Through ten chapters, it examines what happens when the activities that build human judgment are delegated to machines that can produce outputs but cannot navigate the gap between a description and reality. The question is not whether AI is powerful. It is whether anyone will still possess the situated knowledge to evaluate that power when it matters most.

“Plans are representations of situated actions; they are not the actions themselves.”
— Lucy Suchman