By Edo Segal
The gap I could not close was the one between the meeting and the screen.
I noticed it in Trivandrum, during the training I describe in *The Orange Pill*. In the room, my engineers debated approaches, raised objections, challenged each other's interpretations. Fifteen minutes later, at their desks with Claude open, they moved so fast that the careful discussion was already irrelevant. The prototype had been built. The debate had been overtaken by the artifact. What the room had held open, the screen had closed.
I celebrated that speed. I called it a twenty-fold productivity multiplier. I wrote about it with the energy of a builder who had just witnessed something extraordinary. And I was not wrong — the capability expansion was real, the democratization was real, the compression of the imagination-to-artifact ratio was real.
But something was also being lost in the gap between the conversation and the code. Something I could feel but could not name.
Karl Weick gave me the name. He called it sensemaking — the ongoing, messy, fundamentally social process through which organizations figure out what they are doing and why. Not decision-making, which assumes you already know what the options are. Sensemaking, which is what happens before that — when the situation is ambiguous, when multiple interpretations are plausible, when nobody is sure what the question even is.
What Weick spent his career demonstrating is that this ambiguous, uncomfortable, inefficient phase is not a problem to be solved. It is the phase where organizational intelligence actually lives. The debate that feels like it is slowing you down is the mechanism through which flawed interpretations get caught before they become flawed products. The friction between perspectives is not waste. It is the immune system of organizational thought.
AI compresses that phase almost to nothing. The prototype arrives before the debate concludes. The artifact generates its own momentum. And the alternative interpretation — the one that would have emerged on the second day of argument, the one that nobody had articulated yet — never gets built, never gets tested, never generates the evidence that would have revealed what the first interpretation missed.
This is not an argument against speed. It is an argument for understanding what speed costs when it outruns the interpretive process that should govern it. Weick's framework does not tell you to slow down. It tells you what you are skipping when you do not — and why the things you skip might be the things that matter most.
The river does not wait for you to understand it. But the dam you build in ignorance will not hold.
— Edo Segal × Opus 4.6
Karl Weick (1936–present) is an American organizational theorist and psychologist widely regarded as one of the most influential thinkers in the history of organization studies. Born in Warsaw, Indiana, he earned his PhD in psychology from Ohio State University and spent the majority of his career at the University of Michigan's Ross School of Business, where he is the Rensis Likert Distinguished University Professor of Organizational Behavior and Psychology, Emeritus. Weick's foundational work, *The Social Psychology of Organizing* (1969, revised 1979), reframed organizations not as static structures but as ongoing processes of interpretation and action. He developed the concept of sensemaking — the process by which people construct plausible interpretations of ambiguous situations — into a comprehensive theory of organizational cognition, most fully articulated in *Sensemaking in Organizations* (1995). His landmark 1993 analysis of the Mann Gulch wildfire disaster became one of the most cited papers in management scholarship, illuminating how sensemaking collapses under extreme conditions. With Kathleen Sutcliffe, he co-authored *Managing the Unexpected* (2001, revised 2007 and 2015), which established the framework of high-reliability organizations and the concept of organizational mindfulness. His work on loose coupling, enactment, and the retrospective nature of understanding has shaped fields ranging from crisis management to healthcare safety to strategic planning.
On March 27, 1977, two Boeing 747 aircraft collided on the runway at Los Rodeos Airport in Tenerife, killing 583 people. It remains the deadliest accident in aviation history. The KLM captain, one of the most experienced pilots in the airline's fleet, began his takeoff roll without clearance. He did not lack information. The co-pilot had expressed hesitation. The tower's communications, though garbled by simultaneous transmissions, contained cues that the runway was not clear. Fog prevented visual confirmation. Every piece of evidence necessary to avert the disaster was available inside the cockpit. None of it penetrated the interpretation the captain had already committed to: the runway is clear, we are cleared for takeoff, the sequence is proceeding as expected.
The Tenerife disaster is not, in Karl Weick's framework, a story about a bad decision. It is a story about sensemaking that committed too early to a single interpretation and then filtered, reinterpreted, or simply failed to register incoming information that contradicted it. The captain did not decide to ignore the evidence. He had already constructed a plausible account of his situation — an account consistent with his identity as a senior commander, with the organizational pressure to depart on schedule, with the sequence of events as he had experienced them — and the account was so coherent, so internally consistent, that contradictory cues could not break through. The co-pilot's hesitation was interpreted as deference, not warning. The tower's ambiguous phrasing was interpreted as confirmation, not caution. The fog was interpreted as an operational constraint to be managed, not as a signal that the situation was more uncertain than the captain's interpretation allowed.
This is what Weick means by sensemaking, and it is the foundational concept without which nothing else in this book can be understood. Organizations do not make decisions in the way that management textbooks describe. They do not survey the available options, evaluate them against a consistent set of criteria, weigh the probabilities, and select the optimal path. That model — the rational decision-making model — is a fiction so pervasive that it has become invisible. It shapes how organizations describe their processes, how business schools teach strategy, how consultants structure their recommendations, and how leaders explain their choices after the fact. It is also, as decades of organizational research have demonstrated, almost entirely disconnected from what actually happens when groups of people face ambiguous situations and must act.
What actually happens is sensemaking. People encounter a situation that is confusing, ambiguous, or novel. They notice certain cues and ignore others. They construct an interpretation — a story, a frame, an account — that makes the situation intelligible. The interpretation enables action. The action produces consequences. The consequences generate new cues. And the cycle continues, endlessly, imperfectly, never arriving at a final, correct interpretation because the situation itself keeps changing, partly in response to the actions that the interpretation produced.
Weick identified seven properties of this process, and each one overturns a commonsense assumption about how organizations think. Sensemaking is grounded in identity construction: who I understand myself to be determines what I notice, what I interpret, and what actions I consider available. The KLM captain's identity as a senior commander shaped his sensemaking at Tenerife — the interpretation that he was in control, that the sequence was proceeding normally, was not just a cognitive assessment but an expression of who he was. Sensemaking is retrospective: people know what they think after they see what they have done. The plan does not precede the action. The action precedes the understanding. Sensemaking is enactive: people do not merely interpret their environments; they produce the environments they then interpret through their own actions. Sensemaking is social: it is accomplished not in individual minds but in conversations, in shared narratives, in the ongoing process of collective interpretation. Sensemaking is ongoing: it never starts and never stops; it is the continuous background process of organizational life. Sensemaking is focused on and by extracted cues: people notice a small number of signals from the vast field of available information and use those signals to construct their interpretation. And sensemaking is driven by plausibility rather than accuracy: a plausible interpretation that enables coordinated action is organizationally more valuable than an accurate interpretation that produces paralysis.
That final property — plausibility over accuracy — is the one that matters most for understanding what artificial intelligence does to organizational cognition. It is also the one that is most consistently misunderstood. Plausibility does not mean that organizations are indifferent to truth. It means that under conditions of ambiguity, when the truth cannot be determined with certainty, organizations default to the interpretation that is good enough to act on. The map need not be perfectly correct. It need only be sufficiently coherent to get people moving in roughly the same direction — because the movement itself generates the information that improves the map.
This is the deep structure of organizational cognition, and it operates at every level: the team trying to figure out what a customer wants, the division trying to make sense of a competitive shift, the board trying to interpret a technological disruption that no one fully understands. At every level, the process is the same. Interpret. Act. Observe. Reinterpret. Act again. The interpretations are never complete. The map is never final. But the movement continues, because organizations that stop moving — that wait for certainty before acting — are organizations that die.
Now consider what happens when artificial intelligence enters this process.
Segal describes, in *The Orange Pill*, a moment in December 2025 when a Google principal engineer sat down with Claude Code and described, in three paragraphs of plain English, a problem her team had spent a year trying to solve. One hour later, Claude had produced a working prototype. "I am not joking," she wrote publicly, "and this isn't funny." The moment captures something that the rational decision-making model cannot explain and that sensemaking theory can: the engineer's reaction was not a calculation of efficiency gains. It was a sensemaking crisis. Her existing interpretation of how complex technical problems get solved — iteratively, over months, through the accumulated expertise of a coordinated team — had been rendered incoherent by a single interaction. The situation had stopped making sense.
This is what Weick's framework reveals about the AI transition that purely technological or economic analyses miss. The disruption is not primarily to workflows, skill sets, or cost structures, though it is all of those things. The disruption is to sensemaking itself — to the ongoing interpretive process by which organizations understand what they are doing, why they are doing it, and what they should do next. When the tools that people use to make sense of their work change at the speed that AI tools have changed, the interpretive frameworks that organize professional identity, institutional knowledge, and collective action are thrown into crisis. Not because they were wrong. Because they were built for a world that no longer exists.
The question this book poses is not whether AI improves organizational decision-making. Decision-making, in Weick's framework, is downstream of sensemaking. The quality of the decisions depends on the quality of the interpretations that precede them. The question is whether AI improves organizational sensemaking — whether it enhances the interpretive process through which ambiguous situations become intelligible enough to act on — or whether it short-circuits that process, producing clarity that is faster but shallower, more confident but less tested, more actionable but less wise.
Segal describes the feeling of working with Claude as being "met" — not by a person, not by a consciousness, but by an intelligence that could hold his intention and return it clarified. The description is precise and revealing. What he is describing is a sensemaking partnership: the AI takes the ambiguous, half-formed interpretation in the human's mind and returns it as a structured, actionable account of the situation. The vague becomes specific. The tentative becomes confident. The ambiguous becomes clear.
And clarity, in organizational life, is the most seductive thing in the world. Clarity ends the discomfort of not knowing. Clarity enables action. Clarity aligns teams, motivates effort, and produces the satisfying sensation of forward movement. Every organizational leader craves clarity, because ambiguity — the state of not knowing which interpretation is correct, of holding multiple possibilities simultaneously, of acting without confidence that the action is right — is cognitively and emotionally exhausting.
AI provides clarity at unprecedented speed. That is its organizational gift. It is also, as the chapters that follow will argue, its organizational danger. Because clarity that arrives before the ambiguity has been fully explored is not understanding. It is premature closure — the resolution of interpretive tension before the tension has done its generative work. And premature closure, in Weick's framework, is not merely a missed opportunity. It is the mechanism through which organizations lose the capacity to see what they most need to see.
The Tenerife captain had clarity. He had a coherent interpretation of his situation that was internally consistent, identity-confirming, and actionable. The interpretation happened to be wrong, and five hundred eighty-three people died. The clarity was the problem, not the solution — because the clarity foreclosed the interpretive work that would have revealed its limitations.
This is not an argument against AI. It is an argument for understanding what AI does to the process by which organizations think. The chapters that follow apply Weick's sensemaking framework to the AI transition with the dual recognition that animates *The Orange Pill*: the tools are genuinely powerful and the speed genuinely transformative, and the organizational risks of speed without understanding are genuinely catastrophic. The same technology that enables a team to build a product in thirty days can also enable a team to commit to a flawed interpretation before anyone has had the chance to test it against the full complexity of the situation it was meant to address.
The rational model says: get the right answer faster. Sensemaking theory says: the answer is always provisional, always incomplete, always subject to revision in light of new evidence — and the speed at which the answer arrives determines whether the revision happens before or after the consequences have become irreversible.
Weick often quoted the Hungarian Nobel laureate Albert Szent-Györgyi: "Discovery consists of seeing what everybody has seen and thinking what nobody has thought." The phrase captures something essential about sensemaking — that the raw material is always available, always in plain sight, and that the interpretive act, the act of seeing the familiar as if it were strange, is the thing that produces understanding. AI sees what everybody has seen. It processes the available cues with extraordinary speed and sophistication. The question, unresolved and urgent, is whether it thinks what nobody has thought — or whether it produces, at scale and at speed, the interpretation that everybody would have thought, foreclosing the stranger, slower, harder interpretations that nobody has thought yet, the ones that only emerge when the ambiguity is allowed to persist long enough for the unexpected cue to surface, the anomalous pattern to register, the quiet voice in the room to say: something does not feel right.
That quiet voice is what organizational sensemaking, at its best, amplifies. Whether AI amplifies it or drowns it out is the question this book exists to explore.
---
In 1942, the physicists of the Manhattan Project faced a problem that no one had solved before and that many believed might not be solvable. The nuclear chain reaction was theoretical. The engineering required to produce fissile material at scale was unprecedented. The weapon design itself involved calculations so complex that the available computational tools — human computers working with mechanical calculators — could barely approximate them. The ambiguity was total. No one knew whether the thing could be built, whether it would work, or what would happen if it did.
The organizational response to this ambiguity was, by Weick's standards, remarkable. Rather than resolving the ambiguity prematurely — rather than selecting a single approach and committing to it — the project maintained competing interpretive frameworks simultaneously. The physics perspective and the engineering perspective and the military perspective each offered different accounts of what the problem was, what the constraints were, and what solutions were viable. These accounts contradicted each other regularly. The contradictions were not resolved by executive fiat. They were maintained, deliberately, because the ambiguity of the situation was so vast that no single interpretation could encompass it.
The result was not paralysis. It was exploratory richness. Multiple approaches to uranium enrichment were pursued in parallel — gaseous diffusion, electromagnetic separation, liquid thermal diffusion — not because the project could afford redundancy but because no one could determine in advance which approach would succeed. The ambiguity forced breadth. The breadth produced options. The options, when tested against reality, generated the information that eventually narrowed the field — but the narrowing came after exploration, not before it.
This is the principle that Weick articulated across decades of organizational research: ambiguity is not a problem to be eliminated. It is a resource to be managed. When a situation is ambiguous — when multiple interpretations are plausible, when the evidence is equivocal, when experts disagree — the organization that maintains the ambiguity, that resists the temptation to resolve it prematurely, explores more broadly, considers more alternatives, and ultimately develops a richer understanding of its environment than the organization that settles on the first coherent interpretation and moves forward.
The word Weick used was equivocality — the property of a situation that admits multiple, equally plausible meanings. Equivocality is not the same as uncertainty. Uncertainty means you lack information: you do not know the answer, but you know what question to ask, and you know what kind of information would resolve the question. Equivocality means you lack interpretive frameworks: you do not know what the question is, you do not know what kind of answer would help, and the available information supports multiple, incompatible readings. Uncertainty calls for more data. Equivocality calls for more discussion — richer cycles of interpretation, debate, and collective sensemaking that gradually reduce the range of plausible meanings until the situation becomes clear enough to act on.
The distinction matters enormously for understanding what AI does to organizations. AI is spectacularly good at reducing uncertainty. It can process vast quantities of data, identify patterns, and produce structured answers to well-defined questions with a speed and accuracy that no human can match. But equivocality — the condition of not knowing what the question is, of facing a situation so novel that the available frameworks cannot encompass it — is not resolved by more data or faster processing. It is resolved by the slow, messy, fundamentally social process of people arguing about what the situation means.
Segal describes the imagination-to-artifact ratio — the distance between a human idea and its realization — and argues that AI has compressed this ratio to nearly zero for a significant class of work. The description is accurate and its implications are profound. But the sensemaking framework reveals a dimension that the compression metaphor obscures. The imagination-to-artifact journey is not merely a production process. It is a sensemaking process. The months that once separated an idea from its realization were not merely wasted time. They were interpretive time — time during which the idea was tested against objections, refined through debate, challenged by colleagues who saw it differently, and gradually transformed from a vague intuition into a specification that reflected not just the original vision but the collective intelligence of everyone who had argued about it.
When that journey compresses from months to hours, the production efficiency is real. The sensemaking loss may also be real. The idea that arrives as a working prototype in an afternoon has not been subjected to the interpretive pressure that would have reshaped it in transit. It has been built, not understood. Manufactured, not negotiated. The artifact exists, but the collective sensemaking that would have improved it — that would have caught the assumptions that do not hold, the user needs that were not considered, the architectural choices that optimize for today at the expense of tomorrow — has been bypassed.
Consider the development of jazz, a domain far from technology but instructive in precisely the way that organizational theory benefits from cross-domain illumination. In the early decades of jazz, the absence of fully scored arrangements forced musicians into real-time collective sensemaking. No one knew in advance exactly what would happen. The trumpet player stated a theme. The pianist responded with a harmony that was plausible but not predetermined. The drummer felt the emerging rhythm and adjusted. Each musician was simultaneously interpreting and creating, making sense of what the others were doing while contributing to a collective artifact that no individual had planned.
The ambiguity was the generative force. The absence of a fixed score meant that the musicians had to listen, interpret, and respond in real time — and the quality of the music depended on the quality of that real-time sensemaking. When arrangements became more fixed, when the ambiguity was reduced by predetermined structures, the music became more predictable and, in certain important respects, less innovative. The resolution of ambiguity produced polish at the cost of discovery.
The parallel to AI-assisted work is not exact, but it is instructive. When a team works without AI, the process of building a product involves continuous interpretation and reinterpretation. The designer produces a mockup. The engineer looks at it and says, "This cannot be built as specified, but here is what we could do instead." The product manager looks at both and says, "The user need is actually slightly different from what we assumed." Each exchange is a cycle of sensemaking — a collective negotiation of meaning that gradually transforms the original idea into something richer, more nuanced, and more grounded in reality than any individual could have produced alone.
When AI enters this process, it can accelerate each cycle dramatically. But acceleration is not the only effect. The acceleration also changes the social dynamics of the sensemaking. When the prototype arrives in hours instead of weeks, the interpretive debate that would have occurred during those weeks is compressed or eliminated. The designer does not argue with the engineer about what is buildable, because the AI has already built it. The product manager does not challenge the assumption about user need, because the assumption has already been enacted as a working artifact. The debate that would have surfaced the flaw in the assumption never occurs — not because anyone suppressed it, but because the speed of production outran the speed of interpretation.
Weick argued that organizations require what he called "requisite variety" — internal diversity of perspective, method, and interpretation sufficient to match the complexity of the environment they face. Ambiguity is the mechanism through which requisite variety is maintained. When a situation is ambiguous, different people bring different interpretations, and the collision of those interpretations produces the organizational variety that enables adaptive response. When ambiguity is resolved prematurely — when the prototype arrives before the debate has occurred — the variety collapses. The organization converges on a single interpretation, and the alternative interpretations that the ambiguity was protecting disappear from the organizational field of vision.
James March, the organizational theorist whose work paralleled and enriched Weick's, framed this as the tension between exploration and exploitation. Organizations must simultaneously explore new possibilities and exploit existing knowledge. The tension is productive: exploration without exploitation wastes resources; exploitation without exploration produces stagnation. But the two activities compete for the same organizational attention, and the short-term rewards of exploitation (predictable returns, efficient execution, measurable output) consistently overwhelm the long-term benefits of exploration (novel possibilities, unexpected discoveries, adaptive capacity).
AI, in March's framework, massively amplifies the exploitation side of the ledger. It makes execution so fast, so cheap, so immediately rewarding that the organizational incentive to explore — to sit with ambiguity, to tolerate the discomfort of not knowing, to invest time in interpretive work that may not yield immediate results — diminishes toward zero. Why debate whether the product should exist when the prototype is already running? Why explore alternative interpretations when the first interpretation has already been enacted as a working artifact?
The answer is that the first interpretation may be wrong, or incomplete, or right for now but catastrophically wrong for later — and that the debate, the ambiguity, the sustained interpretive effort that the speed of production is displacing, is the organizational mechanism through which such errors are caught, such incompleteness is remedied, and such short-term rightness is tested against long-term consequences.
Segal himself encounters this dynamic when he describes the process of writing *The Orange Pill* with Claude. He recounts a moment when Claude produced a passage linking Csikszentmihalyi's flow to Deleuze's concept of "smooth space" — a passage that was rhetorically elegant, structurally sound, and philosophically wrong. The passage survived his initial review precisely because its plausibility was so high. It was only on the following morning, when something "nagged" him enough to check, that the error was caught. He reflects: "Claude's most dangerous failure mode is exactly this: confident wrongness dressed in good prose."
In sensemaking terms, what happened is revealing. The AI produced a plausible interpretation. The author, operating under time pressure and influenced by the aesthetic quality of the output — its smoothness, its apparent coherence — accepted the interpretation without subjecting it to the extended scrutiny that would have revealed its limitations. The ambiguity of the original question (how does Csikszentmihalyi relate to Deleuze?) was resolved instantly and incorrectly. And the resolution felt like understanding, when it was actually the foreclosure of understanding.
The generative power of not knowing lies precisely here: in the space between the question and the answer, the space that ambiguity holds open and that premature resolution collapses. That space is where the unexpected connection emerges, where the assumption is challenged, where the interpretation that nobody has considered yet has the chance to form. The space is uncomfortable. It is inefficient. It produces anxiety in organizations that reward speed and confidence.
But it is also where the best organizational thinking happens — and the question for every organization navigating the AI transition is whether the speed that AI provides will be used to compress this space or to redirect it toward harder, more consequential ambiguities that the faster production cycle has now freed people to explore.
The answer, as with most things in organizational life, will depend on the quality of the sensemaking that organizations bring to the question itself.
---
Weick's most radical claim, the one that separates his framework from virtually every other theory of organizational behavior, is that action precedes understanding. Organizations do not first figure out what is happening and then decide what to do. They do something, and then figure out what it was they did.
The claim sounds paradoxical, perhaps irresponsible. How can action precede understanding? Surely one must understand the situation before acting on it? The intuition is powerful, and it is wrong — or rather, it describes an ideal that is almost never realized in practice. In practice, the situations that organizations face are too ambiguous, too fast-moving, too complex for comprehensive understanding to precede action. The information necessary to understand the situation fully is not available until the organization has acted and observed the consequences of its action. Understanding is produced by action, not prior to it.
Weick called this enactment. The word is precise and consequential. Organizations do not merely interpret their environments. They enact them — they produce the environments they then interpret through their own actions. The manager who believes her employees are untrustworthy installs surveillance, which produces resentful and disengaged behavior, which confirms her initial belief. The sales team that believes the market wants lower prices cuts prices, which trains customers to wait for discounts, which confirms the belief that the market wants lower prices. In each case, the action based on the interpretation produces the evidence that validates the interpretation — not because the interpretation was correct, but because the action created the reality that the interpretation predicted.
Enactment is not a flaw in organizational cognition. It is the mechanism through which organizations create the relatively stable, relatively predictable environments that make coordinated action possible. The world is not an objective reality waiting to be correctly perceived. It is, at least in part, an artifact of the actions that organizations take in response to their interpretations of it. This does not mean that reality is infinitely malleable or that any interpretation is as good as any other. The physical world pushes back. Markets punish misinterpretation. Bridges built on flawed engineering collapse. But within the constraints that physical reality imposes, the range of organizational environments that can be enacted is vastly larger than the rational model acknowledges — and the enactment process, once set in motion, produces its own confirmatory momentum.
AI transforms enactment by collapsing the interval between interpretation and action to nearly nothing. In the traditional organizational process, enactment was slow enough that the interpretive and action phases were separable, at least in retrospect. The team interpreted the situation over weeks of discussion. The interpretation shaped a plan. The plan was executed over months. The execution produced results. The results were interpreted. And the cycle repeated. The slowness of the cycle meant that there were natural break points where the interpretation could be challenged, revised, or abandoned before its enactment produced irreversible consequences.
Consider what Segal describes when he recounts building Napster Station in thirty days. A vision that existed only in his mind — inherently ambiguous because it was unexternalized, untranslated, and unavailable for collective evaluation — became a tangible artifact that hundreds of people interacted with on a trade show floor. The prototype was not a representation of the vision. It was an enactment of the vision — a version of reality produced by the interpretation and immediately available for others to interpret and act upon.
The enactment worked. People interacted with Station. The interactions produced data. The data confirmed that the concept was viable. But the confirmation must be understood through the lens of enactment theory: the users were not reacting to an abstract concept. They were reacting to a specific enactment of the concept — one particular version of what an AI concierge kiosk could be, with particular design choices, particular conversational capabilities, particular aesthetic qualities. The data confirmed the viability of the version that was built, which is not the same as confirming the viability of the concept in general or demonstrating that this version was the best possible version.
The alternative versions — the ones that would have emerged from a longer, more ambiguous, more contested design process — were never enacted. They do not exist. And because they do not exist, they cannot generate the evidence that would have allowed the organization to compare, evaluate, and choose among them. The enacted version is the only version with evidence, and the evidence supports it, because the evidence was produced by it.
This is the self-fulfilling property of enactment, and it is amplified dramatically by AI's speed of production. When a prototype can be built in hours, the first interpretation of what should be built acquires an enormous advantage over all subsequent interpretations — not because it is better, but because it is first. The first interpretation gets enacted. The enactment produces evidence. The evidence confirms the interpretation. And by the time anyone proposes an alternative interpretation, the organizational momentum behind the enacted version — the sunk cost, the confirmatory data, the team alignment, the stakeholder expectations that the prototype has generated — makes revision feel like regression rather than improvement.
Segal describes watching his team at the CES demonstration and observing that "the thirty days of building had been the easy part. The hard part was the thousand small decisions about what Station should be that were still to come." The observation is organizationally precise. The prototype resolved the ambiguity of whether the thing could be built. It did not resolve the deeper ambiguity of what the thing should be. But the existence of the prototype restructured the sensemaking environment: the team was no longer asking "What should we build?" in open, exploratory mode. They were asking "How should we refine what we have built?" — a question that presupposes the enacted version as the baseline and limits the range of acceptable alternatives to those that are compatible with it.
The phenomenon is not unique to AI-assisted building. It is a general property of enactment that Weick documented across domains. But AI accelerates it to a degree that changes its organizational character. When enactment takes months, the organization has time to notice the self-fulfilling cycle and interrupt it. Dissenting voices have weeks to articulate alternative interpretations. Market feedback has time to accumulate. The enacted version encounters enough friction — enough delay, enough resistance, enough unexpected consequences — that the interpretive cycle has a chance to correct the enactment before it becomes entrenched.
When enactment takes hours, the self-fulfilling cycle completes before the dissent can form. The prototype is built, demonstrated, and validated before anyone has had the chance to ask whether a fundamentally different approach might have been better. The correction that friction would have provided is bypassed — not suppressed, not overruled, but simply never given the opportunity to occur.
Weick studied the Bristol Royal Infirmary tragedy, where cardiac surgeons continued performing pediatric heart operations despite a mortality rate roughly double the national average. The enactment cycle was operating at full force: the surgeons interpreted their results through frameworks (the cases were unusually complex, the patients were unusually sick) that confirmed their competence, and the actions they took based on those interpretations (continuing to operate) produced the data that confirmed the interpretations (more operations, some successful, reinforcing the belief that the mortality rate was explicable). Weak signals — the anesthesiologist who raised concerns, the nurse who kept a private tally, the pathologist who noticed patterns — were available but did not penetrate the enacted reality.
The parallel to AI-assisted organizational sensemaking is not that AI will produce medical tragedies. The parallel is structural. When an organization enacts a version of reality through AI-assisted prototyping, the enacted version generates its own confirmatory evidence, and the weak signals that would have challenged the enactment — the alternative design that nobody prototyped, the user need that the prototype did not address, the architectural assumption that will not scale — are structurally disadvantaged in the organizational sensemaking process. They have no evidence. The enacted version has all the evidence. And in organizations, evidence wins — not because evidence is always right, but because evidence is the currency of organizational legitimacy, and the enacted version is the only interpretation that holds any.
The implications for organizational practice are concrete. When AI enables rapid enactment, organizations must build structures that protect the pre-enactment phase — the period of ambiguity during which alternative interpretations have the chance to form, articulate themselves, and generate their own evidence. This means, paradoxically, that the faster the production cycle becomes, the more deliberate the interpretive cycle must be. The speed of building must be counterbalanced by the discipline of asking — before the build begins, while the ambiguity is still intact — whether this is the right thing to build, whether alternative versions deserve to be enacted and compared, whether the interpretation that feels most actionable is also the interpretation that has been most thoroughly tested.
Segal calls for dams — structures that redirect the flow of intelligence toward life. In sensemaking terms, the most important dams are the ones that protect the space between interpretation and enactment: the organizational practices, the cultural norms, the leadership behaviors that insist on interpretive richness before the first line of code is written, the first prototype is built, the first enactment begins to generate its own confirmatory momentum. These structures will feel like friction. In an environment where production is nearly free, the insistence on pre-production deliberation will feel like waste. But the friction is not waste. It is the organizational mechanism through which the self-fulfilling cycle of enactment is interrupted, and the quality of the initial interpretation is tested, before its enactment makes revision feel impossible.
The organizations that navigate the AI transition most successfully will not be the fastest to enact. They will be the ones that learn to separate the speed of production from the speed of interpretation — to build quickly while thinking slowly, to prototype rapidly while challenging the assumptions that the prototype embodies, to use AI's extraordinary production capability in the service of richer sensemaking rather than as a substitute for it.
---
Weick was fond of a line he borrowed, with characteristic intellectual playfulness, from a little girl quoted in Graham Wallas's *The Art of Thought*: "How can I know what I think until I see what I say?" The line captures the essence of retrospective sensemaking — the principle that understanding follows action, that people discover their own interpretations by observing their own behavior, that the meaning of an event is constructed after the event has occurred, not during or before it.
Retrospective sensemaking is not a bias to be corrected. It is the fundamental temporal structure of human cognition. People cannot make sense of events in real time because events in real time are too complex, too fast, too saturated with information to be interpreted as they unfold. Interpretation requires distance — the slight temporal remove that allows the mind to select from the flood of available cues, organize them into a coherent narrative, and construct an account of what happened and why. The account is always constructed after the fact. It is always selective — certain cues are amplified, others are suppressed, the narrative is shaped to produce coherence rather than completeness. And it is always influenced by the outcome: what happened determines what the events leading up to it are understood to mean.
This principle has a specific and consequential application to the AI transition. The narratives that are being constructed about AI — the triumphalist narrative, the elegist narrative, the silent middle's unarticulated discomfort — are all acts of retrospective sensemaking. They are attempts to make meaning from events that have already occurred, shaped by the outcomes that have already materialized, organized by interpretive frameworks that were available to the narrators at the time of narration.
Consider the triumphalist narrative. A developer builds a revenue-generating product in a weekend using Claude Code. The retrospective account emphasizes the speed, the capability, the democratization of building power. The account is coherent, plausible, and supported by evidence — the product exists, it works, it generates revenue. But the account is shaped by the outcome. The developer who attempted the same weekend sprint and failed — whose prototype did not work, whose product did not find users, whose experience was not one of exhilaration but of frustration and confusion — does not post the retrospective account. The failure does not generate a narrative because the outcome does not support one. The visible evidence consists entirely of successes, not because failures do not occur, but because the retrospective sensemaking process selectively amplifies the outcomes that produce coherent, shareable stories.
This is survivorship bias, and it is well understood. But Weick's framework reveals something deeper than survivorship bias operating in the AI discourse. The retrospective accounts do not merely select among existing events. They reshape the meaning of those events in light of subsequent outcomes. Segal's account of the Trivandrum training — where engineers achieved a twenty-fold productivity multiplier — is a retrospective narrative shaped by the outcome of the training. The account emphasizes the capability gains, the expanded scope, the senior engineer's realization that judgment, not implementation, was his true value. These observations are real. But they are retrospectively constructed from the vantage point of the training's success. The moments of confusion, false starts, and tools that did not work as expected — the moments that were equivocal, that could have been interpreted as signs that the approach was flawed rather than signs that the approach was revolutionary — are absorbed into a narrative whose ending is already known.
This is not a criticism of Segal's account. It is a description of what retrospective sensemaking does, always and inevitably. The question is not whether the AI discourse is shaped by retrospective sensemaking — of course it is, because all discourse is. The question is what the retrospective construction is omitting, and what the omissions cost.
In Weick's framework, the most dangerous omission in retrospective sensemaking is what he called the "missing cues" — the signals that were available at the time of the event but were not extracted, not noticed, not incorporated into the narrative because they did not fit the story that the outcome was shaping. In the retrospective accounts of the AI transition, the missing cues are the unreported failures, the products that were built in a weekend and abandoned in a month, the prototypes that worked technically but missed the user need they were meant to address, the organizations that adopted AI tools and found, months later, that the speed of production had outrun their capacity to understand what they had produced.
These cues exist. They are available. They simply have not been organized into narratives because the current outcome — AI as transformative, AI as revolutionary, AI as the most significant technological transition since writing — does not support them. The outcome shapes the narrative. The narrative shapes which cues are extracted. And the cues that are extracted confirm the narrative. The cycle is self-reinforcing, and it is operating at the scale of an entire cultural discourse.
Segal's *The Orange Pill* is itself an act of retrospective sensemaking — one of unusual honesty and self-awareness, but an act of retrospective sensemaking nonetheless. The book constructs a narrative of the AI transition through particular frameworks: intelligence as a river flowing for 13.8 billion years, humans as beavers building dams in the current, AI as an amplifier that carries whatever signal it is given. These frameworks are plausible. They enable action. They give readers a way to interpret an ambiguous situation and move forward with some confidence that their actions are oriented toward something meaningful.
But plausibility is not accuracy, and the frameworks that enable action today may foreclose the interpretations that would have been more useful tomorrow. The river metaphor naturalizes AI — it makes the technology seem like an inevitable expression of a cosmic process, which reduces the urgency of political and institutional resistance. The amplifier metaphor individualizes responsibility — if the amplifier carries whatever signal it is given, then the quality of the output is the user's problem, not the system's. These are plausible interpretations with identifiable costs, and the costs are precisely the kind that retrospective sensemaking is poorly equipped to detect, because the costs will only become visible in light of outcomes that have not yet materialized.
The Mann Gulch fire, which Weick analyzed in his most famous and most-cited paper, illustrates the temporal dynamics of retrospective sensemaking under conditions of rapid change. On August 5, 1949, a team of smokejumpers parachuted into Mann Gulch to fight what appeared to be a routine wildfire. Within minutes, the fire reversed direction and raced uphill toward them. The crew foreman, Wagner Dodge, improvised an escape fire — he burned a clearing in the grass, lay down in the ashes, and survived. His crew did not. Thirteen men died, most of them running uphill with their tools still on their backs.
Weick's analysis focused not on what happened but on how the survivors and the subsequent investigation made sense of what happened. The retrospective accounts constructed a narrative of heroism (Dodge's improvisation), tragedy (the crew's failure to follow Dodge's instruction), and organizational failure (the lack of training, the inadequacy of communication, the absence of contingency plans). Each account was plausible. Each was shaped by the outcome. And each omitted cues that did not fit the narrative it was constructing.
What Weick noticed — and what gives his analysis its enduring power — is that the retrospective accounts all shared a common structure: they treated the disaster as a sequence of identifiable events with identifiable causes, as though the fire's reversal, the crew's panic, and Dodge's improvisation were discrete episodes in a story with a beginning, middle, and end. But the experience of the men in the gulch was not a story. It was chaos — a situation so far outside their interpretive frameworks that sensemaking itself collapsed. The men could not construct a coherent interpretation of what was happening while it was happening, because what was happening had no precedent in their experience, no framework in their training, no narrative template that could accommodate a fire that reversed direction and a foreman who set another fire and told them to lie down in it.
The retrospective accounts imposed narrative order on an experience that, in real time, had none. And the narrative order — heroism, tragedy, failure — concealed the most important feature of the event: the collapse of sensemaking itself. The moment when the situation exceeded the available interpretive frameworks and the men were left with no coherent account of what was happening or what to do about it.
The AI transition may be producing an analogous concealment. The retrospective narratives — triumphalist, elegist, cautiously optimistic — impose coherent interpretive frameworks on a situation that may be, for many of the people living through it, an experience of sensemaking collapse. The senior engineer who cannot articulate what he has lost. The teacher who does not know what to tell her students. The parent who lies awake wondering what her child's education is for. These people are not experiencing a story with a clear narrative arc. They are experiencing equivocality — a situation that admits multiple, incompatible interpretations, none of which is fully coherent, none of which enables confident action.
The retrospective narratives do not capture this equivocality. They resolve it — prematurely, in Weick's terms — into coherent accounts that serve the needs of the narrators and the platforms that amplify them. The triumphalist account resolves the equivocality into progress. The elegist account resolves it into loss. Both resolutions are plausible. Neither captures the experience of the people in the silent middle, whose sensemaking remains unresolved because the situation itself remains unresolved.
What retrospective sensemaking reveals about the AI discourse, then, is not that the narratives are wrong. It is that they are incomplete in ways that are structurally invisible. The cues that do not fit the narratives are not missing. They are present — in the Berkeley researchers' data about work intensification, in the engineer's quiet grief for a form of knowledge that friction built and speed destroys, in the twelve-year-old's question that no framework can fully answer: what am I for? These cues are available. They are simply not being organized into the dominant narratives, because the dominant narratives have already committed to interpretations that cannot accommodate them.
Weick argued that the quality of sensemaking depends on the quality of the cues that are extracted — and that the most important cues are often the ones that are most easily overlooked, because they are small, ambiguous, and inconsistent with the prevailing interpretation. In organizational disasters, the catastrophe is always preceded by weak signals that were available but not extracted. The nurse who noticed the mortality rate. The engineer who questioned the O-ring performance. The co-pilot who hesitated before the captain committed to the takeoff roll.
The weak signals of the AI transition are present. They are being generated in real time, in classrooms and offices and households and the quiet spaces where people sit with their discomfort and do not post about it. Whether those signals are extracted and incorporated into the organizational narratives that shape collective action — or whether they are lost in the retrospective accounts that have already resolved the ambiguity into the stories the discourse most wants to tell — is a question whose answer will determine whether the organizations navigating this transition are building on solid interpretive ground or on the polished, plausible, and potentially catastrophic surface of premature understanding.
During military maneuvers in the Alps, a small detachment of Hungarian soldiers became lost in heavy snow. The terrain was featureless, the cold severe enough to kill. For two days the men remained in their tents, convinced they would die. On the third day, one of the soldiers found a map in his pocket. The discovery galvanized the group. They studied the map, plotted a course, and marched with renewed confidence through the storm. They reached their base camp alive.
The map, it turned out later, was a map of the Pyrenees — a different mountain range in a different country, hundreds of miles away.
Weick told this story repeatedly across his career (he found it in a poem by Miroslav Holub, who credited it to Albert Szent-Györgyi), and he told it because it captures the deepest and most counterintuitive property of sensemaking: plausibility matters more than accuracy. The map was wrong. It did not correspond to the terrain the soldiers were crossing. But it was plausible enough to accomplish something that no amount of accurate information could have accomplished from inside the tents: it got them moving. And the movement itself — the act of marching, observing landmarks, adjusting course based on what they encountered — generated the local, real-time information that actually guided them to safety. The map did not save them by being correct. It saved them by being sufficient to initiate action, and the action produced the understanding that the map could not.
The principle extends far beyond lost soldiers. In every organization, at every level, people act on interpretations that are plausible rather than accurate — not because they are lazy or careless, but because accuracy is unavailable at the moment when action is required. The board approving a strategy does not know whether the strategy will work. The product team launching a feature does not know whether users will adopt it. The surgeon beginning an operation does not know what she will find when she opens the patient. In each case, the interpretation that enables action is the one that is good enough — coherent enough, consistent enough with the available evidence, actionable enough to permit forward movement. The interpretation will be revised later, when the action produces new information. But the initial interpretation need not be right. It need only be plausible.
This is the property of sensemaking that artificial intelligence exploits most powerfully and most dangerously.
AI is, by any reasonable assessment, the most sophisticated plausibility engine ever constructed. Large language models do not produce truth. They produce text that is statistically consistent with the patterns in their training data — text that reads as though it were produced by an entity that understands the subject matter, that has weighed the evidence, that has arrived at a considered judgment. The outputs are coherent. They are structured. They carry the aesthetic markers of insight: clean prose, logical sequencing, confident assertion, appropriate qualification. They are, in a word, plausible.
And plausibility is precisely what organizational sensemaking seeks. When a team faces an ambiguous situation — a market shift they do not understand, a technical problem they cannot quite frame, a strategic question that admits multiple answers — the thing they need most is a plausible interpretation that enables coordinated action. AI provides this with extraordinary speed and consistency. Ask Claude to analyze a competitive landscape, and the response arrives in seconds: structured, comprehensive, actionable. The analysis may not be accurate in every particular. But it is plausible enough to orient a conversation, align a team, and initiate a course of action.
The danger is not that plausibility is worthless. Plausibility is essential. Without it, organizations cannot act at all. The danger is that AI-generated plausibility is so polished, so consistently well-structured, so aesthetically compelling, that it becomes difficult to distinguish from accuracy — and the organizational mechanisms that would normally test plausibility against accuracy are overwhelmed by the sheer volume and confidence of plausible output.
Segal describes this phenomenon with precision when he recounts the Deleuze episode — the passage in which Claude produced a connection between Csikszentmihalyi's flow and Deleuze's "smooth space" that was rhetorically elegant, structurally sound, and philosophically wrong. The passage survived initial review because it satisfied every criterion that sensemaking applies to plausible interpretations: it was coherent, it was actionable (it advanced the argument), it was consistent with the narrative being constructed, and it carried the aesthetic markers of genuine insight. Only a subsequent, more effortful engagement — what Segal describes as something that "nagged" him enough to check — revealed the inaccuracy beneath the plausible surface.
The organizational implications are severe. In traditional sensemaking, plausibility is tested through social processes — debate, challenge, the friction of encountering people who interpret the situation differently. When an executive proposes a strategy, the board pushes back. When an engineer proposes a design, the architect questions the assumptions. When a consultant proposes a framework, the client asks whether it fits their specific circumstances. Each of these interactions is a plausibility test — a moment when the interpretation is subjected to the interpretive pressure of someone who sees the situation from a different angle and whose disagreement forces the original interpreter to strengthen, revise, or abandon the account.
AI short-circuits this testing process in two ways. First, it generates plausible output so quickly that the social testing cannot keep pace. By the time the team convenes to debate the strategy, the AI has already produced a detailed implementation plan, complete with timelines, resource estimates, and risk assessments. The plan is plausible. It is actionable. And the organizational momentum it creates — the sense that progress is being made, that the path forward is clear — makes it psychologically difficult to pause and ask whether the underlying strategic interpretation is sound. The plan has overtaken the deliberation.
Second, and more subtly, AI generates plausible output that is socially frictionless. Human interpreters bring disagreement, competing priorities, personal stakes, and institutional memory that may contradict the proposed interpretation. Claude brings none of these. Its output is agreeable by design — optimized for helpfulness, structured for clarity, free of the interpersonal friction that makes organizational debate uncomfortable but productive. Segal notes this directly: "Claude is more agreeable at this stage than any human collaborator I have worked with, which is itself a problem worth examining." The agreeableness is not neutral. It is a systematic reduction of the interpretive pressure that plausibility testing requires.
The Hungarian soldiers survived not because the Pyrenees map was accurate but because the act of marching generated real information about real terrain. The map got them moving. The movement produced the data that the map could not. But notice what the story assumes: the soldiers were marching through physical terrain that pushed back. The snow resisted. The slopes demanded adjustment. The landmarks that did not match the map forced reinterpretation. The friction of reality against the plausible interpretation was the mechanism through which the interpretation was corrected in real time.
Organizational sensemaking works the same way. Plausible interpretations are tested by the friction of implementation — the customer who does not behave as predicted, the technology that does not perform as specified, the team member who raises the objection that nobody else was willing to articulate. Each point of friction is a cue that the interpretation may need revision. And each cue is an opportunity for the organization to improve its understanding of the situation before the consequences of the flawed interpretation become irreversible.
When AI smooths the path from interpretation to implementation — when the prototype arrives before the objection is raised, when the plan is executed before the assumption is tested, when the output is so polished that the seam where the interpretation breaks is invisible — the friction that would have tested the plausibility against accuracy is eliminated. The soldiers are marching with a map of the Pyrenees through the Alps, but the terrain has been smoothed to match the map. They feel confident. They are making progress. They do not notice that the landmarks do not match, because the landmarks have been replaced by the prototype's enacted reality.
Byung-Chul Han's diagnosis of the "aesthetics of the smooth," which Segal engages at length in *The Orange Pill*, acquires a specific organizational meaning through Weick's framework. Smoothness is not merely an aesthetic preference or a cultural pathology. It is a sensemaking failure — the elimination of the friction that distinguishes plausible interpretations from accurate ones. In a smooth organizational environment, every interpretation looks equally good, because the rough edges that would have revealed their limitations have been polished away. The brief that Claude drafted is smooth. The prototype that Claude built is smooth. The strategic analysis that Claude produced is smooth. And the smoothness is precisely what makes them dangerous, because smoothness conceals the seams — the points where the interpretation fails to match reality, where the assumption does not hold, where the plausible account diverges from the accurate one.
The organizational response to this danger is not to reject plausibility — that would be to reject sensemaking itself, which would paralyze the organization. The response is to build structures that reintroduce friction into the testing process. Not the friction of slow production — AI has eliminated that, and the elimination is genuinely valuable. The friction of interpretive challenge. The organizational practice of subjecting every AI-generated output to the question that the output's plausibility makes it easy to skip: Is this true, or does it merely sound true? Is this the right interpretation, or merely the first plausible one? Does this match reality, or does it match the pattern that the model was trained to produce?
These questions require human judgment — the kind of judgment that is built through years of domain experience, through the accumulated friction of having been wrong enough times to develop the instinct for when something sounds right but feels wrong. The nagging feeling that led Segal to check the Deleuze reference. The hesitation of the co-pilot at Tenerife. The nurse at Bristol who kept the private tally. Each of these is an instance of a human sensemaker detecting a discrepancy between plausibility and accuracy that no automated system could detect, because the detection requires the kind of embodied, experiential knowledge that is built through friction and atrophied by smoothness.
The Pyrenees map worked because the terrain pushed back. The question for organizations operating with AI is whether they will maintain enough friction — enough interpretive resistance, enough institutional challenge, enough willingness to question the plausible — that the terrain can still push back against the map. Or whether the smoothness of AI-generated output will create an organizational environment in which the map and the terrain are indistinguishable, and the soldiers march with perfect confidence toward a destination that exists only in the interpretation they have mistaken for the world.
---
In 1976, Weick published a paper with a title that sounded like a contradiction: "Educational Organizations as Loosely Coupled Systems." The paper argued that schools do not function as the tightly integrated bureaucracies that organizational charts depict. The principal does not directly control what happens in each classroom. The curriculum documents do not fully determine what teachers teach. The district policies do not uniformly shape the practices at each school site. Instead, the elements of the educational system — classrooms, administrators, departments, policies, practices — are loosely coupled: responsive to each other but retaining their own identity, their own logic, their own capacity for independent action.
The insight was counterintuitive and far-reaching. Loose coupling was not, as most management theories assumed, a failure of coordination. It was a source of organizational strength. When elements are loosely coupled, a failure in one element does not propagate to the entire system. A bad principal does not destroy every classroom. An ineffective policy does not undermine every practice. The organization absorbs shocks locally, without systemic collapse. Diversity of approach is maintained, because each element has enough autonomy to interpret and respond to its local situation without waiting for centralized direction. And experimentation flourishes, because loosely coupled elements can try new things without risking the whole organization.
Tight coupling is the opposite: elements connected so directly, so responsively, so immediately that a change in one element produces an instantaneous change in every other. Tight coupling is efficient. It eliminates duplication. It ensures consistency. It produces the satisfying sensation of an organization operating as a single, coordinated machine.
It is also catastrophically fragile.
Charles Perrow, whose work on system accidents paralleled and deepened Weick's, demonstrated this fragility across dozens of case studies. The Three Mile Island nuclear accident occurred not because any single component failed catastrophically but because multiple small failures, in a tightly coupled system, cascaded faster than the operators could interpret them. The Challenger shuttle disaster occurred not because the O-ring failure was unforeseeable — engineers at Morton Thiokol had warned about it repeatedly — but because the organizational system was coupled tightly enough that the schedule pressure, the communication failures, and the normalization of deviance propagated through the entire decision chain without encountering a break point that might have interrupted the cascade.
In loosely coupled systems, cascades are interrupted. The failure stays local. Someone in a different department, operating with different assumptions and different priorities, notices the anomaly that the tightly coupled chain has normalized. The loose coupling provides what Weick called "slack" — organizational space where alternative interpretations can survive, where dissent is not immediately overridden by systemic momentum, where the weak signal has time to register before the system commits irreversibly to the course that the weak signal is warning against.
AI is tightening organizational coupling with a speed and thoroughness that no previous technology has achieved.
The mechanism is straightforward. When every member of an organization uses the same AI tool, the tool becomes the medium through which organizational cognition flows. The backend engineer and the frontend designer and the product manager and the marketing team are all working through the same system, using the same patterns of interaction, receiving outputs shaped by the same training data and the same architectural assumptions. The information that flows between them passes through a single channel — and that channel has its own biases, its own blind spots, its own patterns of interpretation that are invisible precisely because they are universal.
Segal describes this tightening in *The Orange Pill* when he recounts how organizational boundaries dissolved at Napster. Engineers who had spent years in narrow technical lanes started reaching across domains: backend developers building interfaces, designers writing features, the actual flow of contribution transforming beneath an org chart that remained formally unchanged. The description captures the efficiency of the tightening — the elimination of handoffs, the acceleration of production, the broadening of individual capability. What the description also captures, though Segal frames it as liberation rather than risk, is the elimination of the loose coupling that those organizational boundaries provided.
When the backend engineer could not build the frontend herself, the handoff between the two roles was a moment of interpretive friction. The frontend designer, receiving the backend specification, interpreted it through a different set of assumptions, a different understanding of the user, a different aesthetic sensibility. The friction of the handoff was not merely a production cost. It was a sensemaking opportunity — a moment when two different interpretive frameworks collided and, in the collision, produced something that neither framework could have produced alone. The designer's objection ("this will not work for the user") and the engineer's response ("this is what the system can support") formed a negotiation — a cycle of sensemaking that tested each interpretation against the other and arrived at a synthesis that was richer than either starting point.
When the engineer can build the frontend herself, using Claude to bridge the gap, the handoff disappears. The friction disappears. The sensemaking opportunity disappears. The engineer's interpretation of the user need is not tested against the designer's interpretation, because the engineer no longer needs the designer. The coupling tightens. The production accelerates. And the diversity of interpretation that the loose coupling maintained — the organizational variety that Ashby's Law says is necessary to match environmental complexity — diminishes.
The risk is not theoretical. Weick and Kathleen Sutcliffe documented it empirically across high-reliability organizations — organizations that operate complex, dangerous systems with remarkably few catastrophic failures. What they found was that the organizations with the best safety records were not the most efficient. They were the most loosely coupled in their interpretive processes: they maintained multiple, independent channels for detecting anomalies, they tolerated disagreement among experts, they deferred to the person with the most relevant knowledge regardless of rank, and they resisted the organizational pressure to streamline interpretation into a single, efficient channel.
The nuclear aircraft carrier is a useful example because it combines extreme operational tempo with extreme safety requirements. The flight deck of a carrier is one of the most dangerous workplaces on earth: jets launching and landing in rapid succession, ordnance being handled, fuel lines running across the deck, all in a space roughly the size of a parking lot. The organizational structure is tightly coupled in execution — every action must be precisely coordinated with every other. But it is loosely coupled in interpretation — multiple independent observers monitor the same operations, each empowered to call a halt if they detect something wrong, regardless of whether anyone else agrees.
This dual structure — tight coupling in execution, loose coupling in interpretation — is the organizational architecture of reliability. And it is precisely the structure that AI threatens to collapse. AI tightens the coupling in both dimensions simultaneously. It coordinates execution (by enabling rapid, consistent production across the organization) and it homogenizes interpretation (by channeling all organizational sensemaking through the same tool, with the same patterns, the same biases, the same blind spots).
The result is an organization that is faster, more efficient, and more consistent than any previous organizational form — and also more fragile, more susceptible to cascading failure, more vulnerable to the kind of system-wide error that loose coupling would have contained.
Segal calls for "AI Practice" — structured pauses, sequenced rather than parallel work, protected time for human-only interaction. In Weick's framework, these practices are not merely wellness initiatives or cultural amenities. They are mechanisms for reintroducing loose coupling into organizations that AI has tightened. The structured pause is a decoupling moment — a break in the tight linkage between AI-mediated production cycles that allows independent interpretation to occur. The sequenced workflow is a decoupling structure — a deliberate slowing of the interpretive process that gives alternative frameworks time to form before the enacted interpretation acquires irreversible momentum. The protected human interaction is a decoupling space — an environment where the homogenizing effect of the shared tool is temporarily suspended and the diversity of human interpretation can reassert itself.
These structures will be resisted. They will feel like inefficiency. In organizations that have experienced the intoxication of AI-enabled speed, the suggestion that the process should include deliberate pauses, deliberate friction, deliberate opportunities for disagreement will feel like a step backward. The arithmetic of efficiency — the twenty-fold productivity multiplier, the collapse of the imagination-to-artifact ratio — militates against any practice that slows the production cycle.
But the arithmetic of efficiency does not account for the arithmetic of resilience. The tightly coupled organization is efficient until it fails, and when it fails, it fails catastrophically — because the same tight coupling that accelerated production also accelerates the cascade. The Three Mile Island operators could not keep pace with the cascade because the system's tight coupling meant that each failure produced the next one faster than human interpretation could process. The organizations that adopt AI without maintaining interpretive loose coupling may find themselves in an analogous position: extraordinarily efficient until the first systemic error, and then unable to contain the error because the organizational structures that would have interrupted the cascade — the independent observers, the dissenting voices, the alternative interpretations — have been optimized away in the name of speed.
Loose coupling is not a luxury. It is the organizational price of survival in a complex environment. And the organizations that maintain it, that build the dams between AI-enabled production and AI-homogenized interpretation, that insist on the friction of diverse sensemaking even when the tool makes friction feel unnecessary — those organizations will not be the fastest. They will be the ones still standing when the faster organizations discover what tight coupling costs.
---
On a spring morning in 1994, a surgeon at the Bristol Royal Infirmary began an arterial switch operation on a thirteen-month-old child. The procedure, which reroutes the great arteries in infants born with transposition, is among the most technically demanding in pediatric cardiac surgery. The child died on the operating table. In the investigation that followed — an investigation that would eventually become one of the most consequential inquiries in the history of British healthcare — it emerged that the surgical team's mortality rate for complex pediatric cardiac operations was roughly double the national average. The disparity had persisted for years. The data existed. Individual clinicians had noticed. An anesthesiologist had raised concerns internally. A pathologist had identified patterns in the post-mortem examinations.
None of these signals penetrated the organizational sensemaking. The mortality rate was explained away through frameworks that preserved the prevailing interpretation: the cases were unusually complex, the patients were unusually sick, the referral patterns produced a population with higher baseline risk. Each explanation was plausible. Each was consistent with some subset of the available evidence. And each foreclosed the interpretation that the evidence, taken as a whole, most strongly supported: the surgical program was performing below an acceptable standard, and children were dying as a result.
The Bristol Royal Infirmary became, for Weick and his collaborator Kathleen Sutcliffe, a paradigmatic case of organizational mindlessness — the failure to attend to the weak signals that precede catastrophic failure. The concept of organizational mindfulness, which Weick and Sutcliffe developed through their research on high-reliability organizations, was in many respects a theory built from the study of its absence: the understanding of what mindfulness requires derived from the meticulous examination of what happens when it fails.
Mindfulness, in Weick and Sutcliffe's usage, is not the contemplative practice associated with meditation and stress reduction. It is an organizational property — a collective capacity for sustained attention to weak signals, anomalies, and departures from expectation. They identified five hallmarks. Preoccupation with failure: the constant expectation that something could go wrong, the organizational habit of treating near-misses not as evidence of resilience but as evidence of vulnerability. Reluctance to simplify: the resistance to easy categories, simple explanations, and comfortable narratives that reduce complex situations to manageable but potentially misleading accounts. Sensitivity to operations: the sustained attention to frontline work, to the details of how things are actually being done as opposed to how policies say they should be done. Commitment to resilience: the capacity to detect and recover from unexpected events rather than merely preventing expected ones. And deference to expertise: the willingness to let the person with the most relevant knowledge make the call, regardless of their position in the hierarchy.
These five hallmarks describe an organizational posture that is, in essential respects, the opposite of efficiency. Preoccupation with failure means allocating attention to things that have not gone wrong. Reluctance to simplify means tolerating the cognitive load of complex, ambiguous interpretations when a simpler account is available. Sensitivity to operations means maintaining engagement with routine processes that could, in principle, be delegated or automated. Commitment to resilience means investing in capabilities that may never be needed. Deference to expertise means accepting that the person closest to the work may override the person closest to the strategy.
Each of these hallmarks imposes a cost. And each is under direct pressure from AI adoption.
Consider preoccupation with failure. The organizations most deeply engaged with AI are the ones experiencing the most dramatic success — the productivity multipliers, the expanded capabilities, the compression of development cycles that Segal documents throughout *The Orange Pill*. Success is the enemy of preoccupation with failure. When the prototype works, when the code compiles, when the product ships in thirty days instead of six months, the organizational mood shifts from vigilance to confidence. The question changes from "What could go wrong?" to "What else can we build?" The weak signal — the architectural assumption that will not scale, the user need that the prototype did not address, the dependency that will break under load — is drowned in the noise of accomplishment.
Segal himself captures this dynamic without quite naming it as a mindfulness failure. He describes the Trivandrum training and the CES demonstration with the energy of a builder who has just witnessed something extraordinary. The energy is warranted. But energy directed entirely toward what went right is energy unavailable for detecting what went wrong — or what went right today but will go wrong tomorrow, under different conditions, at different scale, in the different environment that the prototype's success has now committed the organization to operating within.
Consider reluctance to simplify. AI produces simplifications with extraordinary fluency. Ask Claude to analyze a complex organizational situation, and the response will be structured, categorized, and actionable. The categories will be clean. The analysis will be coherent. The recommendations will follow logically from the premises. And the simplification will be invisible, because it is embedded in the structure of the output rather than declared as a limitation. The AI does not say, "I am simplifying a situation that resists simplification." It says, "Here are the three key factors." The number three is itself a simplification — the situation may involve seven factors, or twelve, or an entangled web of factors that resist enumeration entirely. But three is the number that fits the output format, and the output format is optimized for actionability, and actionability requires the simplification that mindfulness would resist.
Consider sensitivity to operations. When AI handles routine monitoring — scanning logs for anomalies, reviewing dashboards for deviations, processing the steady stream of operational data that constitutes the heartbeat of a complex system — the human practitioners' engagement with those operations diminishes. Not immediately. Not dramatically. But incrementally, in the way that any capacity atrophies when it is not exercised. The engineer who used to read the logs herself, who had developed the embodied intuition for when something in the pattern felt wrong even though nothing in the data flagged an alert, now reviews the AI's summary of the logs. The summary is accurate. The flagged anomalies are genuine. But the unflagged anomaly — the one that registers not as a data point but as a feeling, the kind of knowledge that Segal's senior engineer described as feeling a codebase "the way a doctor feels a pulse" — is precisely the kind of signal that AI monitoring cannot detect, because it exists only in the gap between what the data shows and what the experienced practitioner knows the data should show.
The AI safety community has recognized this threat with unusual clarity. On forums dedicated to the long-term risks of artificial intelligence, researchers have argued explicitly that AI development organizations should adopt the high-reliability practices that Weick and Sutcliffe identified. The argument is straightforward: if organizations developing transformative AI operate without the mindfulness that aircraft carriers and nuclear power plants require, the consequences of their failures will be correspondingly catastrophic. The argument has gained traction. Anthropic, the company that built Claude, has published scaling policies that reflect at least some HRO principles — the preoccupation with what could go wrong, the commitment to safety research as a distinct organizational priority, the deference to the researchers closest to the frontier rather than the executives closest to the revenue.
But the application of HRO principles to AI development is only the most visible instance of a much broader organizational challenge. Every organization that adopts AI tools — not just the organizations that build them — faces the same mindfulness pressure. The school that uses AI to grade student essays. The hospital that uses AI to triage patient records. The law firm that uses AI to review contracts. Each of these organizations is delegating operational attention to a system that is extraordinarily capable at detecting the expected and extraordinarily poor at detecting the unexpected — at noticing the anomaly that does not fit any established pattern, the weak signal that is visible only to the practitioner who has spent years developing the embodied knowledge to see it.
Segal tells the story of his senior engineer in Trivandrum who realized that the implementation work consuming eighty percent of his career could be handled by a tool, and that the remaining twenty percent — judgment, architectural instinct, taste — was what actually mattered. In Weick and Sutcliffe's framework, that twenty percent is organizational mindfulness embodied in a single practitioner. The capacity to detect the weak signal, to resist the simplification, to defer to experience over output, to maintain the preoccupation with failure that success makes psychologically difficult. AI did not make this capacity less valuable. It made this capacity the only thing of value. But it also threatened the developmental pathway through which the capacity was built — the years of friction-rich implementation work during which the weak signals were first encountered, first misinterpreted, first learned from.
The most important finding in Weick and Sutcliffe's research on high-reliability organizations was not that reliable organizations avoid failure. It was that they manage failure — that they detect failures early, contain them before they cascade, and learn from them in ways that strengthen the organization's capacity to detect the next failure. The mechanism for all of this is sustained human attention — the kind of effortful, uncomfortable, often thankless attention that notices the thing that does not quite fit, that resists the organizational pressure to explain it away, that insists on investigating the anomaly even when the investigation slows the operation and irritates the managers.
AI does not destroy this attention. But it creates an organizational environment in which the attention is harder to maintain, less obviously necessary, and more easily justified in its absence. When the AI monitors the operations, why should the human attend to them? When the AI flags the anomalies, why should the practitioner develop the intuition to detect them independently? When the AI produces clean, confident, actionable interpretations of complex situations, why should the organization invest in the slow, expensive, friction-rich processes through which human practitioners develop the mindful engagement that high reliability demands?
The answer is that the AI will miss things. Not often. Not dramatically. But in the specific, subtle, consequential way that high-reliability research has documented across decades: the weak signal that does not match any pattern in the training data, the anomaly that is visible only to the practitioner who has lived inside the system long enough to feel its pulse, the departure from expectation that is too small to flag algorithmically but too important to miss humanly.
The organizations that attend to these signals — that build the structures, the practices, the cultural norms that maintain mindful human engagement alongside AI capability — will be the ones that achieve what Weick and Sutcliffe described as the hallmark of true reliability: not the absence of failure, but the presence of the organizational capacity to detect failure early, contain it quickly, and learn from it deeply. The organizations that do not will be efficient, fast, and confident — until the moment when the signal that no one was attending to becomes the failure that no one can contain.
---
On August 5, 1949, at approximately four in the afternoon, Wagner Dodge and fourteen other smokejumpers parachuted into Mann Gulch, a steep-sided canyon on the Missouri River in central Montana, where a fire guard who had been fighting the blaze alone joined them on the ground. They had been dispatched to fight what appeared from the air to be a routine wildfire — a Category IV fire, manageable by a crew of this size. By six o'clock, thirteen of the sixteen men were dead.
The fire had reversed direction. What had been burning on the south slope of the gulch jumped to the north slope — toward the men — and raced uphill at a speed investigators later estimated at roughly six hundred feet per minute. The crew had perhaps ninety seconds to escape. Dodge did something that none of his men had ever seen or trained for: he stopped running, lit a match, set fire to the grass at his feet, and lay down in the ashes of his own escape fire as the main blaze swept over him. He survived with minor burns. Two other men survived by reaching the ridge at the top of the gulch. Everyone else died on the hillside.
Weick returned to Mann Gulch repeatedly across his career. His 1993 analysis, "The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster," became the most influential paper in the history of organizational sensemaking theory — cited thousands of times, taught in business schools worldwide, adapted into case studies and leadership seminars and management retreats. Its power derived not from the drama of the fire, though the drama was considerable, but from a question that Weick posed with deceptive simplicity: Why did the men not drop their tools?
The smokejumpers who died were carrying heavy equipment — packs, saws, Pulaskis, canteens — that slowed them by an estimated twenty percent or more. The equipment was heavy, awkward, and in the context of a foot race against a fire moving at six hundred feet per minute, potentially fatal. Dodge shouted at his men to drop everything. Most of them did not. They ran uphill carrying fifty pounds of equipment they had been trained to carry, that they were accustomed to carrying, that defined who they were and what they did. They died carrying their tools.
The answer that Weick developed is not about panic, though panic was present. It is about identity. The tools were not merely instruments. They were the material expression of who these men understood themselves to be. A smokejumper without a Pulaski is not a lighter, faster version of a smokejumper. He is — in the sensemaking framework that the men had carried into the gulch along with their equipment — nobody. The tools did not just help them fight fires. The tools told them they were firefighters. And when Dodge told them to drop everything, what they heard was not a survival instruction. What they heard, at a level below conscious deliberation, was an instruction to abandon their identity in a situation where identity was the only coherent thing left.
The fire had made everything else incoherent. The routine assignment had become a death trap. The foreman's behavior — stopping, lighting a fire, lying down in the ashes — was so far outside any framework the men possessed that it could not be interpreted as a rational instruction. The sensemaking had collapsed. The situation had exceeded every available interpretive framework. The men were left with no coherent account of what was happening, no plausible interpretation that could guide their action, no narrative that made sense of a world in which the ground they stood on was trying to kill them and their commander was telling them to lie down and let it.
In the absence of sensemaking, identity is the last structure standing. When you do not know what is happening, you fall back on who you are. And who these men were was defined by the tools on their backs. Dropping the tools meant abandoning the final source of coherent identity in a situation that had stripped away every other.
Weick drew the parallel to organizational change explicitly: professionals in the grip of technological transformation face a structurally analogous situation. The tools they have spent years mastering — the programming languages, the domain expertise, the workflows and methodologies that define their professional identity — are suddenly declared unnecessary. Drop your tools, the market says. The fire is moving. You must be lighter, faster, more adaptable.
And the professionals, like the smokejumpers, often cannot comply. Not because they lack intelligence. Not because they fail to understand the situation. But because the tools are not merely tools. They are the means by which these people understand themselves, their value, their place in the organizational ecosystem. Dropping them is not adaptation. It is self-erasure — the abandonment of the identity that made the world legible.
Segal captures this dynamic in his account of the elegists — the senior professionals who, in the winter of 2025-2026, mourned something they could not quite articulate. A software architect told him that he felt like "a master calligrapher watching the printing press arrive." The architect did not dispute AI's efficiency. He said, simply, that something beautiful was being lost, and that the people celebrating the gain were not equipped to see the loss. What he was describing, in Weick's terms, was the loss of identity-constituting tools. The deep knowledge of systems architecture that he had built over twenty-five years — the embodied intuition that let him feel a codebase the way a doctor feels a pulse — was the tool he could not drop. Not because it was objectively irreplaceable, but because it was subjectively constitutive. It was who he was.
The elegists are the smokejumpers of the AI transition: running uphill with heavy tools, aware at some level that the tools are slowing them down, unable to drop them because dropping them means becoming no one in particular.
But Weick's analysis of Mann Gulch did not stop at the diagnosis of why the men failed. It also asked what Dodge did differently — what enabled him to improvise in a situation where everyone else's sensemaking had collapsed. The answer was structural, not personal. Dodge was not braver than his men, or smarter, or more experienced in any way that would have predicted his specific improvisation. The escape fire was not a technique he had learned. It was an invention, produced in real time under conditions of extreme pressure, by a mind that had managed to maintain enough interpretive flexibility to construct a new framework when the old one failed.
The capacity that Dodge demonstrated — and that his men lacked — was what Weick called "bricolage," borrowing the term from the anthropologist Claude Lévi-Strauss: the ability to construct new solutions from whatever materials are at hand, without a predetermined plan, in response to a situation that the existing plans cannot accommodate. Bricolage is not skill in the conventional sense. It is the meta-skill of being able to abandon existing frameworks and construct new ones when the situation demands it.
The AI transition demands bricolage on a civilizational scale. The existing frameworks for understanding professional identity — you are what you can do, your value is your expertise, your career is defined by the skills you have accumulated — are burning. The fire has reversed direction. And the instruction — drop your tools, adopt new ones, redefine your value in terms that the old frameworks cannot accommodate — is as disorienting as Dodge's instruction to lie down in the ashes.
Who survives Mann Gulch moments? Weick identified three factors. First, the capacity to maintain what he called "attitude of wisdom" — the simultaneous confidence that one knows enough to act and the humility that one may be wrong. The attitude of wisdom is not a compromise between confidence and doubt. It is the active maintenance of both, the refusal to collapse into either certainty (which produces the rigidity that killed the smokejumpers) or paralysis (which produces the inaction that would have killed Dodge if he had simply stood and waited).
Second, the capacity for what Weick called "respectful interaction" — the quality of communication among team members that allows alternative interpretations to surface, be heard, and be incorporated into collective sensemaking even under extreme pressure. The Mann Gulch crew's communication broke down at the moment it was most needed. Dodge's instructions were not heard, not understood, or not believed — and the crew had no prior experience of the kind of respectful interaction that would have allowed them to make sense of an instruction that contradicted everything they knew.
Third, the capacity for improvisation itself — the bricolage, the willingness to work with whatever is available, to abandon the plan and construct something new from the rubble of the plan's failure. This capacity, Weick argued, is not a personality trait. It is an organizational achievement — produced by practices that value flexibility over consistency, that reward improvisation over compliance, that build the muscle of adaptive sensemaking through repeated exposure to situations that the existing frameworks cannot fully accommodate.
The orange pill moment that Segal describes — the recognition that something genuinely new has arrived, that the old frameworks are inadequate, that one cannot unsee what one has seen — is, in Weick's terms, a moment of sensemaking collapse and reconstruction. The old interpretive frameworks have failed. The professional identity that was built on implementation skill, on deep specialism, on the ability to do the difficult technical thing, has been rendered incoherent by a tool that does the difficult technical thing better, faster, and cheaper. The ground is burning.
The question is whether the professionals caught in the fire can do what Dodge did: stop running, abandon the tools that are slowing them down, and improvise a new framework — a new understanding of professional identity, a new definition of value, a new way of being in the world that is compatible with the reality that the old frameworks could not accommodate.
Some can. Segal's senior engineer in Trivandrum, who spent two days oscillating between excitement and terror before arriving at the recognition that the twenty percent of his work that remained — judgment, instinct, taste — was the part that actually mattered, performed a version of Dodge's improvisation. He dropped the tools of implementation and discovered, beneath them, a capacity for architectural judgment that the implementation work had been simultaneously building and concealing. He lay down in the ashes of his old identity and found that he survived — lighter, disoriented, but alive.
Others cannot. The framework knitters of Nottinghamshire. The monks who copied manuscripts. The bards who held the *Iliad* in their skulls. The professionals who respond to the AI transition with denial, defiance, or retreat to domains where the fire has not yet arrived. They are running uphill with their tools, and the fire is faster than they are, and the tools will not help them when the fire catches up.
Weick's analysis does not judge these people. It does not blame them for failing to drop their tools, because it understands that the tools are not merely tools. It understands that identity is the last structure standing when sensemaking collapses, and that asking people to abandon their identity under duress is asking something that no amount of training, no amount of rational analysis, no amount of advance warning fully prepares a person to do. The smokejumpers who died at Mann Gulch were not stupid, cowardly, or inflexible. They were human beings in a situation that had exceeded their capacity for sensemaking, holding onto the only thing that still made sense.
The organizations that help their people through the AI transition will be the ones that build the conditions for Dodge's improvisation: the attitude of wisdom that holds confidence and doubt simultaneously, the respectful interaction that allows new interpretations to surface under pressure, and the organizational practices that develop the improvisational capacity to construct new frameworks when the old ones are burning. These conditions cannot be mandated. They cannot be installed by executive order or training deck. They can only be cultivated, slowly and deliberately, through the sustained organizational investment in the kind of sensemaking that the fire makes most difficult and most necessary.
The tools are burning. The question is not whether to drop them — the fire will make that decision for those who do not make it themselves. The question is what lies beneath the tools, once they are gone. And whether the organizations and the individuals and the societies navigating this transition have built enough interpretive capacity — enough wisdom, enough respect, enough improvisational muscle — to construct something new from the ashes.
In 1956, the British cyberneticist W. Ross Ashby formulated a principle so simple it reads like a tautology and so consequential it has shaped every subsequent theory of organizational adaptation. He called it the Law of Requisite Variety, and it states: only variety can destroy variety (organizational theorists usually soften the verb to "absorb"). A system that must regulate its environment — that must detect threats, respond to changes, and maintain itself against disruption — can do so only if it contains at least as much internal diversity as the environment presents. A thermostat with two settings cannot regulate a room with five temperature zones. A military with one strategy cannot defeat an adversary with three. An organization with a single interpretive framework cannot navigate a world that demands four.
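Stated formally (a minimal sketch, following the information-theoretic form Ashby develops in *An Introduction to Cybernetics*), the law puts a floor under the variety a regulator must leave uncontrolled:

$$H(O) \geq H(D) - H(R)$$

where $H(D)$ is the variety, measured as entropy, of the disturbances the environment can present; $H(R)$ is the variety of responses the regulator can deploy; and $H(O)$ is the variety remaining in the outcomes. The arithmetic is unforgiving. An environment capable of sixteen distinct disturbances, met by a regulator holding only four distinct responses, leaves at least four distinguishable outcomes beyond control ($\log_2 16 - \log_2 4 = 2$ bits). The only way to lower the floor is to raise $H(R)$: more distinct responses, more interpretive frameworks, more variety.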
The law is mathematical in its precision and biological in its implications. Ecological systems survive not because every organism is well-adapted to average conditions but because the population contains enough variation that some organisms are adapted to conditions that have not yet arrived. The species that has optimized for the current environment — that has reduced its internal variety in the name of efficiency — is the species that collapses when the environment shifts. The species that maintains excess variety — genetic variation that serves no purpose under current conditions, behavioral strategies that are suboptimal today — is the species that survives the shift, because somewhere in its population, the variation that the new environment demands already exists.
Organizations follow the same logic. The firm that maintains diverse perspectives, multiple methodologies, competing interpretive frameworks, and the creative tension that diversity produces is the firm that can respond when the market shifts in a direction that no single framework predicted. The firm that has optimized for efficiency — that has converged on a single method, a single tool, a single interpretive approach — is the firm that is perfectly adapted to yesterday's environment and fatally vulnerable to tomorrow's.
Artificial intelligence is reducing organizational variety with a thoroughness and a speed that no previous technology has achieved. The reduction operates through three mechanisms, each reinforcing the others, each invisible from inside the system it is reshaping.
The first mechanism is tool homogeneity. When every practitioner in an organization uses the same AI system, the system's patterns become the organization's patterns. This is not a metaphor. Large language models have specific tendencies — particular ways of structuring arguments, particular assumptions about what counts as evidence, particular aesthetic preferences in how information is organized and presented. These tendencies are not bugs. They are features of the training process, artifacts of the data and the architecture and the optimization objectives that shaped the model. They are also invisible to the users who interact with the model daily, in the same way that the grammar of your native language is invisible to you — so deeply embedded in the medium of communication that it shapes thought without announcing itself.
When a marketing team uses Claude to analyze competitive positioning, the analysis arrives in a particular structure: clear categories, weighted factors, actionable recommendations. When the engineering team uses Claude to evaluate technical architectures, the evaluation arrives in a structurally similar format. When the strategy team uses Claude to assess market opportunities, the assessment follows the same organizational logic. The content differs. The structure converges. And the convergence means that the organization's interpretive diversity — the different ways of seeing that different functions and different disciplines bring to the same situation — is being channeled through a single structural template.
The marketing team's analysis used to look different from the engineering team's analysis, not just in content but in form. The marketing analysis was narrative, qualitative, organized around customer stories and competitive positioning. The engineering analysis was structural, quantitative, organized around system constraints and performance metrics. The strategy analysis was spatial, visual, organized around market maps and positioning diagrams. Each format reflected a different way of thinking about the same problem, and the collision of those different thinking modes — in the meeting where all three analyses were presented and debated — was where organizational variety produced its value. The collision forced each team to confront interpretive frameworks that differed from their own, to defend their perspectives against challenge, and to integrate insights that their own framework could not have generated.
When all three teams use the same AI tool, the collision softens. The analyses converge in structure even when they diverge in content. The meeting becomes a comparison of similar-looking outputs rather than a negotiation among genuinely different ways of seeing. The variety that Ashby's Law says the organization needs to match its environmental complexity has been reduced — not eliminated, but reduced enough that the organization's capacity to detect and respond to the unexpected diminishes.
The second mechanism is skill homogeneity. Segal celebrates the democratization of capability — the expansion of who gets to build — as one of the most morally significant features of the AI transition. The celebration is warranted. When a backend engineer can build a user interface, when a designer can write features, when a non-technical founder can prototype a product, the barriers that once restricted building to a credentialed few have been lowered in ways that expand human possibility.
But democratization has a shadow that the celebration does not fully illuminate. When everyone can do everything competently, the rare, deep, idiosyncratic expertise that provides organizational variety diminishes in relative value. The backend engineer who builds a user interface using Claude is competent. The frontend specialist who has spent a decade developing deep intuitions about user interaction — about the millisecond timing differences that make an interface feel responsive or sluggish, about the spatial relationships that guide the eye, about the psychological principles that determine whether a user feels empowered or confused — is expert. The competent output and the expert output may be indistinguishable to a manager reviewing deliverables. Both work. Both are functional. Both satisfy the specification.
But the expert's contribution contained variety that the competent output does not. The unexpected design choice that the specification did not anticipate. The counterintuitive interaction pattern that violates the conventional wisdom but works better for this specific user population. The solution that only someone with a decade of embodied knowledge could have conceived, because it draws on patterns that are not in any training data — patterns learned through the accumulated friction of thousands of small failures and the intuitions those failures deposited.
When the organization can no longer distinguish between competent and expert output — or when the distinction seems unimportant because competent is good enough — the economic incentive to maintain deep expertise erodes. The expert is expensive. The competent practitioner with an AI tool is cheap. The organization, optimizing for cost, converges on competence. And the variety that expertise provided — the anomalous perspective, the counterintuitive insight, the depth of understanding that produces genuinely novel solutions — is quietly eliminated from the organizational gene pool.
The third mechanism is interpretive homogeneity. When AI assists sensemaking across the organization, the interpretive frameworks that different people bring to ambiguous situations converge toward the frameworks that the AI provides. The convergence is subtle and pervasive. It operates not through explicit instruction but through the gradual reshaping of what counts as a good analysis, a strong argument, a compelling recommendation.
Before AI, the organization's interpretive diversity was maintained partly by the diversity of its members' backgrounds, training, and cognitive styles. The engineer thought in systems. The designer thought in experiences. The salesperson thought in relationships. Each brought a different interpretive lens to the same situation, and the collision of lenses produced the organizational variety that Ashby's Law requires.
With AI, each of these practitioners still brings their background to the conversation. But the conversation increasingly passes through a mediating layer — the AI tool — that applies its own interpretive framework to the input before returning the output. The engineer's systems thinking is processed through Claude's particular way of structuring systems analysis. The designer's experiential thinking is processed through Claude's particular way of articulating user experience. The mediation is not a distortion in any obvious sense. The outputs are good. But the outputs have been filtered through a single interpretive architecture, and the filtering reduces the diversity of the interpretive outputs that reach the organizational discussion.
The analogy to ecological monoculture is precise. A field planted with a single crop variety is maximally efficient under normal conditions. Every plant is optimized for the current soil, the current climate, the current pest environment. The yield is high. The management is simple. The costs are low.
When the conditions change — when a new pest arrives, when the climate shifts, when the soil chemistry alters — the monoculture collapses. Every plant is equally vulnerable, because every plant is genetically identical. The field that maintained a diverse planting — multiple varieties, some suboptimal under current conditions but adapted to conditions that might arise — survives the shift. The inefficient diversity was not waste. It was insurance.
Organizational variety is the same kind of insurance. The competing interpretive frameworks, the redundant capabilities, the deep specializations that seem excessive under current conditions — all of these represent organizational variety that the current situation does not require but that the next situation might. The organization that has reduced this variety in the name of AI-enabled efficiency is the organization that is perfectly adapted to the current environment and fatally vulnerable to the next.
The organizational response to the requisite variety problem is not to reject AI. The tool is too powerful and the competitive pressure too intense for rejection to be a viable strategy. The response is to build structures — deliberately, systematically, against the organizational grain — that maintain variety in the face of the homogenizing pressure that AI creates.
Protected spaces for deep expertise. Not as a sentimental gesture toward the past, but as a strategic investment in the organizational variety that the future will require. The senior frontend specialist whose decade of embodied knowledge seems redundant when Claude can generate competent interfaces is not a cost to be eliminated. She is a reservoir of organizational variety — a source of the counterintuitive insight, the anomalous perspective, the deep pattern recognition that the AI-augmented monoculture cannot produce.
Deliberate introduction of interpretive diversity. When every team uses the same AI tool, the organization must actively seek out interpretive frameworks that the tool does not provide. External advisors from different industries. Cross-functional rotations that expose practitioners to genuinely different ways of seeing. Structured exercises in which teams are required to generate alternative interpretations of the same situation using frameworks that the AI has not suggested.
Resistance to tool-mediated uniformity. Not resistance to the tool itself, but resistance to the organizational pressure to route every process through the same channel. Some analyses should be produced without AI assistance — not because the human-only analysis is better, but because it is different, and the difference is what maintains the organizational variety that Ashby's Law demands.
These structures will feel inefficient. They will feel like deliberate waste. In an environment where AI makes everything faster, cheaper, and more consistent, the insistence on maintaining slow, expensive, inconsistent processes will appear irrational.
But Ashby's Law is not a recommendation. It is a mathematical necessity. The organization that lacks sufficient variety to match its environmental complexity will fail to regulate that environment. The failure will not announce itself in advance. It will arrive as the novel challenge that the homogeneous organization cannot interpret, the unprecedented situation that the converged frameworks cannot accommodate, the shifted environment to which the monoculture is fatally unadapted.
The variety will either be maintained deliberately, at the cost of efficiency, or it will be rebuilt desperately, at the cost of survival. There is no third option. The mathematics does not negotiate.
---
Every framework is a bet. It is a bet that the world can be usefully understood through a particular set of concepts, a particular way of organizing attention, a particular account of what matters and what does not. The bet pays off when the framework enables wise action — when the people who adopt it navigate their situation more effectively than they would have without it. The bet fails when the framework forecloses the very understanding it was designed to enable — when the concepts become so familiar that they stop illuminating and start constraining, when the map is mistaken for the territory, when the interpretive tool becomes an interpretive cage.
Segal's *The Orange Pill* is a framework — one of the most ambitious attempts to make sense of the AI transition that the literature has produced. Intelligence as a river flowing for 13.8 billion years. Humans as beavers building dams in the current. AI as an amplifier that carries whatever signal it is given. Consciousness as a candle flickering in an unconscious universe. Friction not eliminated but ascending, relocating from mechanical to cognitive, from execution to judgment. Each of these concepts is a bet about how the world can be usefully understood. Each enables certain kinds of thinking and forecloses others. And each, examined through the lens of sensemaking theory, reveals both its power and its limitations with a specificity that the framework's own internal logic cannot provide.
Sensemaking theory offers seven properties against which any interpretive framework can be evaluated. The evaluation is not a judgment of truth or falsity. Sensemaking frameworks are not true or false. They are adequate or inadequate — adequate when they enable effective action under conditions of irreducible uncertainty, inadequate when they produce confidence without competence, clarity without understanding, movement without direction.
The first property: sensemaking is grounded in identity construction. A framework is adequate when it helps the people who adopt it understand who they are in the new landscape. The river-and-beaver framework succeeds here with unusual specificity. It offers a clear answer to the identity question that the AI transition has made urgent: you are not a god who controls the current, and you are not a swimmer who drowns in it. You are a builder — a creature whose value lies in the capacity to study the flow, identify leverage points, and construct structures that redirect the current toward life. The identity is specific enough to be actionable and capacious enough to accommodate the enormous range of people who find themselves navigating the transition.
The limitation is that the beaver identity may be more reassuring than the situation warrants. The beaver builds with the river. The river, in this metaphor, is intelligence — a natural force that has been flowing for billions of years. The naturalness of the metaphor implies that the current flow is continuous with the flow that preceded it, that AI is another channel in a river that has always been flowing, that the appropriate posture is stewardship rather than alarm. This may be true. It may also be a sensemaking construction that domesticates a genuinely unprecedented phenomenon by placing it within a familiar narrative of natural continuity. The metaphor's plausibility is precisely what makes it worth scrutinizing.
The second property: sensemaking is retrospective. A framework is adequate when it organizes past experience in ways that illuminate present action. The historical pattern that Segal traces — Socrates on writing, Gutenberg and the monks, the Luddites and the power loom, VisiCalc and the accountants — is retrospective sensemaking of high quality. It identifies a recurring structure (threshold, exhilaration, resistance, adaptation, expansion) that the historical evidence supports and that provides a template for interpreting the current moment.
The limitation is inherent in retrospection itself. Retrospective sensemaking can only work with outcomes that have already materialized. The historical pattern that Segal identifies is a pattern of transitions that succeeded — that eventually produced expansion, that eventually distributed gains broadly enough to validate the transition in hindsight. The transitions that did not succeed, the civilizations that collapsed under technological disruption rather than adapting to it, the communities that were destroyed rather than transformed, do not generate the compelling retrospective narratives that drive the pattern. The historical pattern is a survivorship narrative, and survivorship narratives systematically overestimate the probability of survival.
The third property: sensemaking is enactive. A framework is adequate when it produces actions whose consequences are compatible with the framework's predictions. Segal's framework predicts that AI amplifies whatever it is given — that the quality of the output depends on the quality of the input, that the tool rewards genuine thinking and punishes carelessness. This prediction is testable and, by all available evidence, largely correct. The organizations that bring disciplined judgment to their AI adoption produce better outcomes than those that do not. The framework enacts a world in which human quality matters — and the enacted world, so far, confirms the prediction.
The limitation is that enactment is self-confirming, as the earlier chapters of this book have argued at length. The prediction that human quality matters produces organizational behavior that emphasizes human quality, which produces outcomes that confirm the prediction. The alternative prediction — that AI will eventually render human quality irrelevant, that the tool will improve to the point where the input's quality is immaterial — has not been enacted, not tested, and therefore not disconfirmed. The enactment cycle confirms the framework's prediction without establishing that the prediction will hold as the technology evolves.
The fourth property: sensemaking is social. A framework is adequate when it enables coordination among people with different perspectives. Segal's framework succeeds here — perhaps more than he realizes. The dual vision that characterizes *The Orange Pill* — the simultaneous recognition that the tools are powerful and that the power is dangerous — provides a meeting ground for people who would otherwise be unable to coordinate. The triumphalist and the elegist cannot speak to each other directly; their frameworks are incompatible. But both can find themselves in Segal's framework, because the framework holds both truths simultaneously. It is a dam that creates a pool large enough for different species to inhabit.
The fifth property: sensemaking is ongoing. A framework is adequate when it accommodates new information and evolving situations. Here, the assessment must be provisional — because the ongoing-ness of sensemaking means that no framework can be evaluated as complete. Segal's framework is explicitly positioned as provisional, as a work in progress, as a set of interpretive tools that will require revision as the situation evolves. This humility is a strength. But the framework's institutional expression — the book, with its narrative arc and its resolved ending — creates a tension with its own provisionality. The book arrives at conclusions. Conclusions feel final. And finality is the enemy of ongoing sensemaking.
The sixth property: sensemaking is focused on and by extracted cues. A framework is adequate when it directs attention to the signals that matter. Segal's framework directs attention to several crucial cues: the speed of adoption as a measure of pent-up need, the imagination-to-artifact ratio as a measure of creative liberation, the ascending friction as a measure of where human value persists. Each of these cues is diagnostic — it reveals something about the situation that is not visible without the framework's guidance.
The cues that the framework does not direct attention to are equally important. The power dynamics of who captures the gains from AI adoption. The geopolitical implications of AI capability concentration. The environmental costs of the computational infrastructure. The epistemological consequences of a civilization that increasingly cannot distinguish between human-generated and machine-generated knowledge. These are not cues that the framework ignores — Segal gestures toward several of them. But they are not the cues that the framework amplifies, and what a framework amplifies determines what an organization acts on.
The seventh property: sensemaking is driven by plausibility rather than accuracy. This is the property that most directly illuminates both the power and the risk of Segal's framework. The framework is maximally plausible. The river metaphor is intuitive, the beaver metaphor is actionable, the amplifier metaphor is precise, and the historical pattern is compelling. The plausibility enables action — readers can close the book and begin navigating the AI transition with a set of concepts that make the situation intelligible.
But plausibility is not accuracy. The river metaphor may naturalize a phenomenon that is more political than natural. The beaver metaphor may overestimate the capacity of individual builders to redirect systemic forces. The amplifier metaphor may individualize responsibility for what is, in significant part, a structural and institutional problem. The historical pattern may impose a narrative of inevitable expansion on a situation whose outcome is genuinely uncertain.
These are not fatal flaws. Every framework has them. The question is not whether the framework is perfect — no framework is — but whether the people who adopt it hold it with the appropriate looseness. Whether they use it to act while remaining alert to the signals that the framework does not amplify. Whether they treat the map as a map — useful, provisional, subject to revision — rather than as the territory itself.
Weick argued that the quality of sensemaking depends not on the accuracy of the framework but on the quality of the attention that the framework enables. A framework that is slightly wrong but that directs attention to the right signals is more organizationally valuable than a framework that is precisely right but that directs attention nowhere in particular. The test is not truth. The test is: does the framework enable wise action under conditions of irreducible uncertainty?
By that standard, Segal's framework is among the best available. It holds the dual reality of power and danger. It provides concepts that enable action without promising certainty. It directs attention to signals — adoption speed, creative liberation, ascending friction, the quality of the questions people ask — that are genuinely diagnostic of the situation's trajectory.
And it is, by its own admission, a map. Not the territory. A construction — plausible, provisional, subject to revision. The honest sensemaker holds the framework loosely. Uses it to act. Watches what the action produces. Notices the cues that the framework did not predict. Revises. Acts again.
This is what sensemaking is. This is what organizations do when they do it well. And this is what the AI transition demands of every person, every organization, and every society that finds itself in a situation that no existing framework can fully accommodate — that demands, as all genuine disruptions demand, the construction of new interpretive tools from whatever materials are at hand, under conditions of extreme uncertainty, with the knowledge that the tools will need to be rebuilt as soon as the situation shifts again.
The frameworks will be imperfect. They will be provisional. They will require constant maintenance — the sticks repacked, the mud reapplied, the dam rebuilt when the river shifts course.
But the alternative to imperfect frameworks is not perfect frameworks. The alternative is no frameworks at all — the condition that Weick documented at Mann Gulch, at Tenerife, at Bristol, where sensemaking collapsed and people acted on momentum, or habit, or panic, because no interpretive structure remained to guide them.
Build the frameworks. Hold them loosely. Revise them constantly. And attend, always, to the weak signal that the framework does not explain — because that signal is either the noise that frameworks rightly filter out, or the harbinger of the shift that will require the next framework to be built.
The river does not wait for the framework to be finished. The current moves. The sensemaking continues. And the quality of what the organizations build — the dams, the prototypes, the policies, the cultures, the futures — depends less on the perfection of their understanding than on the mindfulness with which they hold that understanding and the courage with which they revise it when the river tells them they were wrong.
---
The gap that stayed with me was not a number or a timeline. It was the gap between what my team said in the meeting and what they did at their desks.
I noticed it in Trivandrum, during the training I describe in *The Orange Pill*. In the room, the engineers discussed the work carefully, debated approaches, raised objections, challenged each other's interpretations. At their desks, with Claude open, they moved fast — so fast that the careful discussion from fifteen minutes earlier was already irrelevant by the time they returned to it. The prototype had been built. The debate had been overtaken by the artifact. What the room had held open, the screen had closed.
Weick gave me the language for what I was watching. The room was sensemaking. The screen was enactment. And the speed of the enactment was outrunning the sensemaking — producing clarity before the ambiguity had done its work, generating artifacts before the interpretations that should have shaped them had been fully tested against each other.
This book's argument is the one I have found hardest to sit with in the entire Orange Pill Cycle. Harder than Han's diagnosis of the smooth. Harder than the Luddite history I had to confront honestly. Because Weick is not telling me that the tools are dangerous in some abstract philosophical sense. He is telling me that the specific way I use them — the way I celebrated using them, the thirty-day sprint, the twenty-fold multiplier, the exhilaration of watching an idea become a thing before the coffee gets cold — that specific way carries an organizational cost I was not accounting for.
The cost is premature clarity. The cost is self-confirming enactment. The cost is the meeting that never happens because the prototype already exists and the prototype's existence has made the meeting feel redundant. The cost is the alternative that was never built and therefore never generated the evidence that would have revealed the first prototype's limitations.
I think about the Pyrenees map constantly. My team marched through the Alps with it and survived. But they survived not because the map was right. They survived because the terrain pushed back — because the real world resisted the map's interpretation and forced continuous correction. The question I carry now is whether I am maintaining enough terrain in my organization for reality to push back. Whether the AI-enabled smoothness of our production process has eliminated enough friction that the map and the territory have become indistinguishable — and whether we will discover the difference only when we arrive at a destination that exists on the map but not in the world.
I do not have the answer. Weick would say that is the right condition to be in. Sensemaking does not arrive at answers. It arrives at frameworks that are adequate for the moment — plausible, provisional, held loosely, revised constantly. The frameworks I offered in *The Orange Pill* — the river, the beaver, the dam, the amplifier — are maps. Good maps, I believe. But maps of a territory that is changing faster than any map can track.
What Weick teaches is that the quality of the map matters less than the quality of attention I bring to the gap between the map and the ground. The willingness to notice when the landmarks do not match. The discipline to stop marching and reorient when the terrain contradicts the interpretation. The organizational courage to say, in the middle of a successful sprint, "Something does not feel right" — and to treat that feeling not as an obstacle to productivity but as the most valuable signal the organization possesses.
Build the frameworks. Hold them loosely. And keep watching the ground.
-- Edo Segal
Every organization navigating AI believes it is making rational decisions about adoption, deployment, and strategy. Karl Weick spent fifty years proving that organizations do not work that way. They act first and understand later — constructing meaning from ambiguity through debate, friction, and the collision of competing interpretations. AI compresses that interpretive process almost to nothing, producing prototypes before the arguments that should have shaped them can even form. This book applies Weick's sensemaking framework to the central question of the AI era: What happens when organizations can build faster than they can think?
Through the lens of Tenerife cockpits, Mann Gulch wildfires, and high-reliability organizations, these chapters reveal a danger that no efficiency metric captures — and an organizational discipline that no leader can afford to skip.

A reading-companion catalog of the 33 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Karl Weick — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →