By Edo Segal
The number that broke my confidence was not twenty. It was one.
One week. That is how long it took for my entire engineering team in Trivandrum to reorganize itself around a tool none of them had used before. By Friday, the old workflow was not just slower — it was unthinkable. Nobody voted on that. Nobody filed a change request. It simply happened, the way water finds a new channel when you remove a rock.
I celebrated it. I wrote about it. I stood on stages and described the transformation with the specific pride of someone who believes he is witnessing the future and helping to build it.
Then I read James March, and I realized I had described the phenomenon without understanding the mechanism. I had measured the output without examining the process by which the organization arrived there — a process that, March would argue, was not a decision at all but the accumulated residue of a thousand small adaptations that no one tracked, no one authorized, and no one evaluated at the systemic level.
March spent fifty years studying how organizations actually behave, as opposed to how they claim to behave. What he found was uncomfortable. Organizations do not make decisions the way their strategy documents suggest. They drift. Solutions find problems rather than the other way around. The most consequential changes happen without anyone choosing them. And the thing that looks like strategic brilliance in retrospect was, more often than not, a collision of timing, available resources, and whoever happened to be in the room.
This matters now — urgently — because AI is the most powerful exploitation technology ever built. It makes organizations spectacularly better at what they already do. The returns are immediate, measurable, and intoxicating. I have felt the intoxication. I described it in *The Orange Pill* as productive vertigo.
What March's framework reveals is the shadow side of that vertigo. Every hour spent exploiting is an hour not spent exploring. Every resource captured by the productivity machine is a resource unavailable for the foolish, unjustifiable, probably wasteful experiments from which genuinely new ideas emerge. And the organizations that optimize hardest — the ones with the best quarterly numbers, the most impressive dashboards, the tightest alignment — are the ones most likely to be destroyed when the environment shifts beneath them.
March called this the competency trap. He called the antidote the technology of foolishness. Both concepts are more relevant right now than at any point since he first articulated them.
This book is another lens. Use it to see what the dashboards cannot show you.
— Edo Segal ^ Opus 4.6
James G. March (1928–2018) was an American political scientist, sociologist, and organizational theorist whose work fundamentally reshaped the study of how organizations make decisions. Born in Cleveland, Ohio, he earned his PhD from Yale University and held faculty positions at Carnegie Mellon University and Stanford University, where he taught for over four decades. With Richard Cyert, he co-authored *A Behavioral Theory of the Firm* (1963), which challenged classical economic models of the firm as a rational, unified decision-maker. His 1991 paper "Exploration and Exploitation in Organizational Learning" became one of the most cited works in management science, introducing the framework for understanding how organizations balance the refinement of existing capabilities against the search for new ones. With Michael Cohen and Johan Olsen, he developed the "garbage can model" of organizational choice, describing decision-making as the collision of loosely coupled streams of problems, solutions, participants, and opportunities. March also wrote extensively on ambiguity, organizational foolishness, and the role of imagination in leadership, drawing on sources as varied as computational modeling and Cervantes' *Don Quixote*, which he taught at Stanford for years as a text on organizational life. He received numerous honors, including election to the National Academy of Sciences and the American Academy of Arts and Sciences. His influence spans political science, sociology, economics, management theory, and the emerging study of AI adoption in organizations.
Every organization that has ever existed has faced the same problem. It is not the problem of finding customers, or managing costs, or hiring talent, though all of these are real. The foundational problem is simpler and more treacherous: the organization must simultaneously get better at what it already does and discover what it should do next. These two activities compete for the same finite resources — time, attention, money, talent — and the competition between them is not fair. One of them almost always wins, and it is almost always the wrong one.
James March named these two activities in a 1991 paper that became one of the most cited works in the history of management science. He called them exploitation and exploration. The terms are precise and should not be confused with their colloquial meanings. Exploitation, in March's framework, is not predatory. It is the refinement and extension of existing competencies, technologies, and paradigms. It is doing what the organization already knows how to do, but doing it better: faster, cheaper, more reliably, at greater scale. Exploitation produces returns that are proximate, predictable, and measurable. When a manufacturing firm optimizes its assembly line, that is exploitation. When a software company ships the next incremental release, that is exploitation. When a sales team refines its pitch based on last quarter's conversion data, that is exploitation.
Exploration is the search for new alternatives. It is experimentation with unfamiliar technologies, entry into unknown markets, the pursuit of ideas that may produce no return whatsoever. Exploration is inherently wasteful: most experiments fail. Its returns are distant, uncertain, and frequently negative in the short term. When a pharmaceutical company funds basic research into a molecular pathway that may never yield a drug, that is exploration. When a technology firm assigns engineers to a project with no clear business case, that is exploration. When a leader asks a question to which no one in the organization knows the answer, that is exploration.
March's insight was not that organizations need both — that observation, on its own, approaches banality. His insight was that the two activities are structurally antagonistic. They compete for the same resources, and the competition is rigged. Exploitation wins, not because exploitation is more important, but because exploitation is more legible. Its returns can be measured on a quarterly earnings report. Its progress can be tracked on a dashboard. Its value can be demonstrated to a board of directors in the time it takes to advance a slide.
Exploration cannot compete on these terms. How do you measure the value of an experiment that has not yet produced results? How do you justify the cost of a project whose returns may not materialize for five years, if they materialize at all? How do you explain to shareholders that the organization's most important activity is the one that looks, from the outside, like waste?
The answer, in most organizations most of the time, is that you do not. The exploitation case makes itself. The exploration case must be argued, defended, and re-justified at every budget cycle. The structural asymmetry produces a predictable outcome: organizations drift toward exploitation and away from exploration, not through deliberate strategic choice but through the accumulated weight of a thousand individually reasonable decisions, each of which favors the near over the far, the certain over the uncertain, the measurable over the meaningful.
March demonstrated this drift through a computational model that has been replicated and extended by researchers for three decades. The model is elemental in its simplicity. An organization consists of individuals who hold beliefs about the world. The world has an objective reality. Individuals learn from the organization (socialization), and the organization learns from individuals (innovation). The question is what happens to the organization's beliefs over time.
The answer depends on the balance between exploration and exploitation. When individuals conform rapidly to the organization's existing beliefs — high exploitation — the organization converges quickly on a set of shared beliefs. But the beliefs it converges on are frequently wrong, or at least not as good as the beliefs it would have discovered through more exploration. The organization gets very good at a mediocre solution and stays there, unable to escape, because everyone now believes the same thing and no one is searching for alternatives.
When individuals maintain their idiosyncratic beliefs longer — high exploration — the organization converges more slowly, but converges on better beliefs. The slow learners, the misfits, the people who refuse to get with the program, turn out to be the organization's most valuable long-term asset, not because they are right (they usually are not), but because their persistence in being different prevents the organization from locking in prematurely on a solution that is good enough but not excellent.
The organizational implications are counterintuitive and uncomfortable. The efficient organization — the one that socializes quickly, aligns rapidly, eliminates deviance — is the organization most likely to become trapped in a local optimum. The messy organization — the one that tolerates disagreement, moves slowly toward consensus, allows eccentrics to persist — is the organization most likely to find the global optimum. Efficiency and effectiveness are not the same thing, and in the domain of organizational learning, they frequently oppose each other.
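The dynamic is easy to see in a toy simulation. The sketch below is a stripped-down rendering of March's 1991 mutual-learning model, not a faithful reproduction: the population size, the learning rule for the organizational code, and every parameter value are simplifying assumptions chosen only to make the qualitative pattern visible.

```python
import random

# A minimal sketch of March's 1991 mutual-learning model (simplified; the
# parameters and the code-learning rule are illustrative assumptions).
# Reality is a vector of m binary dimensions. Each individual and the
# organizational "code" hold beliefs in {-1, 0, +1} about each dimension.

def run(m=30, n=50, p_socialize=0.5, periods=200, seed=0):
    rng = random.Random(seed)
    reality = [rng.choice([-1, 1]) for _ in range(m)]
    code = [0] * m                                    # organization starts agnostic
    people = [[rng.choice([-1, 0, 1]) for _ in range(m)] for _ in range(n)]

    def score(beliefs):
        return sum(1 for b, r in zip(beliefs, reality) if b == r)

    for _ in range(periods):
        # Socialization: individuals drift toward the code's beliefs.
        for person in people:
            for d in range(m):
                if code[d] != 0 and rng.random() < p_socialize:
                    person[d] = code[d]
        # Code learning: the code copies the majority view of individuals
        # who currently outperform it (a crude stand-in for March's rule).
        code_score = score(code)
        superior = [p for p in people if score(p) > code_score]
        if superior:
            for d in range(m):
                vote = sum(p[d] for p in superior)
                if vote != 0:
                    code[d] = 1 if vote > 0 else -1

    return score(code) / m                            # fraction of reality captured

# Fast socialization (heavy exploitation) tends to lock in early on poorer
# beliefs; slow socialization (more exploration) tends to end up closer to reality.
for p in (0.9, 0.1):
    acc = sum(run(p_socialize=p, seed=s) for s in range(20)) / 20
    print(f"p_socialize={p}: mean code accuracy = {acc:.2f}")
```

The sketch preserves the structural point rather than the numbers: the slow learners are valuable not because they are right, but because their persistence keeps knowledge in the system long enough for the code to absorb it.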
This framework was developed to explain how organizations learn in general. It has now become the primary lens through which scholars examine how organizations learn to use artificial intelligence in particular, and the reasons it fits so well are themselves diagnostic.
Artificial intelligence, as it arrived in the organizational landscape of 2025 and 2026, is the most powerful exploitation technology ever built. The claim requires specificity to be understood. AI does not merely improve existing processes. It improves them so dramatically, so visibly, so measurably, that the entire reward structure of the organization reorients around the improvement.
Consider what Edo Segal describes in *The Orange Pill*: twenty engineers in Trivandrum, India, each achieving productivity gains that he estimates at twenty-fold within a single week. The gains were real. They were measurable. They showed up in shipped features, in compressed timelines, in the visible, tangible evidence that the tool worked. A feature that had been on the backlog for four months was built in three days. An engineer who had never written frontend code built a complete user-facing feature in two days. The evidence was overwhelming, and it was overwhelmingly in favor of exploitation: use this tool, do what you already do, but do it twenty times faster.
No one in that room was exploring. Exploration would have meant asking whether the features on the backlog were the right features. Whether the product architecture that had been designed before the tool existed still made sense in a world where the tool existed. Whether the organization's entire conception of what it was building and for whom needed to be rethought from the ground up. Those are exploration questions, and they do not produce results in a week. They do not produce results that can be demonstrated on a Friday afternoon. They produce discomfort, ambiguity, and the specific organizational vertigo of realizing that the ground has shifted and the map no longer matches the terrain.
The exploitation gains were celebrated, as they should have been. The exploration questions were deferred, as they always are. Not because anyone decided they were unimportant. Because the exploitation results were right there, visible and extraordinary, and the exploration questions were abstract, uncomfortable, and had no timeline for resolution.
This is the structural trap that March identified thirty-five years ago, now operating at a scale and speed that his original model could not have anticipated. The exploitation returns of AI are so large that they do not merely crowd out exploration — they make exploration feel irresponsible. Why would you assign engineers to think about whether your product architecture needs rethinking when those same engineers could be shipping features at twenty times their previous rate? The question answers itself, and the answer is always exploitation.
March warned that this dynamic produces what he and Daniel Levinthal would later call "the myopia of learning" — the structural tendency of adaptive systems to optimize for the near at the expense of the far. But the myopia has a second, more insidious feature. The organization that exploits successfully becomes more committed to exploitation with each cycle. Success reinforces the strategy that produced it. The engineers who shipped twenty times faster are promoted, rewarded, held up as examples. The organizational culture calcifies around the exploitation model. The exploration muscle, unused, atrophies.
The drift is self-reinforcing and, past a certain point, self-concealing. The organization stops noticing that it has stopped exploring, because the exploitation results are so good that the absence of exploration does not register as a loss. The features ship. The quarterly numbers improve. The board is satisfied. The organization is getting better at what it does.
What it is not doing is asking whether what it does is still the right thing to do.
This distinction — between getting better at the current game and discovering whether the game itself has changed — is the fault line along which organizations will fracture in the age of AI. The fracture will not be visible in the quarterly numbers. It will be visible only in retrospect, when the organization discovers that the game changed while it was busy optimizing its play, and that the optimization, no matter how brilliant, was directed at a problem that no longer existed.
March's 1991 paper ends with a characteristic refusal of easy resolution. He does not prescribe an optimal balance between exploration and exploitation, because no such balance exists in the abstract. The optimal balance depends on the rate of environmental change, the cost of exploration, the discount rate applied to future returns, and a dozen other variables that shift constantly and unpredictably. What March offers instead is a structural observation: the balance is hard to maintain, the drift toward exploitation is relentless, and the organizations that fail to resist the drift will eventually be destroyed by it — not quickly, not visibly, but with the slow certainty of a river undermining a foundation that no one thought to inspect.
The foundation is what this book is about. AI is the river. And the inspection is long overdue.
---
In 1993, James March and Daniel Levinthal published a paper that extended March's earlier work on exploration and exploitation into a more troubling direction. The paper was titled "The Myopia of Learning," and its central argument was deceptively straightforward: learning systems are structurally biased toward the near, the certain, and the measurable. This bias is not a defect in particular organizations or particular leaders. It is a feature of learning itself, embedded in the architecture of how adaptive systems process feedback.
The myopia operates through three mechanisms, each of which March and Levinthal described with the precision of diagnosticians identifying the pathways of a chronic disease.
The first mechanism is temporal. Learning systems discount the future. An exploitation strategy that produces measurable returns this quarter will always outcompete an exploration strategy that might produce larger returns in three years, because the learning system updates its beliefs based on observed outcomes, and the outcomes of exploitation are observed first. By the time the exploration strategy would have matured, the learning system has already committed to exploitation — has already allocated resources, promoted the exploiters, restructured around the exploitation model. The future returns of exploration are not merely uncertain; they are systematically invisible to a system that learns from what has already happened.
The second mechanism is spatial. Learning systems favor the local over the distant. An exploitation strategy that improves performance in the organization's current market, using the organization's current technology, serving the organization's current customers, will always outcompete an exploration strategy that might open entirely new markets, using unfamiliar technology, serving customers the organization has never met. The local improvements are visible and attributable. The distant possibilities are speculative and unattributable. The learning system cannot evaluate what it cannot observe, and it cannot observe what has not yet been tried.
The third mechanism is failure aversion. Learning systems overweight the observed returns of current strategies and underweight the unobserved potential of alternatives they have scarcely tried. When the organization exploits successfully, the success is attributed to the strategy. When exploration fails — as most exploration does — the failure is attributed to exploration itself. The result is a ratchet: successes in exploitation encourage more exploitation, failures in exploration discourage further exploration, and the system converges on a pure exploitation strategy that may be locally optimal but is globally suboptimal.
March and Levinthal were not describing pathology. They were describing the normal operation of a well-functioning learning system. The myopia is rational. At each individual decision point, the exploitation choice is the better-supported choice. The evidence favors it. The returns are visible. The risk is lower. No individual decision-maker is making an error. The error is emergent — it appears only at the system level, over a time horizon longer than any individual decision, and it is visible only to an observer who can see the entire trajectory rather than any single step.
This structural myopia explains one of the most puzzling features of organizational life: why intelligent organizations staffed by intelligent people make collectively unintelligent choices. The answer is not that the people are stupid. The answer is that the learning system through which they coordinate their intelligence has a systematic bias that no individual can correct from within, because the bias is a property of the system, not of its components.
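The ratchet can be made concrete in a few lines of code. The sketch below is not March and Levinthal's model; it is a minimal illustration of the failure-aversion mechanism, with payoffs, probabilities, and a greedy choice rule chosen purely for legibility. Exploration has the higher expected return, yet a learner that simply follows its own experience usually abandons it.

```python
import random

# A minimal sketch of the failure-aversion ratchet. Two activities:
# exploitation pays a steady small return; exploration pays more on average
# but fails most of the time. A learner that always picks the activity with
# the better *experienced* average usually locks onto exploitation.
# All numbers here are illustrative assumptions.

def trial(periods=500, seed=None):
    rng = random.Random(seed)

    def exploit():
        return 1.0                                     # certain, modest return

    def explore():
        return 10.0 if rng.random() < 0.15 else 0.0    # rare big win, E[x] = 1.5

    totals = {"exploit": 0.0, "explore": 0.0}
    counts = {"exploit": 1, "explore": 1}              # one forced try of each
    totals["exploit"] += exploit()
    totals["explore"] += explore()

    choices = []
    for _ in range(periods):
        # Greedy adaptive rule: practice whatever has performed better so far.
        pick = max(totals, key=lambda k: totals[k] / counts[k])
        payoff = exploit() if pick == "exploit" else explore()
        totals[pick] += payoff
        counts[pick] += 1
        choices.append(pick)
    return choices.count("explore") / periods

rates = [trial(seed=s) for s in range(200)]
print(f"mean share of periods spent exploring: {sum(rates) / len(rates):.2f}")
print(f"runs that effectively abandoned exploration: "
      f"{sum(r < 0.05 for r in rates) / len(rates):.0%}")
```

Nothing in the sketch is irrational. Each choice follows the evidence available at the moment it is made, which is exactly the point: the error is emergent, visible only across the whole trajectory.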
Artificial intelligence intensifies every mechanism of this myopia. The intensification is not subtle. It operates with the kind of overwhelming force that makes the previous equilibrium — the precarious balance between exploitation and exploration that organizations maintained through cultural norms, slack resources, and managerial intuition — suddenly and visibly inadequate.
Consider the temporal mechanism first. AI compresses the feedback loop between action and observed outcome from months to minutes. A developer working with Claude Code describes a feature, receives working code, tests it, iterates, ships — all within a single session. The exploitation returns are not merely proximate; they are immediate. The learning system does not have to wait for a quarterly report to observe the outcome. The outcome is visible before the developer has finished her coffee.
This immediacy is intoxicating, and the intoxication is the diagnostic clue. When Edo Segal describes the sensation of working with Claude — "the exhilaration was genuine, physical, the kind that makes you want to call someone and tell them what just happened" — he is describing the phenomenology of a learning system receiving an exceptionally strong positive signal. The signal is real. The productivity gain is real. And the learning system, doing exactly what learning systems do, updates its beliefs: this works. Do more of this. Allocate more resources here. The temporal myopia operates at the speed of human emotion, which is to say instantaneously.
Meanwhile, the returns to exploration remain as distant and uncertain as ever. Asking whether the product architecture needs rethinking does not produce a dopamine hit. Wondering whether the organization's competitive position will survive the next phase of AI development does not ship a feature. Sitting with the uncomfortable question of whether the team's newfound twenty-fold productivity is being directed at the right problems does not generate a metric that anyone can put on a slide.
The spatial myopia is equally intensified. AI tools are general-purpose in theory but local in practice. An organization adopts AI to improve its current processes — code generation, documentation, analysis — and the improvements are spectacular. The improvements are so spectacular that they consume the organization's attention entirely. The possibility that AI could be used not to improve existing processes but to discover entirely new ones — to explore unfamiliar markets, to experiment with business models that do not yet exist, to ask questions that the current organizational structure cannot even formulate — recedes from view. The spatial myopia narrows the field to what is directly in front of the organization, and what is directly in front of the organization is an exploitation opportunity of historic proportions.
The Ye and Ranganathan study that Segal analyzes in *The Orange Pill* provides empirical evidence of this narrowing. Workers who adopted AI tools "expanded into areas that had previously been someone else's domain," but the expansion was horizontal, not vertical. Designers started writing code. Engineers started doing documentation. The boundaries between existing roles blurred. But no one started doing work that had not previously existed in the organization at all. The AI intensified exploitation across a wider domain, but it did not catalyze exploration of genuinely new territories. The spatial myopia held. The organization got better at more of what it already did. It did not discover what it should do instead.
The failure-aversion mechanism is perhaps the most dangerous, because it operates at the level of organizational culture and is therefore the hardest to observe and the hardest to correct. When AI makes exploitation reliable, the organization's tolerance for the unreliability of exploration drops. Why fund an uncertain experiment when the exploitation returns are so high? Why tolerate the messiness of genuine inquiry when the AI can generate clean, confident, well-structured output on demand?
March noted in a 2006 paper on adaptive intelligence that "the notion that magically, through learning, we will end up with an optimum set of rules, I think is fanciful." The observation applies with particular force to AI-augmented organizations. The learning system converges on the exploitation strategy not because it is optimal but because it is the first strategy that produces strong positive signals. The convergence is premature. The premature convergence is invisible. And the invisibility is what makes it structural rather than merely circumstantial.
The Berkeley data documents the behavioral manifestation of this convergence: work intensification, task seepage into protected spaces, the colonization of pauses by AI-assisted productivity. These are not symptoms of bad management. They are symptoms of a learning system operating exactly as March predicted it would — favoring the near, the certain, and the measurable, and doing so with an efficiency that leaves no room for the distant, the uncertain, and the unmeasurable.
What makes AI different from previous technologies that intensified exploitation is the sheer magnitude of the asymmetry it creates between exploitation returns and exploration returns. The power loom made textile production vastly more productive than hand-weaving. The spreadsheet made financial analysis vastly more productive than manual calculation. But neither technology made the exploitation returns so overwhelming that the very concept of exploration felt irresponsible. The power loom did not produce the sensation that every moment spent not weaving was a moment wasted. The spreadsheet did not produce the compulsion to calculate during lunch breaks.
AI does. The productive addiction that Segal describes — "I could not stop" — and that the Berkeley researchers documented — "task seepage" into every available moment — is not a personal failing. It is the predictable behavioral outcome of a learning system receiving the strongest positive exploitation signal in the history of organizational technology. The system is not malfunctioning. It is functioning exactly as designed. The malfunction is that the design does not include a mechanism for protecting exploration against the overwhelming returns to exploitation.
March spent his career arguing that organizations need such a mechanism. He called it, variously, "slack," "organizational foolishness," "tolerance for ambiguity." The names differ; the function is the same: a structural feature of the organization that protects exploration against the relentless, rational, self-reinforcing drift toward exploitation.
AI has not eliminated the need for this mechanism. It has made the need for it desperate. The myopia that March and Levinthal described as a chronic condition of all learning systems has, in the age of AI, become acute. The drift toward exploitation is no longer gradual. It is sudden, overwhelming, and reinforced at every level of the organization — from the individual developer who cannot stop building, to the team that fills every freed hour with more features, to the organization that redirects every exploration budget toward exploitation because the exploitation returns make exploration look wasteful.
The treatment for myopia has always been corrective lenses — structural interventions that adjust the learning system's field of vision to include the distant, the uncertain, and the unmeasurable alongside the near, the certain, and the measurable. In the age of AI, those lenses are not optional. They are the difference between an organization that adapts and one that optimizes itself into obsolescence — brilliantly, efficiently, and with excellent quarterly numbers right up until the moment the game changes and the optimization is revealed to have been directed at a problem that no longer exists.
The lenses are what this book attempts to grind. But first, the mechanism through which AI adoption actually occurs in organizations must be examined in detail — not as a rational strategic choice, but as the unmanaged, incremental, individually rational, and collectively unexamined process that March's framework predicts.
---
The most important organizational decisions are often not decisions at all. They are the accumulated residue of a thousand local adjustments, each too small to warrant executive attention, each individually rational, and collectively transformative in ways that no one intended and no one examined until the transformation was complete.
James March understood this. His career-long engagement with organizational decision-making led him not toward the grand strategic choices that business school case studies celebrate but toward the far more consequential process by which organizations change without choosing to change. In his work with Richard Cyert on *A Behavioral Theory of the Firm*, March described how organizational behavior emerges from routines — standard operating procedures that encode past learning and reproduce it without requiring fresh analysis. Routines are efficient precisely because they do not require decisions. They execute automatically, drawing on accumulated experience, freeing cognitive resources for the exceptional cases that routines cannot handle.
But routines are also conservative. They perpetuate whatever the organization learned to do, whether or not the original learning still applies. And they change not through deliberate redesign but through incremental drift — small adjustments made by practitioners in response to local conditions, each adjustment modifying the routine slightly, the modifications accumulating over time into a routine that bears little resemblance to its original form. No one decided to change the routine. The routine changed itself, through the accumulated weight of adaptations that no one tracked.
This is the mechanism through which organizations adopt AI. Not through strategic evaluation. Not through pilot programs that produce recommendations that produce decisions that produce implementation plans. Those processes exist in organizational charts and consulting presentations. In the actual life of actual organizations, AI adoption follows the path that March's framework predicts: incremental, unmanaged, driven by local adaptation rather than central coordination, and irreversible by the time anyone recognizes what has happened.
The mechanism operates in stages, and each stage has the property that March identified as central to organizational change: it is individually rational, locally optimal, and systemically unexamined.
Stage one is sanctioned experimentation. An organization provides AI tools to a subset of its workforce. The provision is deliberate, bounded, and cautious. There are guidelines. There are use-case restrictions. There are review processes. Management is exploring — testing the tool's capabilities, evaluating its fit with existing workflows, assessing risk. This stage resembles what March would recognize as genuine exploration: an investment in an uncertain technology whose returns are unknown, undertaken with the explicit understanding that the experiment may fail and the organization will learn from the failure.
Stage two is where the ratchet engages. Individual practitioners, working within the bounds of the sanctioned experiment, discover that the tool is more capable than the experiment anticipated. The developer who was authorized to use AI for code generation discovers that it writes better documentation than she does. The analyst who was authorized to use AI for data cleaning discovers that it produces first drafts of reports that require only light editing. Each discovery is shared informally — in Slack channels, over lunch, in the hallway conversation that has always been the most efficient information channel in any organization. The discoveries are not reported through official channels, because the practitioners do not think of them as discoveries. They think of them as small efficiencies, life hacks, tricks of the trade. They are the organizational equivalent of the developer who discovers a new keyboard shortcut: worth sharing with colleagues, not worth reporting to management.
But the aggregation of these small efficiencies produces something that is worth reporting to management, and by the time it is reported, it is no longer a discovery. It is a fait accompli. The documentation is now AI-generated. The reports are now AI-drafted. The test cases are now AI-produced. No one decided that these activities would be transferred to the AI. Each practitioner made a local decision — this task, this time, this tool — and the accumulation of local decisions produced a systemic transfer of organizational capability from humans to machines.
Stage three is normalization. The AI-assisted way of working becomes the default. New employees are trained on the AI-augmented workflow. Process documents are updated to reflect the new reality. The non-AI way of doing things persists in institutional memory but fades from institutional practice. A developer who wanted to write code without AI assistance could, in principle, do so. In practice, the development environment, the timeline expectations, the performance benchmarks have all been calibrated to the AI-augmented workflow. Working without AI is not prohibited. It is merely impossible, in the way that riding a horse to work in a city designed for automobiles is not prohibited but is merely impossible.
Stage four is dependency. The organization can no longer function without the AI tools. Not because the tools have been formally designated as critical infrastructure, but because the human capabilities they replaced have atrophied through disuse. The developer who used AI for documentation for eighteen months has not written documentation manually in eighteen months. The skill has not been maintained. The analyst who used AI for first drafts has not produced a first draft from scratch in eighteen months. The cognitive pathway that would have been activated by the task — the effortful, slow, friction-rich process of generating structure from nothing — has not been exercised.
Segal's account of the Trivandrum training provides a compressed, high-resolution image of this entire sequence. On Monday, the engineers were in stage one: sanctioned experimentation, bounded and cautious. By Tuesday, stage two was underway — the engineers were discovering capabilities beyond the scope of the initial experiment. By Wednesday, the shift in how engineers "leaned toward their screens" signaled the beginning of normalization. By Friday, the transformation was described as "measurable, repeatable reality."
What is absent from this account is a decision. At no point in the five-day sequence did anyone — not Segal, not the engineers, not any organizational authority — make a deliberate choice to abandon the old way of working. The old way of working was not abandoned. It was rendered obsolete by the accumulation of individual discoveries that made it irrational to continue. Each engineer, independently and rationally, concluded that the AI-augmented approach was superior. No one coordinated these conclusions. No one evaluated their systemic implications. The ratchet engaged through the mechanism that March described: not through decision but through the accumulated weight of adaptations that no one tracked.
The ratchet has a critical property: it moves in one direction. Once the organization has transitioned from experimentation to dependency, the reverse transition — from dependency back to human capability — requires an investment that no one has budgeted for, that no learning signal supports, and that no performance metric rewards. The exploitation returns of the AI-augmented workflow are visible, measurable, and ongoing. The returns to rebuilding human capabilities that the AI has rendered unnecessary are invisible, uncertain, and defensive — they pay off only in the counterfactual scenario where the AI fails, a scenario that the organization's recent experience gives it no reason to anticipate.
March would have recognized this as a specific instance of a general organizational phenomenon: the asymmetry between the costs of adoption and the costs of reversal. Adoption is cheap because it piggybacks on existing motivation — the practitioner wants to work faster, the manager wants to hit the quarterly target, the organization wants to remain competitive. Reversal is expensive because it opposes existing motivation — it asks practitioners to be slower, managers to accept lower output, organizations to invest in capabilities they do not currently need.
This asymmetry means that the ratchet is not merely hard to reverse. It is, in the most practically meaningful sense, irreversible. Not because reversal is technically impossible, but because the organizational learning system — the same system that drove the adoption — produces no signal that would motivate reversal. Every signal points in the same direction: more AI, more exploitation, more productivity. The signal that would point toward reversal — the signal that says human capabilities are atrophying and the atrophy will matter when the environment changes — is a signal about the future, and learning systems, as March and Levinthal demonstrated, are myopic. They do not see the future. They see the last quarter.
The Trivandrum training compressed the ratchet into five days. In most organizations, the same process unfolds over months or years. But the mechanism is identical regardless of the timescale: sanctioned experimentation transitions to informal adoption, informal adoption transitions to normalization, normalization transitions to dependency, and at no point in the sequence does anyone make a decision that they would recognize as consequential. The consequence emerges from the accumulation of inconsequential decisions, each of which was rational, each of which was local, and none of which was evaluated at the systemic level.
The ratchet also explains a phenomenon that observers of AI adoption have noted with increasing alarm: the speed at which discourse about AI outstrips experience with AI. Segal observes that "the debate was outrunning the experience. People formed conclusions about a technology they had tried for an afternoon, or had not tried at all." March's framework explains why. The ratchet moves faster than the organization's capacity to evaluate what the ratchet is doing. By the time the organization has developed enough experience with AI to make an informed judgment about its adoption, the adoption has already occurred. The judgment is retrospective, and retrospective judgments about irreversible changes have a specific quality: they are rationalizations, not evaluations. The organization does not ask whether it should have adopted AI. It asks how to make the best of the fact that it already has.
This distinction — between a decision and a rationalization of a decision that was never made — is central to March's understanding of organizational life. Organizations tell stories about their decisions. The stories are coherent, rational, and retrospective. They describe a process of evaluation, deliberation, and choice. The actual process, in most cases, was nothing like the story. The actual process was the ratchet: incremental, uncoordinated, driven by local adaptation, and recognized as a "decision" only after the fact, when the organization needed a narrative to explain how it arrived where it did.
AI adoption will be narrated, in future business school case studies, as a strategic transformation. The narration will describe visionary leaders who recognized the potential of AI, designed implementation plans, managed the transition, and captured the productivity gains. The narration will be fiction. The actual process, in most organizations, will have been the ratchet — and the ratchet, by its nature, leaves no trace in the organizational record, because it was never a decision, and decisions are what organizational records are designed to capture.
---
In 1972, Michael Cohen, James March, and Johan Olsen published a paper that scandalized the rational-planning school of organizational theory. They called it "A Garbage Can Model of Organizational Choice," and the title alone was an affront to the discipline's prevailing assumption that organizations make decisions through orderly processes of problem identification, alternative generation, evaluation, and selection. Cohen, March, and Olsen proposed something different and, to the rationalists, deeply disturbing: organizations are not problem-solving machines. They are arenas in which four loosely coupled streams — problems, solutions, participants, and choice opportunities — flow independently and collide more or less at random.
The model described what its authors called "organized anarchies" — organizations characterized by problematic preferences (the organization does not know what it wants), unclear technology (the organization does not fully understand its own processes), and fluid participation (the people involved in any given decision change unpredictably). These are not pathological organizations. They are universities, hospitals, government agencies, technology companies — in other words, most organizations most of the time, including the most successful ones.
In a garbage can organization, solutions do not wait politely for problems to arrive. Solutions are looking for problems. A researcher has an idea and searches for a problem to which the idea might be applied. A consultant has a framework and pitches it to every client, regardless of whether the client's situation fits. A technology vendor has a product and markets it as the solution to whatever the customer happens to be worried about.
Problems, similarly, do not wait for solutions. Problems attach themselves to whatever choice opportunity happens to be available. A budget meeting becomes a forum for airing grievances about office space. A product review becomes a discussion about organizational culture. The problem finds the meeting, not the other way around.
Participants wander in and out of decision arenas based on competing demands on their time. The people present when a decision is made are not necessarily the people best qualified to make it; they are the people who happened to be available. The decision that emerges is a function of which problems, solutions, participants, and choice opportunities happened to collide in that particular arena at that particular moment.
The garbage can model was not a critique of organizational dysfunction. It was a description of organizational reality. And its most unsettling implication was that many organizational outcomes — including outcomes that are subsequently rationalized as the products of deliberate strategy — are better understood as artifacts of temporal coincidence than as products of purposeful choice.
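The model lends itself to a toy simulation, which is fitting, since the original 1972 paper was itself built around one. The sketch below is far cruder than Cohen, March, and Olsen's: the arrival rates, the matching rule, and the absence of any notion of decision energy are all simplifying assumptions. What it keeps is the core claim — decisions happen when independent streams happen to coincide, and nothing in the process evaluates whether the solution fits the problem.

```python
import random

# A toy rendering of the garbage-can idea, heavily simplified relative to
# Cohen, March, and Olsen's 1972 simulation: problems, solutions, and
# participants arrive in independent streams, and a "decision" happens
# whenever all three are present at a choice opportunity at the same time.
# Arrival rates and the horizon are illustrative assumptions.

def simulate(periods=100, seed=1):
    rng = random.Random(seed)
    problems, solutions, participants = [], [], []
    decisions = []

    for t in range(periods):
        # Independent arrival streams.
        if rng.random() < 0.4:
            problems.append(f"problem-{t}")
        if rng.random() < 0.3:
            solutions.append(f"solution-{t}")
        if rng.random() < 0.5:
            participants.append(f"participant-{t}")

        # A choice opportunity (a meeting, a budget cycle) opens intermittently.
        if rng.random() < 0.25 and problems and solutions and participants:
            # Whichever participant happens to be free attaches whichever
            # solution happens to be lying around to whichever problem is
            # oldest. Nothing here evaluates fit.
            decision = (problems.pop(0), solutions.pop(0), participants.pop(0))
            decisions.append((t, decision))

    return decisions

for t, (prob, sol, who) in simulate()[:5]:
    print(f"period {t:3d}: {who} resolves {prob} with {sol}")
```

Read the output and the retrospective story writes itself: each pairing looks, after the fact, like a choice someone made, when it was only a coincidence of timing.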
Artificial intelligence adoption in organizations follows the garbage can pattern with an almost textbook fidelity. The mechanism is worth tracing in detail, because it explains features of AI adoption that the rational-planning framework cannot account for — including the speed of adoption, the unevenness of adoption across organizations, and the persistent gap between what organizations say they are doing with AI and what they are actually doing with AI.
Start with the solution stream. AI, as it existed in early 2026, was a solution of extraordinary generality. It could write code, generate documentation, draft reports, analyze data, produce images, compose music, answer questions, and conduct conversations. This generality is precisely what makes it a powerful garbage can participant: a solution so flexible that it can attach itself to almost any problem that presents itself. The specificity that would limit its applicability — the way a new accounting software solves accounting problems and nothing else — is absent. AI is a solution looking for problems, and it has the unsettling property of finding them everywhere.
Ethan Mollick, the Wharton professor who has become one of the most perceptive commentators on AI adoption, framed this explicitly in March's terms. Organizations, Mollick argued, are "chaotic 'garbage cans' where problems, solutions, and decision-makers are dumped in together, and decisions often happen when these elements collide randomly." AI enters this chaos not as a carefully evaluated strategic investment but as an available solution that attaches itself to whatever problem a participant happens to be facing at the moment the tool is within reach.
The attachment process is instructive. A developer is struggling with a debugging problem. The debugging problem is the "problem" stream in the garbage can. Claude Code is available on the developer's machine. Claude Code is the "solution" stream. The developer, who has ten minutes before a meeting, is the "participant" stream. The ten-minute gap is the "choice opportunity." The four streams converge: the developer asks Claude to help debug, Claude produces a fix, the fix works, and the developer has now used AI for debugging. No organizational decision authorized this use. No strategic evaluation preceded it. The solution found the problem in the time available with the participant who happened to be there.
Now the scope expansion begins, and it follows the garbage can logic with an inexorable quality. Having used AI for debugging, the developer encounters a documentation task. The documentation task is a new problem. AI is still available — the solution stream does not close between uses. The developer tries AI for documentation. It works. A new attachment has been formed. Then specification drafting. Then test case generation. Then architectural planning. Each attachment is individually incidental — a participant encountering a problem in the presence of an available solution — and collectively transformative. The scope of AI's role in the organization expands not through strategic planning but through the serial coincidence of problems and an omnipresent solution.
March and his colleagues identified a critical feature of the garbage can process: decisions made through temporal coincidence tend to be persistent. Once a solution has attached itself to a problem, the attachment becomes a routine — a standard operating procedure that reproduces itself without requiring fresh analysis. The developer who used AI for debugging once will use it again, because the previous use was successful and the routine has been established. The routine spreads through the organization by imitation — another developer sees the first developer's approach and adopts it — and by normalization — the process documents are updated to reflect the AI-augmented workflow. What began as a coincidence becomes a convention, and what became a convention becomes an expectation.
The garbage can model also explains a feature of AI adoption that the rational-planning framework treats as noise but that is, in March's terms, signal: the radical unevenness of adoption across organizations and within organizations. In a rational-planning world, AI adoption should track organizational need. Organizations with the most to gain from AI should adopt fastest. Organizations with less to gain should adopt more slowly. Within organizations, the functions with the most to gain should adopt first.
In reality, adoption tracks temporal coincidence. The organization that adopted AI earliest is not necessarily the one with the greatest strategic need. It is the one in which the right participant encountered the right problem in the presence of the right tool at the right moment. The function that adopted AI first is not necessarily the one with the most to gain. It is the one in which a curious individual happened to try the tool, happened to find it useful, and happened to share the discovery with colleagues who happened to be receptive.
This randomness is not dysfunction. It is the normal operation of organizational choice in the conditions that Cohen, March, and Olsen described: problematic preferences (the organization does not know what it wants from AI), unclear technology (the organization does not fully understand what AI can and cannot do), and fluid participation (the people making AI adoption decisions are not a stable group with a consistent mandate but a shifting cast of practitioners, managers, and executives whose involvement varies from week to week).
The garbage can model also illuminates a phenomenon that conventional analyses of AI adoption have struggled to explain: the persistent gap between official organizational AI strategy and actual organizational AI practice. Most large organizations have, by early 2026, produced official AI strategies — documents that describe how the organization will evaluate, adopt, and govern AI tools. These documents are rational-planning artifacts. They describe a process of deliberate evaluation, controlled implementation, and systematic governance.
Meanwhile, on the ground, adoption follows the garbage can. Practitioners are using AI tools in ways that the official strategy does not anticipate, has not authorized, and may not even be aware of. The gap between the strategy document and the ground-level reality is not a failure of implementation. It is a structural feature of organized anarchies: the official decision process and the actual decision process operate in parallel, connected only loosely, and the actual process — the garbage can — typically outpaces the official process by months or years.
Mollick identified the practical consequence with precision: "Scaling AI across the enterprise is hard because traditional automation requires clear rules and defined processes; the very things Garbage Can organizations lack." The enterprise AI strategy assumes a rational organization. The actual organization is a garbage can. The strategy prescribes orderly adoption. The reality is serial coincidence. The strategy calls for governance. The reality is that the tool is already in use, in ways governance has not yet imagined, by practitioners governance has not yet consulted.
Mollick also raised a possibility that March, one suspects, would have found both provocative and characteristic: that AI might navigate the organizational garbage can more effectively than humans do. "The AI will find its own paths through the organizational chaos; paths that might be more efficient, if more opaque, than the semi-official routes humans evolved." This is a prospect worth taking seriously. If organizations are garbage cans, and if AI is a solution of extraordinary generality that can attach itself to problems with an efficiency that human participants cannot match, then AI may not merely operate within the garbage can — it may reorganize the garbage can itself, establishing new routines, new attachments, new patterns of problem-solution coincidence that bypass the human-mediated processes the organization previously relied on.
This reorganization would be, in March's terms, the ultimate unmanaged organizational change: the garbage can reorganizing itself through a process that no participant directed and no authority sanctioned. The ratchet, operating inside the garbage can, produces outcomes that are doubly unexamined — unexamined because the ratchet moves without decision, and unexamined because the garbage can connects problems and solutions without deliberation.
The combination of these two Marchian mechanisms — the ratchet and the garbage can — produces a specific organizational trajectory. AI tools enter the organization as one solution among many, attach themselves to available problems through temporal coincidence, expand their scope through serial attachment, become routinized through repetition and imitation, and produce a state of dependency that no one intended and no one evaluated. The trajectory is not chaotic. It is patterned — patterned by the structural features of organizational choice that March spent fifty years documenting. But it is also unmanaged, which means that the organization arrives at a destination it did not choose, through a process it did not control, and then tells itself a story about strategic transformation that bears no resemblance to what actually happened.
The story is comforting. The reality, as March understood better than nearly anyone, is that organizational life proceeds not through the stories organizations tell about themselves but through the far messier, more consequential, and less visible processes that the stories are designed to conceal.
There is a particular kind of organizational death that arrives disguised as health. The vital signs are strong. The quarterly numbers are excellent. The workforce is productive, aligned, and executing with precision. The board is satisfied. The analysts are bullish. And somewhere beneath the polished surface, the organization is dying — dying of competence.
James March and Daniel Levinthal gave this pathology a name: the competency trap. The mechanism is elegant in its cruelty. An organization develops proficiency in a particular technology, a particular process, a particular way of serving its market. The proficiency produces returns. The returns reinforce the proficiency. The organization invests more in the approach that is working, which makes it work better, which produces more returns, which justifies more investment. The cycle is self-reinforcing, and at every step, the decision to continue is rational. The alternative — investing in an unfamiliar technology whose returns are uncertain and whose learning curve is steep — cannot compete with the proven approach on any metric the organization knows how to measure.
The trap springs when the environment changes. The technology that the organization mastered is superseded. The process that produced reliable returns is rendered obsolete. The market that the organization served so well has shifted to a different set of needs. And the organization, which has spent years — sometimes decades — deepening its competence in the old approach, discovers that it cannot adapt. Not because it lacks talent. Not because it lacks resources. Because every fiber of its learning system, every routine, every performance metric, every promotion criterion, every cultural norm has been optimized for a world that no longer exists.
The competency trap is not a failure of intelligence. It is an excess of it — intelligence directed so effectively at the current problem that no capacity remains for the next one. The organization that fell into the trap was not lazy. It was diligent. It was not inattentive. It was focused. The focus was the trap.
March described the mechanism with characteristic precision. "Your performance on a particular activity is a joint effect of the activity's potential and your skill at the activity," he observed in a 2013 interview. An activity with moderate potential but high accumulated skill will outperform an activity with high potential but no accumulated skill — and it will outperform it consistently, visibly, and persuasively. The learning system, observing these results, reinforces the skilled activity and neglects the unskilled one. The neglected activity never accumulates enough skill to demonstrate its potential. Its potential remains latent, invisible, permanently deferred.
This is not a parable. It is a precise description of what happened to the software industry in 2025 and 2026.
For two decades, the enterprise software industry had accumulated extraordinary competence in a particular model: build software, sell subscriptions, capture data, deepen integrations, raise switching costs. The SaaS model was a competency of breathtaking refinement. Salesforce, Workday, Adobe, ServiceNow — each had spent years perfecting the cycle of subscription revenue, feature expansion, and customer lock-in. The returns were enormous. The model was proven. Every metric confirmed its superiority.
Then AI made the code layer — the foundation on which the entire edifice rested — approach commodity pricing. Segal describes the result in *The Orange Pill*: a trillion dollars of market value vanished from software companies in the first eight weeks of 2026. Workday fell thirty-five percent. Adobe lost a quarter of its value. When Anthropic published a blog post about Claude's ability to modernize COBOL, IBM suffered its largest single-day stock decline in more than a quarter century. The market called it the SaaSpocalypse.
March's framework reveals what the market was actually pricing. The trillion-dollar decline was not a judgment that these companies had become incompetent. It was a judgment that their competence had become a trap. The very thing they were best at — writing, selling, and maintaining software — was the thing that AI had rendered insufficient as a basis for value. The organizations that had invested most deeply in software competence were the organizations most trapped by it, because the investment had produced returns that made any alternative investment look foolish by comparison.
The competency trap operates at the individual level as well, and the individual-level dynamics are, if anything, more painful to observe. A senior engineer described in *The Orange Pill* spends decades building a specific kind of expertise — the ability to feel a codebase the way a doctor feels a pulse, to sense architectural problems before they manifest, to navigate complex systems through embodied intuition developed over thousands of hours of patient work. This expertise is genuine. It was genuinely hard to acquire. It represents a real and irreplaceable form of organizational knowledge.
And it is caught in a competency trap. The expertise was built to solve problems at the implementation layer — the layer that AI now handles with twenty-fold efficiency. The engineer's accumulated skill outperforms any alternative skill the engineer might develop, because the alternative skill has no accumulated experience behind it. But the activity the skill was built for — manual implementation, hand-crafted architecture, the slow accretion of systemic understanding through friction — is an activity whose potential has been fundamentally altered by AI. The skill is real. The potential of the activity the skill serves has changed. And the learning system — both the engineer's personal learning system and the organization's — continues to reward the existing skill because the existing skill still produces observable returns.
The trap is that observable returns from current competence are a trailing indicator. They measure what the skill produced in the environment that existed when the skill was being built. They do not measure what the skill will produce in the environment that is emerging. The engineer's debugging intuition, developed over twenty years, still works today. It will work less well next year, as AI debugging capabilities improve. It will be largely unnecessary in three years. But the learning system sees only today's returns, and today's returns say: this skill is valuable. Continue investing.
March would have recognized a deeper structural issue in the AI competency trap, one that distinguishes it from previous instances. In the classical competency trap, the organization is trapped by competence in a technology or process. In the AI competency trap, the organization is trapped by competence in AI-augmented exploitation itself. The trap is not that the organization is too good at the old way. The trap is that the organization is too good at using AI to do the old things faster.
This is a second-order competency trap, and it is more insidious than the first-order variety. The organization has adopted AI. It has captured the productivity gains. It is exploiting AI with extraordinary effectiveness. The quarterly numbers reflect the gains. The board is satisfied. The workforce is productive.
But the organization is using AI to accelerate exploitation — to do what it already does, faster and at greater scale. It is not using AI to explore — to discover what it should do differently, to experiment with business models that do not yet exist, to ask questions that the current organizational structure cannot formulate. The exploitation returns are so large, so visible, so measurably superior to any alternative allocation of AI resources, that the exploration use of AI cannot compete. Why use AI to explore uncertain new markets when you can use AI to exploit existing markets twenty times faster?
The question answers itself, and the answer is the trap. The organization that uses AI exclusively for exploitation will be the organization best positioned for the world as it currently exists and worst positioned for the world as it is becoming. The exploitation gains buy time. They do not buy adaptation.
Segal's analysis of the Software Death Cross identifies the escape route, though his framework does not use March's vocabulary. The SaaS companies that will survive, he argues, are the ones "whose value was always above the code layer" — the ones that built ecosystems, data layers, institutional trust, customer relationships that transcend the software itself. In March's terms, these are the organizations that maintained exploration at the ecosystem level while exploiting at the code level. They built competence in software — but they also built competence in the higher-order activities that software served: understanding customer needs, integrating across institutional boundaries, building trust that takes years to accumulate and cannot be replicated by an AI tool in an afternoon.
The organizations that will be destroyed are the ones that were always just code — the ones whose competence was entirely at the layer that AI commodified. Their competence was real. Their trap was that the competence was in an activity whose potential had collapsed, and no amount of skill in a zero-potential activity produces value.
The escape from a competency trap requires what March would call foolishness: the deliberate investment in activities that the organization's learning system does not support, that current metrics do not reward, and that rational analysis cannot justify. The investment must be deliberate precisely because the learning system will not produce it spontaneously. Left to its own devices, the learning system will continue to reinforce current competence until the environment renders that competence valueless.
The organizations navigating the AI transition most effectively are the ones that have recognized — or that have leaders who have recognized — that the twenty-fold productivity gain is not the strategic prize. It is the strategic distraction. The prize is using the freed resources to explore — genuinely, uncomfortably, without guarantee of return — the territory that AI has opened but that exploitation cannot reach. The prize is the question that no dashboard can formulate and no quarterly report can measure: given that we can now do what we already do twenty times faster, what should we be doing instead?
That question has no answer that the learning system can provide, because the learning system learns from what has already been done, and the answer lies in what has not yet been attempted. Asking the question requires the organizational equivalent of what Segal calls pressing one's face against the glass of the fishbowl — seeing, however briefly, the territory beyond the water the organization has always breathed.
March spent fifty years studying why organizations so rarely ask this question and so reliably fall into the trap of not asking it. His answer was structural, not personal. It is not that leaders are unintelligent. It is that the systems through which organizational intelligence operates are biased — temporally, spatially, and in their tolerance for failure — toward the exploitation of current competence. AI does not change this bias. It amplifies it, with the specific cruelty of a technology that makes the trap more comfortable, more productive, and more profitable at every step of the descent.
---
In 1971, James March published an essay that his more rationalist colleagues found somewhere between puzzling and offensive. He titled it "The Technology of Foolishness," and in it he argued that one of the most important capabilities an organization could possess was the capacity to act without reasons — to do things that could not be justified by rational calculation, to pursue goals that had not yet been defined, to play.
The essay was not whimsical. It was a rigorous analysis of the limits of rational choice as a framework for understanding how organizations and individuals make decisions that matter. March's argument proceeded from an observation that was simple and, once stated, difficult to deny: the rational choice model assumes that decision-makers have preferences, that these preferences are consistent, and that decisions are made by selecting the action that best satisfies those preferences. But in the most consequential decisions — the decisions about what to do with your life, what kind of organization to build, what values to pursue — preferences are not given in advance. They are discovered through action. You do not first know what you want and then act to get it. You act, observe what happens, and discover what you wanted in retrospect.
This discovery-through-action requires what March called foolishness: the willingness to act before preferences are clear, to experiment without knowing what the experiment is testing, to play without knowing what the play is for. The technology of foolishness is the set of organizational and individual practices that enable this kind of action — that create spaces where rational justification is temporarily suspended, where the question "why are we doing this?" is deferred long enough for the doing to reveal its own purpose.
March positioned the technology of foolishness as the necessary complement to the technology of reason. The technology of reason is how organizations exploit: they identify goals, evaluate alternatives, select the best option, and implement it. The technology of reason is indispensable. It is also insufficient. An organization that operates exclusively through the technology of reason will never discover goals that it did not already have. It will optimize within its current framework forever, improving its performance on the current game while never discovering that the game has changed.
The technology of foolishness is how organizations explore. Not through structured innovation programs, which are the technology of reason dressed in exploration's clothing. Not through R&D budgets, which are rational investments in uncertain returns. Through genuine play — the kind of undirected, unjustifiable, often wasteful activity from which genuinely new ideas emerge. The kind of activity that, when examined through the lens of rational choice, looks like a failure of management, and that, when examined through the lens of organizational adaptation, looks like the only thing standing between the organization and obsolescence.
March used the word "foolishness" deliberately. He meant it to sting. The foolish leader is the one who funds a project with no clear business case. The foolish engineer is the one who spends a week on an idea that has no connection to the current product roadmap. The foolish organization is the one that tolerates these behaviors, that creates spaces for them, that protects them from the relentless rationality of the exploitation machine.
AI is a technology of reason. The characterization requires no philosophical extravagance. It is a description of how the technology operates. AI generates outputs that are optimized against specified criteria. It produces the most probable next token, the most statistically likely code completion, the most pattern-consistent response to a prompt. Its outputs are, by construction, the outputs that a rational evaluation of the training data would predict. When AI produces something surprising, the surprise is a statistical artifact: an output that looks unlikely in general but is exactly what the learned distribution predicts for that particular prompt. The surprise is rational in a deeper sense: it is still the output that best satisfies the optimization criterion, given the specific input.
AI does not play. It does not pursue ideas for the intrinsic satisfaction of pursuing them. It does not suspend rational calculation to see what happens. It does not act before its preferences are clear, because it does not have preferences in any meaningful sense — it has optimization criteria, which is a fundamentally different thing. Preferences are discovered through action. Optimization criteria are specified in advance.
The distinction matters because it means that AI, no matter how capable, cannot perform the function that March's technology of foolishness performs. AI can exploit with extraordinary efficiency. It can even mimic exploration, producing outputs that look novel by combining elements from its training data in unusual ways. But it cannot play, in March's sense — it cannot act without knowing what the action is for, because every action it takes is, by construction, in service of a specified objective.
This creates a specific and previously unencountered organizational challenge. AI makes the technology of reason overwhelmingly productive. The exploitation returns are immediate, visible, and large. The technology of foolishness, meanwhile, produces returns that are by definition unjustifiable in advance — returns that become visible only in retrospect, after the foolish action has been taken and its consequences have unfolded. In an organization where the technology of reason is producing twenty-fold productivity gains, the technology of foolishness does not merely look wasteful. It looks irresponsible.
Consider the organizational dynamics. A team is shipping features at twenty times its previous rate. The backlog is clearing. The quarterly targets are not just met but exceeded. Into this environment, a team member proposes an experiment — something unrelated to the current product roadmap, something with no clear business case, something that she finds interesting for reasons she cannot fully articulate. The technology of reason says: we are in the middle of the most productive period in this team's history. Every hour spent on this experiment is an hour not spent on the exploitation machine. The opportunity cost is twenty times what it would have been before AI. The experiment cannot possibly justify that cost.
The technology of foolishness says: this experiment, precisely because it cannot be justified, may be the most important thing this team does this quarter. It may discover nothing. It will probably discover nothing. But the small probability that it discovers something genuinely new — something that reframes the product, opens a new market, reveals a capability the organization did not know it had — is worth protecting, because exploitation, no matter how productive, cannot produce genuine novelty. Exploitation can only refine what already exists. Novelty requires the willingness to act without justification.
This argument was difficult to sustain before AI. It is nearly impossible to sustain after. The exploitation returns are so large that the opportunity cost of foolishness has increased by an order of magnitude. The leader who protects exploratory time in an AI-augmented environment must defend, at every budget review, the decision to leave productivity on the table. The defense is structurally weak: it relies on the possibility of future returns from an activity that has, by definition, no track record and no business case. The prosecution, meanwhile, has the most compelling evidence imaginable — the evidence of twenty-fold productivity applied to known problems with measurable outcomes.
March anticipated this dynamic. In his 2006 paper on adaptive intelligence, he observed that "reason inhibits foolishness; learning and imitation inhibit experimentation." The inhibition is not accidental. It is structural. The more successful the rational strategy, the less tolerance the organization has for the irrational one. Success breeds confidence, confidence breeds commitment, commitment breeds rigidity, and rigidity — the inability to act outside the framework that produced the success — is the condition that March spent his career diagnosing.
Segal's account of working with Claude captures the phenomenology of this tension at the individual level. His description of the creative collaboration — the moments when Claude made connections he had not seen, the passages where the collaboration produced something neither participant could have produced alone — describes a form of intellectual play. The play is genuine: it is undirected, it produces surprises, it operates in the space where rational calculation has been temporarily suspended in favor of associative exploration. But the play exists inside a framework of intense productivity. The book is being written. The chapters are being produced. The deadlines are being met. The play is in service of a rational objective, which means it is not play in March's fullest sense — it is exploration within the bounds of exploitation, improvisation within a predetermined structure.
Whether this constrained form of play is sufficient to produce the genuinely new ideas that March's technology of foolishness is designed to protect is an open question. What is not open is that the organizational tolerance for play — for the unconstrained, unjustifiable, probably wasteful kind of exploration that has historically been the source of the most significant organizational innovations — is under unprecedented pressure from the productivity of AI-augmented exploitation.
The organizations that survive the AI transition will not be the most rational. They will be the ones that managed to remain productively foolish in an environment that made foolishness expensive — that built structures to protect the capacity for play against the relentless gravitational pull of twenty-fold exploitation returns. Whether they will also be the organizations that the current quarter's metrics identify as successful is, as March understood better than anyone, a separate question entirely.
---
Organizations remember through practice. The knowledge that an organization possesses is not stored in a filing cabinet or a knowledge management system. It is distributed across routines — the standard operating procedures, the habitual responses, the taken-for-granted ways of doing things that encode decades of accumulated learning. A routine is organizational memory made operational: the lesson learned from a previous failure, embedded in a process that prevents the failure from recurring without anyone needing to remember the original failure or the lesson it taught.
March, working with Barbara Levitt, described organizational learning as the process by which experience is encoded into routines and routines are transmitted across time and personnel. The encoding is imperfect — routines simplify the experience they encode, discarding context and nuance in favor of actionable rules. The transmission is lossy — each generation of practitioners inherits the routine but not the experience that produced it, which means the routine is followed but not understood. The routine works, but no one knows why it works, which means no one knows when it will stop working.
This framework illuminates a feature of organizational knowledge that is easily overlooked: the knowledge is maintained only through exercise. A routine that is not practiced decays. The practitioners who understood the routine retire or leave. The documentation that described the routine becomes outdated or lost. The institutional memory that preserved the routine's rationale fades. The decay is gradual, invisible, and irreversible by the time it is noticed — because by the time the routine is needed again, the people who could have reconstructed it are gone, and the experience that produced it cannot be replicated.
March and his colleagues called this process organizational forgetting, and they recognized it as the inevitable complement to organizational learning. Every organization is simultaneously learning and forgetting. It learns new routines through new experience. It forgets old routines through disuse. The balance between learning and forgetting determines what the organization knows at any given moment — and the balance is not under anyone's deliberate control. No one decides what the organization will forget. Forgetting happens through the same mechanism as learning: the accumulation of individually inconsequential changes in practice that, over time, produce a fundamentally different organizational knowledge base.
AI accelerates organizational forgetting with a precision that March's framework predicted but that his models, calibrated to the pace of pre-AI organizational change, could not have anticipated.
The acceleration operates through a specific mechanism: the elimination of the experiences from which the most instructive learning emerged. When AI handles debugging, the organization does not experience debugging failures. When AI generates documentation, the organization does not experience the struggle of converting implicit knowledge into explicit prose. When AI produces specifications, the organization does not experience the painful, iterative process of discovering what the specification should say through the failure of previous specifications to say it clearly.
Each of these eliminated experiences is, from the perspective of current operations, a cost savings. Debugging failures consume time. Documentation struggles consume attention. Specification iteration consumes patience. The elimination of these experiences is, on every metric the organization tracks, an improvement.
But the experiences that are being eliminated are not merely costs. They are the raw material of organizational learning. The debugging failure that revealed a systemic vulnerability in the architecture. The documentation struggle that forced the engineer to articulate assumptions she did not know she held. The specification iteration that exposed a misalignment between what the team was building and what the customer actually needed. Each of these experiences deposited a layer of organizational knowledge — knowledge that was encoded into routines, transmitted to new practitioners, and maintained through continued practice.
Segal uses a geological metaphor that maps precisely onto March's framework: "Every hour you spend debugging deposits a thin layer of understanding. The layers accumulate over months and years into something solid, something you can stand on." The metaphor is apt not only for individual knowledge but for organizational knowledge. Each debugging session, each documentation struggle, each specification failure deposits a layer of organizational understanding. The layers accumulate into the bedrock of organizational competence — the implicit, distributed, practice-maintained knowledge that allows the organization to function in the face of novel challenges.
AI stops the deposition. The debugger that never debugs does not deposit the debugging layer. The writer who never struggles with documentation does not deposit the documentation layer. The specifier who never iterates does not deposit the specification layer. The organizational bedrock, no longer being added to, begins to erode through the natural processes of personnel turnover and routine decay.
The erosion is invisible in the short term. The organization functions perfectly well on its existing bedrock. The AI handles the tasks that would have deposited new layers, and the existing layers are sufficient for current operations. The quarterly numbers remain strong. The workforce is productive. No one notices that the foundation is thinning.
The erosion becomes visible when the organization encounters a situation that the AI cannot handle — a genuinely novel problem that requires the kind of deep, embodied, practice-built knowledge that no training set contains. The AI hallucinates. The AI produces output that is plausible but wrong in ways that only someone with deep domain knowledge would recognize. The AI fails, and the organization turns to its human practitioners for the judgment that the AI cannot provide.
And the human practitioners do not have it. Not because they are less intelligent than their predecessors. Because the experiences that would have built the judgment were eliminated by the AI. The debugging failures that would have taught them where the architecture was vulnerable never occurred. The documentation struggles that would have forced them to articulate their implicit knowledge never happened. The specification iterations that would have exposed their assumptions never took place. The layers were never deposited. The bedrock is not there.
Segal describes this phenomenon at the individual level through the engineer who "realized she was making architectural decisions with less confidence than she used to and could not explain why." March's framework locates the same phenomenon at the organizational level, where it is simultaneously larger in scope and harder to detect. An individual can notice a decline in her own confidence. An organization cannot notice a decline in its collective judgment, because organizational judgment is distributed across routines and practitioners and the interactions between them, and no single observer has a vantage point from which the decline is visible.
The decline shows up not as a sudden failure but as a gradual degradation of decision quality — decisions that are slightly less well-informed, slightly less attuned to systemic risk, slightly more susceptible to the kind of plausible-but-wrong reasoning that AI is particularly good at producing and that only experienced practitioners can detect. The degradation compounds. Each slightly worse decision leads to a slightly worse outcome, which the learning system attributes not to degraded judgment but to environmental factors, bad luck, or the specific circumstances of the case. The systemic cause — the thinning of organizational bedrock through the elimination of instructive experience — is invisible to the learning system, because the learning system learns from observed outcomes, and the counterfactual outcome — what would have happened if the organization had maintained its experience base — is by definition unobservable.
March, with Lee Sproull and Michal Tamuz, examined an even more challenging variant of this problem in their paper on learning from samples of one or fewer. Organizations, they argued, must often learn from events that are rare, ambiguous, or have not yet occurred. The challenge is not merely that the sample size is small. The challenge is that the interpretation of rare events is deeply dependent on the framework the organization brings to the interpretation, and the framework is itself a product of the organization's accumulated experience.
When AI eliminates the experiences that built the framework, the organization loses not only the specific knowledge those experiences produced but the interpretive capacity that the knowledge supported. The organization that has never experienced a specification failure does not merely lack knowledge about specification failures. It lacks the ability to recognize one when it occurs, because recognition requires a framework that is built through repeated exposure to the phenomenon being recognized.
The solution is not to eliminate AI. The elimination of the experiences that AI replaces is a cost, but the experiences themselves were not cost-free — they consumed time, attention, and organizational patience that were genuinely scarce. The solution, in March's framework, is to design organizational structures that maintain the deposition of instructive experience even when AI has made the experience unnecessary for current operations.
This is a form of deliberate organizational investment in a capability whose returns are invisible to the current learning system — which is to say, it is a form of foolishness. The organization that insists on periodic manual debugging, not because it needs the debugging done manually but because it needs the learning that manual debugging produces, is acting foolishly by the standards of the technology of reason. The debugging could be done faster and more reliably by AI. The manual debugging is, by any current metric, a waste.
But the waste is the point. The waste is the deposition of the layer. The layer is the bedrock. And the bedrock, invisible and unmeasured, is what will bear the organization's weight when the AI fails and the novel problem arrives and the judgment that only experience can build is the only thing standing between a manageable setback and a systemic collapse.
March understood, perhaps better than any organizational theorist of his generation, that the most important organizational resources are the ones that no metric captures and no quarterly report records. The technology of foolishness. The tolerance for ambiguity. The accumulated bedrock of instructive failure. These are the resources that AI threatens — not by attacking them, but by making them appear unnecessary. The appearance is the trap. The organization that believes it no longer needs what it can no longer see has already begun to forget, and organizational forgetting, unlike individual forgetting, is not a lapse. It is a loss.
---
Most organizations treat ambiguity as a problem. It appears on risk registers. It occupies agenda items. It generates entire consulting engagements devoted to its elimination. The assumption is pervasive and rarely examined: ambiguity is the enemy of effective action, and the purpose of organizational intelligence is to replace ambiguity with clarity. Know what you want. Know what you face. Know what to do. The clear-eyed organization is the effective organization.
James March spent decades arguing that this assumption is not merely wrong but dangerous — that ambiguity, properly understood and properly managed, is one of the most valuable resources an organization possesses, and that the drive to eliminate it produces organizations that are clear, decisive, and unable to adapt.
The argument has a structure that rewards careful attention. March distinguished between two conditions that are often conflated: uncertainty and ambiguity. Uncertainty is the condition of not knowing which of several well-defined outcomes will occur. A coin flip is uncertain. The set of possible outcomes is known — heads or tails — and the question is merely which outcome will materialize. Organizations deal with uncertainty through the tools of probability: risk assessment, scenario planning, expected-value calculation. These tools are powerful and appropriate, and they require clarity about what the possible outcomes are.
Ambiguity is a different and more fundamental condition. It is the state of not knowing what the question is, of having multiple, conflicting, equally plausible interpretations of the same situation, none of which can be validated or invalidated with available information. Ambiguity is not uncertainty about the answer. It is uncertainty about the question. The organization facing ambiguity does not know which of several known possibilities will materialize. It does not know what the possibilities are. It does not know what it is looking at.
March, working with Johan Olsen, argued in *Ambiguity and Choice in Organizations* that ambiguity is the normal condition of organizational life, not the exception. Most significant organizational decisions are made under conditions where the goals are unclear (the organization does not know what it wants), the technology is uncertain (the organization does not fully understand how its own processes work), and participation is fluid (the people involved in the decision change as the decision unfolds). These are not pathological conditions. They are the conditions under which complex organizations routinely operate.
The critical implication — the one that challenges the clarity-as-virtue orthodoxy — is that ambiguity enables exploration. When an organization does not know exactly what it wants, it is free to discover what it wants through action. When a situation admits multiple interpretations, the organization can pursue several simultaneously, allowing the evidence to accumulate before committing to one. When preferences are unclear, experimentation is possible, because the organization has not yet closed off the interpretive possibilities that premature clarity would foreclose.
Conversely, when ambiguity is eliminated — when the organization has committed to a single interpretation, a single set of preferences, a single understanding of the situation — exploration ceases. The organization knows what it wants and acts to get it. This is exploitation: effective, efficient, and constrained to the interpretive framework the organization has adopted. The framework may be wrong. The preferences may be poorly specified. The interpretation may be partial. But the commitment to clarity has foreclosed the alternatives, and the organization will not discover the inadequacy of its framework until the framework fails under conditions it was not designed to handle.
AI eliminates ambiguity with an efficiency that is, from March's perspective, alarming. The elimination operates through a specific mechanism: the generation of immediate, confident, well-structured responses to ambiguous situations. A practitioner encounters an ambiguous problem — one that admits multiple interpretations, multiple approaches, multiple framings. Before AI, the practitioner would sit with the ambiguity. The sitting was uncomfortable. It produced the specific cognitive distress of not knowing what to do, the same distress that meditation traditions call "beginner's mind" and that creativity researchers identify as the precondition for genuine insight. The distress was productive precisely because it was uncomfortable — the discomfort motivated continued search, continued exploration of the interpretive space, continued openness to the possibility that the first interpretation was not the best one.
AI resolves the discomfort instantly. The practitioner describes the ambiguous problem to Claude. Claude responds with a clear, confident, well-structured analysis. The analysis selects one interpretation from the many that were available, develops it coherently, and presents it with the fluency that makes AI output so seductive. The ambiguity is gone. The path is clear. The practitioner can act.
Segal's account of his own collaboration with Claude illustrates both the power and the danger of this dynamic. He describes a moment of impasse — "I was stuck. I believed Han's diagnosis was partly right. I also believed the conclusion was wrong. But I could not find the pivot" — and Claude's response, which provided the connection to laparoscopic surgery that resolved the impasse. The resolution was productive. The connection was genuine. The chapter that resulted is better than what either participant could have produced alone.
But the resolution also foreclosed alternatives. The moment Claude provided the laparoscopic surgery connection, the interpretive space collapsed. The ambiguity that had kept Segal searching — the productive discomfort of not knowing how to pivot from Han's diagnosis to the counter-argument — was resolved by one particular connection, and the other connections that the ambiguity was protecting became invisible. They were not rejected. They were never discovered. The confident, well-structured response occupied the space that the ambiguity had held open.
This is not a criticism of Segal's process or of Claude's capability. It is a structural observation about what happens when ambiguity is resolved quickly and confidently rather than slowly and tentatively. The quick resolution selects one path from many. The slow resolution — the extended sitting with discomfort that produces the widest exploration of the interpretive space — is precisely the process that AI's efficiency eliminates.
The organizational implications scale the individual dynamic into something more consequential. When teams use AI to resolve ambiguous strategic questions, the AI provides clear, confident analyses that select one interpretation from many. The team, relieved of the discomfort of ambiguity, adopts the interpretation and acts on it. The action produces results. The results are evaluated. The learning system updates its beliefs based on the observed results — results that were shaped by the interpretation the AI selected, which means the learning system is learning from a sample that is biased toward the AI's initial interpretation. The alternative interpretations — the ones that the ambiguity would have kept alive — are never tested. Their potential remains latent, invisible, permanently deferred.
March would have recognized an additional danger, one that is specific to AI and that his original framework did not need to address. AI does not merely resolve ambiguity. It conceals the fact that ambiguity existed. The output is so fluent, so well-structured, so confident that the practitioner may not recognize that the situation was ambiguous in the first place. The AI's response does not say, "Here is one of several possible interpretations." It says, "Here is the analysis," as though the analysis were the only analysis possible.
Segal identifies this phenomenon in his description of Claude's failure modes: "Claude's most dangerous failure mode is exactly this: confident wrongness dressed in good prose. The smoother the output, the harder it is to catch the seam where the idea breaks." The confident wrongness is a specific instance of a more general problem: the confident resolution of ambiguity, which may be right or wrong but which, in either case, forecloses the alternatives that slower, more tentative engagement with the ambiguity would have preserved.
Byung-Chul Han's critique of smoothness, which Segal engages at length, is recognizable in March's terms as a critique of premature ambiguity resolution. The smooth surface — the polished, seamless, friction-free response — is a surface from which ambiguity has been eliminated. The rough surface — the unpolished, seamed, friction-rich engagement — is a surface that preserves ambiguity, that keeps the interpretive space open, that refuses to commit to a single reading before the reading has been tested against alternatives.
Han frames this in aesthetic terms. March frames it in organizational terms. The structural argument is the same: a system that resolves ambiguity too quickly — that replaces the productive discomfort of not-knowing with the comfortable clarity of a confident response — is a system that has traded exploration for exploitation at the deepest level. It has traded the capacity to discover what the question is for the efficiency of answering a question it has already committed to.
The organizational structures that preserve ambiguity are, by their nature, uncomfortable. They require leaders to tolerate the discomfort of not-knowing. They require teams to sit with conflicting interpretations rather than resolving them. They require organizations to resist the seductive clarity of AI-generated analyses in favor of the messy, slow, cognitively expensive process of maintaining multiple interpretive frameworks simultaneously.
These structures are what March spent his career arguing for — not because ambiguity is pleasant, but because ambiguity is the condition under which exploration occurs. An organization that cannot tolerate ambiguity cannot explore. An organization that cannot explore cannot adapt to genuinely novel circumstances. And the genuinely novel circumstances — the ones that no training set contains, that no pattern-matching can anticipate, that no confident analysis can resolve — are the ones that determine whether the organization survives.
The AI tool that resolves ambiguity in seconds is genuinely useful. The organizational muscle that tolerates ambiguity for weeks is genuinely essential. The tension between them is not a problem to be solved. It is a balance to be maintained — the same balance between exploitation and exploration that March identified as the fundamental challenge of organizational life, now operating at the level of interpretation itself, in the space where questions are formed before answers are sought.
The prescription is easier to state than to implement: maintain the balance between exploration and exploitation in an environment where exploitation has become twenty times more productive than it was twelve months ago. Every structural feature of the organization — its incentive systems, its performance metrics, its cultural norms, its promotion criteria, its budget allocation processes — is calibrated to a world that no longer exists. The calibration must be rebuilt, and it must be rebuilt while the organization is operating at full speed, which is the organizational equivalent of rebuilding an engine while the car is on the highway.
James March did not leave a blueprint. He left something more useful: a set of structural principles that describe the conditions under which the exploration-exploitation balance can be maintained. The principles are general. Their application to the specific conditions of the AI moment requires translation — the same kind of translation that Segal describes in *The Orange Pill* when he argues that "the dams need building" and "they need maintaining" and "they need to be built not for just the beaver's sake, but for the entire ecosystem that relies upon them."
The first principle is the protection of slack. Slack, in March's vocabulary, is organizational surplus — resources that exceed what is required for current operations. Slack is inefficient by definition. It is unused capacity, unallocated time, uncommitted budget. It is the engineering team that has two more people than the current sprint requires. It is the research budget that funds a project with no connection to this quarter's revenue targets. It is the afternoon that a developer spends on something that interests her but that no one has asked for.
Slack is the organizational resource that funds exploration. Without slack, every resource is committed to exploitation, and the organization has no capacity for the uncertain, often-failing, occasionally transformative experiments through which genuinely new capabilities are discovered. March recognized that slack is perpetually under threat — every efficiency initiative, every headcount optimization, every lean-management program reduces slack — and that the reduction of slack, which looks like improved efficiency on every metric the organization tracks, is actually the systematic dismantling of the organization's adaptive capacity.
AI intensifies the threat to slack in a specific way. When AI makes each individual twenty times more productive, the organizational instinct is to capture the productivity as output rather than preserve it as exploratory capacity. The arithmetic is irresistible: if five engineers can now do the work of a hundred, why not reduce the team to five? Or, alternatively, why not keep the whole team and direct it to produce twenty times more of what it already produces?
Both responses eliminate slack. The first eliminates it through headcount reduction. The second eliminates it through work intensification — the phenomenon the Berkeley researchers documented, where freed-up time is immediately filled with additional exploitation tasks. In either case, the surplus that would have funded exploration is captured by exploitation, and the organization's adaptive capacity is reduced in direct proportion to the productivity gain.
The dam against this threat is deliberate, structural, and organizationally uncomfortable: the explicit reservation of a percentage of AI-freed capacity for exploration. Not innovation labs, which tend to be exploitation in exploratory clothing — structured programs with defined objectives and measurable outcomes that are exploration in name only. Genuine exploration: unstructured time, undefined objectives, the permission to pursue questions that cannot be justified by their expected returns.
Segal describes something approaching this in his account of the "vector pods" at a company he advises — "small groups of three or four people whose job is not to build but to decide what should be built." The vector pod is a structural reservation of human capacity for the question that precedes exploitation: what deserves to exist? But the vector pod, as described, is still oriented toward the organization's existing business. The deeper reservation — time for questions that are not connected to the current product, the current market, the current strategy — requires a tolerance for organizational foolishness that most leaders find genuinely difficult to sustain when the exploitation machine is producing twenty-fold returns.
The second principle is the preservation of experiential diversity. March's computational models demonstrated that organizations converge on better beliefs when their members maintain diverse, idiosyncratic perspectives for longer — when the slow learners, the misfits, the people who do not immediately adopt the organizational consensus, are tolerated and even protected. The diversity of perspective is what prevents premature convergence on a local optimum. The organization that aligns quickly exploits efficiently but explores poorly. The organization that tolerates diversity aligns slowly but discovers possibilities that the aligned organization could never see.
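The claim comes from a specific family of simulations: the mutual-learning models March published in 1991, in which an organizational "code" and its members learn from one another. The sketch below is a simplified reconstruction, not his published model; the dimension count, population size, learning rates, and run length are illustrative assumptions. The qualitative result survives them: slower socialization leaves idiosyncratic beliefs in circulation longer, and the code ends up knowing more.

```python
# A simplified reconstruction of the mutual-learning dynamic March modeled in 1991.
# Parameters (dimensions, population, learning rates, period count) are illustrative
# assumptions, not his published settings.
import random

def knowledge(beliefs, reality):
    """Fraction of dimensions on which a set of beliefs matches reality."""
    return sum(b == r for b, r in zip(beliefs, reality)) / len(reality)

def simulate(p_socialize, p_code_learns=0.5, m=30, n=50, periods=80):
    reality = [random.choice([-1, 1]) for _ in range(m)]
    code = [0] * m                                          # the organizational code starts agnostic
    people = [[random.choice([-1, 0, 1]) for _ in range(m)] for _ in range(n)]
    for _ in range(periods):
        # Socialization: individuals adopt the code's belief on each dimension
        # where the code has one, with probability p_socialize.
        for person in people:
            for d in range(m):
                if code[d] != 0 and person[d] != code[d] and random.random() < p_socialize:
                    person[d] = code[d]
        # Codification: the code learns, dimension by dimension, from the members
        # who currently know more than it does.
        code_score = knowledge(code, reality)
        superior = [p for p in people if knowledge(p, reality) > code_score]
        for d in range(m):
            votes = [p[d] for p in superior if p[d] != 0]
            if votes and random.random() < p_code_learns:
                code[d] = max(set(votes), key=votes.count)   # adopt the majority view of the superior group
    return knowledge(code, reality)

def average_knowledge(p_socialize, runs=50):
    return sum(simulate(p_socialize) for _ in range(runs)) / runs

print("fast socialization (p = 0.9):", round(average_knowledge(0.9), 3))
print("slow socialization (p = 0.1):", round(average_knowledge(0.1), 3))
# Fast alignment homogenizes beliefs before the code can absorb what the
# idiosyncratic members knew; slow learners keep diversity in circulation
# long enough for the code to end up more accurate.
```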
AI threatens experiential diversity through a mechanism that is almost invisible: the homogenization of cognitive process. When every member of an organization uses the same AI tool, they are all filtering their thinking through the same training set, the same optimization criteria, the same pattern-matching architecture. The outputs differ because the inputs differ — different practitioners ask different questions — but the cognitive process is shared. The AI imposes a subtle uniformity on the organization's thinking, not by dictating conclusions but by shaping the space of possible conclusions through the statistical tendencies of its training data.
The dam against this homogenization is the deliberate cultivation of cognitive diversity through experiences that AI does not mediate. Periodic work without AI tools — not as punishment or Luddism but as a structural practice that maintains the cognitive pathways AI does not exercise. Cross-functional rotations that expose practitioners to domains their AI tools have not been trained for. Mentoring relationships between experienced practitioners and newcomers that transmit tacit knowledge — the embodied, practice-built understanding that cannot be encoded in a training set and cannot be generated by a model.
The Berkeley researchers proposed a version of this under the name "AI Practice" — structured pauses built into the workday, sequenced rather than parallel work, protected time for human-only engagement. March's framework supplies the theoretical rationale for these interventions: they are not merely wellness initiatives. They are structural protections for the experiential diversity that prevents premature convergence. They preserve the conditions under which exploration can occur.
The third principle is the institutional tolerance for failure. Exploration fails. That is its nature. Most experiments produce no return. Most new ideas are bad ideas. Most attempts to enter unfamiliar territory end in retreat. The failure rate is not a deficiency of the exploration process; it is a structural feature. If exploration succeeded reliably, it would not be exploration. It would be exploitation of a well-understood domain.
AI raises the baseline of expected performance. When AI-augmented exploitation produces reliable, measurable, impressive results, the tolerance for the unreliable, unmeasurable, often unimpressive results of exploration drops. The manager who authorized a failed experiment before AI could point to the general uncertainty of the environment — failure was common, expectations were calibrated accordingly. The manager who authorizes a failed experiment after AI must explain why resources were wasted on an uncertain venture when those same resources could have been directed to exploitation with twenty-fold returns.
The dam against this intolerance is cultural and structural: the explicit recognition, embedded in performance evaluation and budget processes, that exploration failure is not only acceptable but expected, and that the absence of exploration failure is evidence not of excellent judgment but of insufficient exploration. The organization that never fails at exploration is the organization that has stopped exploring. Its quarterly numbers may be excellent. Its learning system is dying.
The fourth principle is the preservation of ambiguity at strategic levels. Not all ambiguity should be preserved — operational ambiguity is genuinely costly, and AI's ability to resolve it is genuinely valuable. But strategic ambiguity — the condition of having multiple, conflicting interpretations of the organization's competitive position, its market, its purpose — should be maintained longer than the natural organizational impulse demands. Strategic ambiguity keeps the interpretive space open. It prevents premature commitment to a single strategic framework. It allows the organization to hold multiple possibilities simultaneously, testing them against emerging evidence, adjusting its trajectory as the evidence accumulates.
AI resolves strategic ambiguity as efficiently as it resolves operational ambiguity, and with the same seductive confidence. The leader who asks Claude for a strategic analysis receives a clear, well-structured response that selects one interpretation from the many that were available. The response is useful. It is also premature. The leader who sits with the strategic ambiguity longer — who tolerates the discomfort of not-knowing at the highest level of organizational decision-making — is the leader most likely to discover the interpretation that the AI's confident analysis foreclosed.
These four principles — the protection of slack, the preservation of experiential diversity, the institutional tolerance for failure, and the maintenance of strategic ambiguity — are the structural conditions that March's framework identifies as essential for the exploration-exploitation balance. They were difficult to maintain before AI. They are vastly more difficult to maintain after, because AI has made exploitation so productive that the case for protecting exploration's conditions must be made against the strongest counterargument in the history of organizational management: the evidence of twenty-fold returns on the exploitation side.
The case must be made anyway. It must be made by leaders who understand, as March understood, that the conditions for exploration are precisely the conditions that exploitation is most eager to eliminate — and that the elimination, though it produces immediate and measurable gains, does so at a cost that is invisible to the learning system until the cost has already been paid. The dam must be built by people who know that the river will press against it constantly, that the pressure is rational, that the exploitation returns justify the pressure, and that the dam must hold anyway — because the pool behind the dam, the organizational capacity for exploration, is the capacity from which the organization's future emerges.
The dam is not a project with a completion date. It is an ongoing practice of maintenance, as constant and as unglamorous as the practice March described: studying the river, identifying the leverage points, building at those points, repairing what the current loosens, and building again.
---
The balance between exploration and exploitation cannot be resolved. This is March's deepest insight, and it is the one most easily misunderstood. It sounds like resignation. It is not. It is a description of a structural condition that admits of management but not of solution — the way a chronic condition admits of treatment but not of cure. The physician who manages a chronic condition does not pretend that the condition will disappear. The physician adjusts the treatment as the condition evolves, monitors for changes, intervenes when the balance shifts, and accepts that the management will continue for as long as the condition exists.
The exploration-exploitation tension will exist for as long as organizations exist, because it is a property of any adaptive system that must simultaneously use what it knows and search for what it does not. AI does not introduce this tension. AI did not create a new organizational problem. AI took the oldest problem in organizational theory and amplified it to a scale that makes the previous management strategies inadequate — strategies that were imperfect to begin with, developed through decades of trial and error, and calibrated to an environment in which the exploitation returns were large but not overwhelming.
The environment has changed. The calibration must change with it. And the recalibration will not be a single adjustment, made once and maintained. It will be a continuous process of adaptation, as the AI capabilities evolve, the organizational response evolves, the competitive environment evolves, and the balance point shifts in ways that no one can predict.
March understood this. His entire body of work is an argument against the organizational fantasy of stable equilibrium — the belief that the right strategy, once found, can be maintained indefinitely. Strategies are products of their environments. When the environment changes, the strategy that was optimal becomes suboptimal, and the organization must search for a new one. The search is exploration. The new strategy, once found, is exploited until the environment changes again. The cycle is permanent.
But March also understood something more uncomfortable: the cycle is not smooth. The transition from one strategy to the next is not a gradual adjustment. It is a disruption — a period of organizational turmoil in which the old strategy has failed and the new one has not yet been discovered, in which the exploitation machine is running on momentum while the exploration machine is searching, uncertainly and often unsuccessfully, for the next approach. These transitions are the moments when organizations are most vulnerable, most likely to make mistakes, and most in need of the adaptive capacity that exploration provides.
The AI transition is one of these moments. The old strategy — build teams of specialists, assign them to defined tasks, measure their output, optimize their processes — has not failed in the sense of producing bad results. It has failed in the sense of being inadequate to the new environment. The new strategy — whatever it is, and no one yet knows what it is — has not yet been discovered. The organizations that will discover it are the ones that are exploring, now, in the period of maximum turbulence, when the exploitation returns are highest and the temptation to defer exploration is strongest.
This is March's paradox, and it applies to the AI moment with particular force: the time when exploration is most needed is the time when it is hardest to justify. The exploitation machine is producing unprecedented returns. The quarterly numbers are spectacular. The board is satisfied. The organizational learning system, doing exactly what learning systems do, is reinforcing exploitation and discouraging exploration at every level. And the environment is changing faster than at any point in organizational memory, which means the exploitation strategy that is producing those spectacular numbers may already be obsolete, and the evidence of its obsolescence will not appear in the data until it is too late to act on it.
March's career-long engagement with *Don Quixote* provides an emotional register for this paradox that his formal models do not. March taught Quixote for years at Stanford, not as a literary curiosity but as a model for organizational leadership — the figure who acts with total commitment in a world he does not fully understand, whose persistence in the face of uncertainty is the only form of integrity available to a creature that must act before it knows. The windmills may be windmills. They may be giants. The knight who charges does not know. The knight who does not charge has already surrendered.
March did not use Quixote sentimentally. He used Quixote to illustrate a structural feature of decision-making under ambiguity: when the situation is genuinely uncertain, when the correct interpretation is unknowable in advance, the quality of the decision cannot be evaluated by its outcome. A good decision that produces a bad outcome was still a good decision. A bad decision that produces a good outcome was still a bad decision. The quality of the decision resides in the process, not in the result — in the quality of the exploration that preceded it, the range of alternatives that were considered, the tolerance for ambiguity that was maintained throughout.
The leader navigating the AI transition is Quixote. The windmills are the organizational challenges that AI presents: the ratchet, the competency trap, the myopia, the organizational forgetting, the garbage can dynamics that produce unmanaged scope expansion. Some of these are windmills — manageable challenges that will yield to competent leadership. Some may be giants — structural threats that will destroy organizations that are not prepared. The leader does not know which is which. The environment is too new, the precedents too few, the data too sparse.
What the leader can do — what March's framework argues the leader must do — is maintain the organizational capacity for both responses. The capacity to exploit the windmill, to capture the AI productivity gains, to ship features at twenty times the previous rate, to clear the backlog, to hit the quarterly numbers. And the capacity to explore the giant, to investigate the possibility that the organizational model itself needs rethinking, to fund the uncertain experiments that might reveal what the next strategy looks like, to tolerate the ambiguity of not yet knowing whether the organization's current trajectory is headed toward expansion or obsolescence.
Maintaining both capacities simultaneously is the balance. The balance has no formula. It has no optimal ratio that can be calculated in advance and maintained through discipline. It shifts constantly, as the environment shifts, as AI capabilities evolve, as the competitive landscape reorganizes, as the organization's own learning system adapts to the new conditions and produces new biases that must themselves be managed.
March described this dynamic with a characteristic refusal of comfort. The organizations that succeed are not the ones that find the right balance and hold it. They are the ones that adjust, continuously and imperfectly, in response to signals that are ambiguous, data that is insufficient, and circumstances that are genuinely novel. The adjustment is never correct. It is merely less wrong than the alternative, which is to stop adjusting — to lock in either pure exploitation or pure exploration and ride the strategy until it fails.
The lock-in is the temptation. AI makes exploitation so productive that the temptation to lock in — to devote every resource to the exploitation machine — is the strongest organizational temptation that March's framework has ever had to describe. The returns are real. The productivity is measurable. The quarterly numbers are there, on the slide, undeniable.
And the exploration — the slow, uncertain, often-failing, occasionally transformative search for what comes next — is still there, too, invisible and unmeasured and indispensable. The organization that kills it will not know what it lost until the environment changes and the exploitation strategy that looked so brilliant on every metric the learning system could track is revealed to have been optimized for a world that disappeared while the organization was too busy exploiting to notice.
March would not have ended on a note of resolution. He would have ended with the observation that the tension is permanent, the management is imperfect, and the only organizational sin that is unforgivable is the pretense that the tension has been resolved — that the right strategy has been found, that the balance has been achieved, that the organization can stop adjusting and simply execute.
The pretense is the trap. The tension is the work. And the work, like all genuinely important work, never ends.
---
The detail that lodged in my mind was not a framework or a model. It was a word: foolishness.
March spent decades building some of the most rigorous formal models in organizational science — computational simulations, mathematical analyses, the kind of work that earns you the respect of people who believe only in what can be measured. And then he wrote an essay arguing that the most important thing an organization could do was be foolish. Deliberately, structurally, as a matter of institutional design. He meant it. He defended it for fifty years.
That tension — between the rigor and the foolishness, between the formal model and the Quixote lecture — is what made March matter to me during the months I spent writing *The Orange Pill* and the months since. Because the tension he described is the one I live inside every day.
I know the twenty-fold number. I measured it myself, in a room in Trivandrum, watching engineers transform before my eyes. I know the exploitation case. I have made it to my board, to my investors, to the audiences I speak to about what AI can do. The case makes itself. The numbers are real.
And I know — because March's framework forced me to see it — that the numbers are the trap. Not because they are wrong. Because they are so right, so visible, so compelling, that they crowd out the question the numbers cannot answer: are we building the right thing, or are we building the current thing twenty times faster?
I keep coming back to what March said in that interview: "The notion that magically, through learning, we will end up with an optimum set of rules, I think is fanciful." He was talking about organizations, but he might as well have been talking about me. About the seductive belief that if I just work harder with Claude, if I just exploit faster and more completely, I will optimize my way to the right answer. The exploitation machine whispers that the answer is more: more features, more chapters, more output, more velocity. March whispers back that the answer might be in the thing I have not tried yet — the foolish experiment, the undefined question, the afternoon spent on something I cannot justify to anyone, least of all myself.
I do not always listen to March. The exploitation machine is louder. But I have started building structures to protect the whisper. Small structures. Imperfect ones. Time reserved for questions I cannot answer. Conversations with my team that have no agenda. The willingness to sit in a meeting and say, "I don't know what we should do about this," and to let the silence that follows last longer than my discomfort demands.
These are dams. They are tiny, and the river presses against them every hour of every day. But the pool behind them — the organizational capacity for genuine exploration, for the kind of searching that produces not faster answers but better questions — is the thing that will determine whether what we build next is worth building at all.
March died in 2018, four years before the tools I write about existed. He never saw Claude. He never experienced the productive vertigo of the orange pill. But his framework, built across fifty years of watching organizations learn and forget and learn and forget again, is the most precise diagnostic instrument I have found for understanding what is happening inside the organizations that are adopting AI right now — including my own.
The balance has no formula. The tension is permanent. The adjustment never ends.
I find that strangely comforting.
-- Edo Segal
James March proved that organizations do not fail from incompetence. They fail from competence -- from getting so good at what they already do that they cannot discover what they should do next. He called it the competency trap, and he spent fifty years mapping its mechanisms: the myopia of learning systems that see only the last quarter, the drift toward exploitation that no one chooses but everyone reinforces, the organizational forgetting that erases hard-won knowledge through disuse.
AI has made the trap deeper than March could have imagined. When a tool delivers twenty-fold productivity gains on existing work, every rational signal in the organization screams: do more of this. The irrational signal -- the whisper that says *but is this the right work?* -- cannot compete. March argued that protecting that whisper is the most important thing a leader can do. He called it the technology of foolishness.
This book applies March's framework to the AI revolution with surgical precision. It reveals why the organizations celebrating the loudest may be the ones adapting the least -- and what the leaders who understand the difference are building instead.

A reading-companion catalog of the 30 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *James March — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →