CONCEPT

Organizational Forgetting

The inevitable complement to organizational learning — the decay of unpracticed routines that erases institutional knowledge invisibly, now accelerated by AI's elimination of the instructive experiences from which the most consequential learning emerged.

Organizations remember through practice. The knowledge an organization possesses is not stored in filing cabinets or knowledge management systems; it is distributed across routines: standard operating procedures, habitual responses, taken-for-granted ways of doing things that encode decades of accumulated learning. A routine is organizational memory made operational: the lesson learned from a previous failure, embedded in a process that prevents recurrence without anyone needing to remember the failure itself.

That memory is maintained only through exercise. A routine that is not practiced decays: practitioners retire, documentation becomes outdated, institutional memory fades. Barbara Levitt and James G. March called this process organizational forgetting and recognized it as the inevitable complement to organizational learning. Every organization is simultaneously learning and forgetting, and no one decides what will be forgotten. AI accelerates the forgetting in ways the Levitt-March framework predicted but at a pace its original models, calibrated to pre-AI timescales, could not have anticipated.

In the AI Story

The acceleration operates through a specific mechanism: the elimination of the instructive experiences from which organizational learning emerges. When AI handles debugging, the organization does not experience debugging failures. When AI generates documentation, the organization does not experience the struggle of converting implicit knowledge into explicit prose. When AI produces specifications, the organization does not experience the painful iterative process of discovering what the specification should say through the failure of previous specifications to say it clearly. Each eliminated experience is, from the perspective of current operations, a cost savings. But the eliminated experiences are not merely costs; they are the raw material of organizational learning.

The debugging failure that revealed a systemic vulnerability in the architecture. The documentation struggle that forced the engineer to articulate assumptions she did not know she held. The specification iteration that exposed misalignment between what the team was building and what the customer actually needed. Each experience deposited a layer of organizational knowledge — knowledge encoded into routines, transmitted to new practitioners, maintained through continued practice. Edo Segal's geological metaphor maps precisely: every hour of debugging deposits a thin layer of understanding; the layers accumulate into the bedrock of organizational competence. AI stops the deposition. The bedrock, no longer being added to, begins to erode through personnel turnover and routine decay.

The erosion is invisible in the short term. The organization functions perfectly well on its existing bedrock. The AI handles the tasks that would have deposited new layers. Quarterly numbers remain strong. No one notices the foundation is thinning. The erosion becomes visible when the organization encounters a situation the AI cannot handle — a genuinely novel problem requiring the kind of deep, embodied, practice-built knowledge no training set contains. The AI hallucinates or produces output that is plausible but wrong in ways only deep domain expertise would catch. The organization turns to its human practitioners, and the practitioners do not have the judgment — not because they are less intelligent than their predecessors, but because the experiences that would have built the judgment were eliminated.

In 'Learning from Samples of One or Fewer,' March, with Lee Sproull and Michal Tamuz, examined a harder variant of the problem: organizations must often learn from events that are rare, ambiguous, or have not yet occurred. The interpretation of rare events depends heavily on the framework the organization brings to them, and that framework is itself a product of accumulated experience. When AI eliminates the experiences that built the framework, the organization loses not only specific knowledge but also the interpretive capacity that knowledge supported. An organization that has never experienced a specification failure does not merely lack knowledge about specification failures; it lacks the ability to recognize one when it occurs.

Origin

Levitt and March articulated the concept of organizational forgetting in their 1988 Annual Review of Sociology paper, 'Organizational Learning.' The concept drew on decades of empirical work on routines, institutional memory, and the mechanisms through which organizations transmit knowledge across generations. The framework's key insight was that forgetting is not a failure of learning but a structural feature of it: the inevitable consequence of a system in which knowledge is maintained through practice and practice is always finite.

The extension to AI builds on March's later work with Sproull and Tamuz on learning from rare events, and on his recurring argument that the most important organizational resources are the ones no metric captures. The concept has particular force in AI-era discussions because it identifies a specific structural mechanism — the elimination of instructive experience — through which AI adoption produces knowledge loss that is simultaneously invisible and consequential.

Key Ideas

Knowledge maintained through practice. Organizational memory is not stored; it is enacted. Routines that are not exercised decay.

Deposition of instructive experience. Each struggle, failure, and iteration deposits a layer of understanding that accumulates into organizational competence.

AI eliminates deposition. Tasks handled by AI do not deposit layers; the bedrock, no longer accumulating, begins to erode.

Invisible erosion. The organization functions well on existing bedrock until a genuinely novel situation arrives and the missing layers become consequential.

Interpretive capacity as casualty. The deepest loss is not specific knowledge but the ability to recognize and interpret situations the organization has not directly experienced.

Debates & Critiques

Whether AI-generated outputs can substitute for the instructive experiences they replace is contested. Optimists argue that AI trained on organizational history preserves the lessons of past failures in a form more accessible than embodied routines ever offered. Skeptics, including those drawing on March's framework, respond that the lessons available for extraction are only the lessons that were articulated; the far larger body of tacit knowledge, which existed only in the embodied practice of experienced practitioners, cannot be captured by any training corpus because it was never encoded in the first place. The debate has practical stakes: it determines whether organizations should treat AI adoption as knowledge-preserving or knowledge-destroying, and what interventions follow from the answer.

Further Reading

  1. Barbara Levitt and James G. March, 'Organizational Learning,' Annual Review of Sociology 14 (1988).
  2. James G. March, Lee S. Sproull, and Michal Tamuz, 'Learning from Samples of One or Fewer,' Organization Science 2 (1991).
  3. Richard R. Nelson and Sidney G. Winter, An Evolutionary Theory of Economic Change (1982).
  4. Lisanne Bainbridge, 'Ironies of Automation,' Automatica 19 (1983).