An organization develops proficiency in a particular technology, process, or way of serving its market. The proficiency produces returns. The returns reinforce the proficiency. The organization invests more in the approach that works, which makes it work better, which produces more returns, which justifies more investment. Every step of the cycle is rational. The alternative — investing in an unfamiliar technology whose returns are uncertain and whose learning curve is steep — cannot compete with the proven approach on any metric the organization knows how to measure. The trap springs when the environment changes: the technology the organization mastered is superseded, the process rendered obsolete, the market shifted. The organization discovers that it cannot adapt — not because it lacks talent or resources, but because every fiber of its learning system has been optimized for a world that no longer exists. The trap is not a failure of intelligence but an excess of it, directed so effectively at the current problem that no capacity remains for the next one.
March described the mechanism with characteristic precision: an activity with moderate potential but high accumulated skill will outperform an activity with high potential but no accumulated skill — consistently, visibly, persuasively. The learning system, observing these results, reinforces the skilled activity and neglects the unskilled one. The neglected activity never accumulates enough skill to demonstrate its potential. Its potential remains latent, invisible, permanently deferred. This is the structural core of the trap: rational response to observed returns produces convergence on what the organization already does well, at the cost of ever discovering what it could do better.
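The mechanism March describes can be sketched as a small simulation. All numbers below are hypothetical illustrations, not estimates from the literature: two activities, one with moderate potential but high accumulated skill, one with high potential but almost none; the learner mostly picks whichever activity has shown the better average return, and skill grows with use.

```python
import random

random.seed(0)

# Two activities: hypothetical payoff ceilings and starting proficiencies.
# A: moderate potential, high accumulated skill.
# B: high potential, no accumulated skill.
potential = {"A": 1.0, "B": 2.0}
skill = {"A": 0.8, "B": 0.1}  # proficiency in [0, 1]

def payoff(act):
    """Observed return = potential discounted by current skill, plus noise."""
    return potential[act] * skill[act] * random.uniform(0.9, 1.1)

avg_return = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

for t in range(1000):
    # Exploit the activity with the better observed average return,
    # with a small chance of exploring the other.
    if random.random() < 0.05:
        act = random.choice(["A", "B"])
    else:
        act = max(avg_return, key=avg_return.get)
    r = payoff(act)
    counts[act] += 1
    avg_return[act] += (r - avg_return[act]) / counts[act]
    # Learning by doing: skill in the chosen activity grows with use.
    skill[act] = min(1.0, skill[act] + 0.01 * (1.0 - skill[act]))

print(counts)
```

Run under these assumptions, the moderate-but-skilled activity absorbs nearly all effort: B's occasional trials never accumulate enough skill to reveal its higher ceiling, so its potential stays latent, exactly as the paragraph above describes.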
The SaaSpocalypse of 2026 is a precise instantiation. For two decades, the enterprise software industry had accumulated extraordinary competence in the SaaS model: build software, sell subscriptions, capture data, deepen integrations, raise switching costs. Each metric confirmed the model's superiority. Then AI drove the code layer toward commodity pricing, and a trillion dollars of market value vanished in eight weeks. The decline was not a judgment that these companies had become incompetent. It was a judgment that their competence had become a trap. The very thing they were best at — writing, selling, and maintaining software — was what AI had rendered insufficient as a basis for value.
The trap operates at the individual level with equal force. A senior engineer spends decades building the ability to feel a codebase the way a doctor feels a pulse — embodied intuition developed over thousands of hours. This expertise is genuine and irreplaceable. It is also caught in the trap. The expertise was built to solve problems at the implementation layer, which AI now handles with twenty-fold efficiency. The engineer's accumulated skill outperforms any alternative skill she might develop, because the alternative has no accumulated experience behind it. But the activity the skill was built for has been fundamentally altered. The skill is real; the potential of the activity has changed; the learning system continues to reward the existing skill because it still produces observable returns. Observable returns are a trailing indicator — they measure what the skill produced in the environment that existed when the skill was built, not what it will produce in the environment that is emerging.
The AI moment creates a second-order trap more insidious than the first. Organizations can become trapped not by competence in old technologies but by competence in AI-augmented exploitation itself — using AI with extraordinary effectiveness to accelerate what they already do, without using it to explore what they should do differently. The exploitation returns are so large, so measurably superior to any alternative AI allocation, that exploration use of AI cannot compete. Why use AI to explore uncertain new markets when you can use it to exploit existing markets twenty times faster? The question answers itself, and the answer is the trap.
Levitt and March introduced the competency trap in their 1988 Annual Review of Sociology paper 'Organizational Learning.' The concept crystallized decades of research on why successful organizations fail — not through incompetence but through the specific kind of competence that forecloses adaptation. The framework became central to subsequent work on organizational change, technological displacement, and the paradox of high-performing companies that collapse without apparent warning.
The empirical literature extending the concept includes studies of typewriter manufacturers displaced by word processors, photographic film companies displaced by digital imaging, newspapers displaced by online media, and taxi companies displaced by ride-sharing platforms. Each case exhibits the structural pattern the concept predicts: dominant incumbents, optimized around their existing business, unable to respond effectively to disruptions their competence made invisible until too late.
Competence as trap. The organization that executes best on the current game is the organization least equipped to recognize that the game has changed.
Observable returns as trailing indicator. Today's returns measure what skill produced in yesterday's environment, not what it will produce in tomorrow's.
Second-order trap. Competence in AI-augmented exploitation can itself become a trap, foreclosing the exploration use of AI that might discover what the organization should do instead.
Escape requires foolishness. Deliberate investment in activities the learning system does not support — what March called a technology of foolishness — is the only reliable escape route.
SaaSpocalypse as case study. The 2026 trillion-dollar repricing of software companies demonstrates the trap operating at industry scale and AI-compressed timescale.
Debate persists over whether the competency trap is best understood as an organizational-level phenomenon or a market-level one. Organizational theorists frame it as a failure of adaptive capacity within firms; economists sometimes reframe it as a natural consequence of creative destruction, where the failure of incumbents is the mechanism by which economic evolution proceeds. Both framings are true; the question is which intervention logic they imply. The organizational framing suggests leaders should cultivate adaptive capacity; the market framing suggests the attempt is usually futile and that resources should flow to new entrants instead. March's own position tended toward the organizational: adaptation is rare but possible, and the study of when and how it occurs is worth the effort even when the effort mostly fails.