Trapped value is Moore's operational reframing of where AI should be deployed. The question is not 'Where can we use AI?' but 'Where is the most value trapped in our current processes?' Trapped value accumulates wherever systems were designed around human limitations: the customer service operation that routes every inquiry through a human agent regardless of complexity, the healthcare system that requires a radiologist to review every normal scan, the educational system that delivers identical lectures to students with wildly different needs. These are environments where AI can release enormous value, but only if the deployment is accompanied by the whole product components that capture the release rather than squander it.
Moore distinguishes trapped value from ordinary efficiency improvements. A marginal productivity gain on an already-optimized process is not trapped value; it is incremental improvement. Trapped value is categorical: it is the value locked beneath a productivity ceiling set by a process architecture that cannot be incrementally improved, because its design assumptions no longer hold. The customer service center was designed for a world in which every inquiry required human judgment. AI makes that assumption false for the majority of inquiries, which means the process architecture must be redesigned, not optimized.
The concept dictates a specific prioritization logic. The highest-value AI deployments are not where AI is easiest to deploy but where trapped value is greatest. Moore has been explicit that this prioritization points toward the public sector: higher education, social services, healthcare, law enforcement. These are sectors 'staggering under increasingly untenable demands' where the trapped value is enormous precisely because the processes were designed for resource constraints that AI can relax.
The tension in Moore's framework is that releasing trapped value is not the same as capturing it. Release requires only the generic product. Capture requires the whole product — the institutional redesigns, regulatory adaptations, professional identity reconstructions, and workflow reconfigurations that convert unlocked capacity into sustained outcomes. The AI industry in 2025–2026 has been relatively effective at releasing trapped value in narrow segments (developer productivity, customer service level-one inquiries) and relatively ineffective at capturing it in segments where the whole product gap is wide.
The Geoffrey Moore — On AI volume extends the trapped value framework by adding a second dimension: epistemological trapped value. The laggard is sitting on knowledge about what matters, what works, and what cannot be replaced by speed — knowledge that the tornado-phase industry needs but does not know how to extract. Both dimensions of trapped value, the operational and the epistemological, must be addressed for AI to realize its full potential.
Moore developed the trapped value concept across his later-career writing, most explicitly in his 2024 Valize article on Zone to Win with AI and his November 2024 blog post identifying public-sector domains as highest-priority AI targets.
Trapped value is categorical, not incremental. It is the gap between what a process could theoretically achieve and what its human-limited architecture allows.
Release is not capture. Unlocking trapped value requires only the generic product; capturing it requires the whole product.
Public sector domains have the most trapped value. Healthcare, education, social services, and law enforcement are target-rich precisely because demand exceeds capacity.
The bowling alley should prioritize trapped value, not ease. The sectors with the most value to release are often the sectors with the widest whole product gaps.
Epistemological trapped value also exists. Knowledge held by practitioners whom the tornado-phase industry ignores is itself a form of trapped value, and it will be lost unless it is extracted.