The purpose bottleneck names the central structural thesis of this book: that AI has shifted the constraint on productive work from execution capacity (which the tools have made abundant) to purposeful selection (which remains scarce). For twenty-five years, GTD was calibrated to manage an execution bottleneck — the gap between having commitments and acting on them. The methodology optimized throughput through that gap with ruthless efficiency. In the AI age, the execution gap has collapsed for a significant class of work, and the scarcity has migrated upward to the question Allen's upper horizons of focus were designed to address: what deserves to exist at all? The bottleneck is now purpose, and the components of GTD that practitioners historically skipped have become the components the new constraint requires.
The concept generalizes from the specific phenomenology Segal documented in The Orange Pill — the experience of AI-augmented builders encountering infinite executable possibility without a corresponding expansion in the capacity to choose among possibilities. The anxiety shifts accordingly: from the anxiety of forgetting (which GTD was built to address) to the anxiety of choosing (which GTD's upper horizons address implicitly but which the methodology as typically practiced has not emphasized).
The bottleneck has specific properties. It is not about information retrieval (AI handles that). It is not about sequencing or scheduling (AI can assist with those). It is not about execution speed (AI has effectively eliminated that constraint for much knowledge work). It is about the irreducibly human question of which among the infinite possible things deserves the finite resource of attention — and this question cannot be delegated because delegation to a system without a stake in the outcome returns answers filtered through criteria that are not the practitioner's own.
The practical consequence is that the migration of human relevance from the lower horizons (where AI operates well) to the upper horizons (where AI cannot operate at all) is not optional. It is structural. Practitioners who continue to invest their cognitive capacity primarily at the runway and project levels will experience productive work that lacks direction — high output, low alignment, the specific pathology that task seepage produces. The only sustainable response is to climb the hierarchy toward the horizons that remain genuinely human, even though these horizons produce abstract outputs the AI-accelerated environment systematically under-rewards.
The concept is named here, synthesizing observations distributed across Allen's framework, Segal's Orange Pill, and the empirical literature on AI and workplace productivity. Allen himself pointed toward the shift in his 2018 Zapier interview and subsequent podcast appearances, describing decision support as "infinite" while insisting that the human must still pick. What Allen did not fully articulate was that the picking itself would become the new scarcity.
The framing resonates with the judgment economy identified in adjacent analyses of the AI transition — the economic regime that emerges when execution cost approaches zero and the premium shifts to deciding what to execute. The purpose bottleneck is the productivity-methodology expression of that economic pattern.
The bottleneck migrates upward. Execution scarcity gives way to purpose scarcity; the constraint shifts from the runway to the upper horizons of focus.
AI cannot navigate the upper horizons. Goals, vision, and purpose require a stake in the outcome that no tool possesses, making them the structurally irreducible human contribution.
Infrastructure matters more than speed. In a purpose-bottlenecked regime, the practitioner's advantage comes from clearer criteria for choosing among possibilities, not faster capacity for executing them.
The least-implemented GTD components are now critical. The upper horizons, historically neglected by practitioners, become the only horizons whose investment yields non-commoditized returns.