Allen's two clarification questions are the engine of GTD's conversion of vague anxiety into concrete commitment. They work because they refuse to let imprecision persist, forcing every captured item into the form of a defined outcome and an executable action. That refusal is powerful: the mind cannot hold a clearly defined next action with the same anxious energy it attaches to a vague worry. Clarity dissolves anxiety.
But the questions contain a hidden assumption that the AI age has exposed. They assume the item is worth clarifying. In a world where execution is expensive, the assumption was reasonable — ideas that weren't worth the hours, days, or weeks they would require tended to die quietly in the Someday/Maybe list. The cost of execution was the gatekeeper. When execution becomes cheap, the gatekeeper vanishes. Any idea that can be executed in an afternoon can survive clarification, and the pipeline floods.
The crisis surfaces a deeper issue Cal Newport identified years before AI made it acute: Allen's methodology treats all commitments as structurally equivalent, whether they connect to one's deepest ambitions or to logistical annoyances. AI amplifies this universalism catastrophically. The trivial and the profound now share the same processing pipeline, the same execution cost, and the same feeling of productivity — and the system provides no criterion for distinguishing them.
The crisis is named here for the first time, but its components have been visible in the GTD discourse for years. Allen himself, on the MindHack Podcast and in his 2018 Zapier interview, acknowledged that the methodology's assumptions were shifting under the weight of new tools, while stopping short of redesigning the framework. Cal Newport's critique in Deep Work (2016) anticipated the structural vulnerability that AI has now exposed with full force.
The framing draws on Segal's orange pill moment and the phronesis barrier: the claim that the collapse of execution cost reveals what was always the harder problem — practical wisdom about what deserves to exist.
Cost was the hidden gatekeeper. In the pre-AI world, the expense of execution filtered commitments by worthiness without the practitioner consciously performing the filtering.
The filter has collapsed. Any idea that can be described can be executed, which means every idea passes the execution threshold and reaches the clarification stage.
A prior question is required. Before asking "What is the outcome?" and "What is the next action?", the practitioner must ask "Should this be pursued at all?" — a question Allen's framework never formalized.
Worthiness requires hierarchy. Answering the prior question requires reference to the upper horizons of focus — the goals, vision, and purpose that determine which possibilities deserve to become commitments.