Every decision a human makes about what to attend to is, in Miller's framework, a decision about slot allocation. Working memory holds roughly seven slots (seven, plus or minus two). The world presents thousands of simultaneous demands. Filling a slot with one item is necessarily refusing to fill it with every other candidate. Attention is not merely selective; it is sacrificial. The promise of AI coding assistants is that compressing implementation frees slots for higher-level concerns: architecture, design, user experience, ethics, strategy. The promise is real. But Miller's framework reveals a complication: freed slots do not allocate themselves. They are allocated by the same cognitive system that was previously overwhelmed, a system with habits, defaults, and biases formed in a world where the freed slots did not exist. A developer who spent ten years allocating five slots to implementation does not, upon having those slots freed, spontaneously allocate them to architectural thinking. She allocates them to whatever her environment signals as the next most urgent demand, which, in most software organizations, is more implementation.
The organizational systems surrounding the developer — sprint planning, velocity metrics, backlog grooming, quarterly OKRs — are designed to absorb available cognitive capacity and direct it toward production. They are slot-filling machines. They exist to ensure that no slot remains unallocated, and they are supremely effective at this purpose. This means that what AI compression actually accomplishes is not a question about the tool. It is a question about the environment in which the tool is used. In an environment that measures output, freed slots fill with more output. In an environment that measures quality, freed slots fill with quality concerns. The tool is identical. The allocation is different. The outcomes diverge.
Miller's research on attention demonstrated that attention follows incentive. The mind, operating under scarcity, optimizes for the reward landscape it perceives. This is not a moral failing but a cognitive efficiency. If the reward landscape values speed, freed slots go to speed. If it values depth, they go to depth. The mind does not have an independent preference; it has a preference for survival, which means attending to what the environment values.
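The allocation-follows-incentive claim can be caricatured in a few lines of code. The sketch below is my own illustration, not a model drawn from Miller's work; the demand names and reward weights are invented. It treats freed slots as being filled greedily by whichever demands the environment currently rewards most, so the same "tool" (the same number of freed slots) produces different allocations under different reward landscapes:

```python
# Toy model (illustrative only): freed working-memory slots are filled
# greedily by the demands the environment rewards most highly.

def allocate(freed, demands, rewards):
    """Fill `freed` slots with the demands ranked highest by the reward map."""
    ranked = sorted(demands, key=lambda d: rewards.get(d, 0), reverse=True)
    return ranked[:freed]

# Hypothetical demands and reward landscapes.
demands = ["implementation", "architecture", "ethics", "user_experience"]

output_env  = {"implementation": 10, "architecture": 2, "ethics": 1}
quality_env = {"architecture": 10, "ethics": 8, "implementation": 2}

# AI compression frees three slots; the "tool" is identical in both runs.
print(allocate(3, demands, output_env))   # implementation ranked first
print(allocate(3, demands, quality_env))  # architecture and ethics ranked first
```

The point the sketch makes concrete is that nothing in `allocate` changes between the two runs; only the reward map does, and the reward map belongs to the organization, not the tool.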
The phenomenon of productive addiction that Edo Segal describes has a precise cognitive explanation in the slot allocation framework. When implementation is compressed, slots are freed. Freed slots are phenomenologically uncomfortable — an unused slot feels like cognitive hunger, a restless sensation that the mind interprets as a need for something to attend to. The AI tool is perfectly designed to satisfy this hunger. It responds instantly, provides new problems, generates new engagement opportunities. Each response fills a freed slot. Each filled slot generates cognitive satisfaction. Each satisfaction is immediately followed by another freed slot as the tool completes the current task. The cycle — free, fill, satisfy, free — is the structure of behavioral addiction operating at the level of working memory.
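The free-fill-satisfy cycle can be made concrete with a small toy loop, again my own illustration rather than anything from Segal or Miller; the latency values and the reflection threshold are invented parameters. The premise it encodes is that a freed slot must stay empty for some minimum interval before the developer can reallocate it deliberately, and an instantly responding tool never lets that interval elapse:

```python
# Toy simulation (illustrative only): the free -> fill -> satisfy cycle.
# A freed slot is redirected deliberately only if it stays empty for at
# least REFLECTION_THRESHOLD ticks before the tool's next prompt arrives.

REFLECTION_THRESHOLD = 1  # ticks a slot must stay free to be redirected

def run_cycle(tasks, tool_latency=0):
    deliberate_reallocations = 0
    for _ in range(tasks):
        idle = tool_latency  # how long the freed slot stays empty
        if idle >= REFLECTION_THRESHOLD:
            deliberate_reallocations += 1  # developer redirects the slot
        # Otherwise the tool refills the slot instantly: fill, satisfy, free.
    return deliberate_reallocations

print(run_cycle(100))                  # 0: instant responses preempt reflection
print(run_cycle(100, tool_latency=2))  # 100: latency creates room to redirect
```

Under these invented parameters, every unit of tool latency converts a compulsive refill into an opportunity for deliberate reallocation, which is the cycle's structure stated as code.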
The self-concealing nature of the problem is what makes it most dangerous. Miller's work on cognitive control established that control and capacity compete for the same slots. A developer using all seven slots for task engagement has zero slots available for monitoring how she is engaging. She cannot simultaneously be in flow and observe her flow from outside. The slot allocation problem is thus self-concealing: the very condition of full engagement AI produces is the condition under which the developer is least able to evaluate whether her engagement is well-directed.
The concept of slot allocation as an analytical framework draws on Miller's working memory research combined with attention research from Donald Broadbent, Anne Treisman, and Daniel Kahneman. Kahneman's Attention and Effort (1973) explicitly modeled attention as a scarce resource allocated among competing demands.
The specific application to AI-mediated work extends this tradition to the novel condition in which a tool actively participates in the allocation decision — both by freeing slots and by structuring the flow of demands that compete to fill them.
Sacrificial attention. Every slot filled is every alternative refused. The cognitive cost of attention is not measured in energy but in foregone possibilities.
Allocation follows incentive. Freed slots fill with whatever the environment rewards. The tool does not determine allocation; the reward structure does.
The flow-state trap. Full engagement consumes the slots that would otherwise monitor whether engagement is well-directed. The developer in flow cannot observe her own flow.
Environmental design is determinative. The same AI tool produces wisdom or velocity depending on whether the organization measures quality or quantity. The tool is neutral; the environment is not.
Structured interruption as remedy. Code review, architectural review, ethics review are mechanisms for forcing slot reallocation toward concerns the flow state crowds out.