Context reconstruction is the first of three cognitive operations the monitoring builder must perform at each evaluation event. When an AI agent's output arrives, the builder must reload that project's context into working memory: its objectives, current state, constraints, and the criteria against which the output will be judged. This context was displaced by whatever the builder was working on when the output arrived. Reconstructing it requires retrieval from long-term memory, reactivation of the project's task-set, and re-establishment of evaluative standards. Each operation consumes working memory capacity and executive control resources that are already partially occupied by residue from the previous task. The reconstruction is cognitively expensive and rarely complete — some context elements will have decayed, some associations will need re-derivation, and the rebuilt context will be thinner than the original.
The expense is measurable. Studies of task resumption after interruption consistently show a 'resumption lag' — the interval between returning to a task and regaining the performance level that characterized work before the interruption. The lag reflects the time required for context reconstruction and grows with the complexity of the interrupted task. For complex knowledge work, the lag can extend to twenty minutes or more, during which the worker is nominally back on task but operating with incomplete context. In AI-augmented monitoring, the builder experiences this lag at every evaluation event: she returns to a project she hasn't thought about in hours, attempts to reload its context in the seconds before evaluating the agent's output, and makes a judgment with a hastily reconstructed, incomplete understanding of what the project requires.
The incompleteness is what makes context reconstruction particularly dangerous for AI monitoring. When the builder executed the work herself, context was continuously maintained through the execution process. Writing code kept code architecture in working memory; drafting a document kept the argument's logic active; building an analysis kept the data relationships salient. The execution was itself a form of context maintenance. When AI executes and the builder monitors, this continuous maintenance disappears. Context must be reconstructed at each evaluation from whatever memory traces the previous engagement deposited. The traces degrade over time — Ebbinghaus's forgetting curve operates on project context just as it does on memorized facts — and the degradation means that each reconstruction recovers less than the previous engagement contained.
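The compounding effect of trace decay can be sketched with a toy exponential model in the spirit of the forgetting curve. The function names, the decay form, and the stability parameter below are illustrative assumptions, not fitted empirical values:

```python
import math

def retention(hours_elapsed: float, stability_hours: float = 24.0) -> float:
    """Ebbinghaus-style exponential decay: the fraction of project
    context still retrievable after a delay. 'stability_hours' is an
    illustrative parameter, not an empirically measured one."""
    return math.exp(-hours_elapsed / stability_hours)

def reconstructed_fraction(gaps_between_returns_hours: list[float],
                           stability_hours: float = 24.0) -> float:
    """Each return can only rebuild from what survived since the last
    engagement, so the recoverable fraction shrinks multiplicatively
    across successive reconstructions."""
    fraction = 1.0
    for gap in gaps_between_returns_hours:
        fraction *= retention(gap, stability_hours)
    return fraction
```

Because every gap multiplies in another decay factor, the recoverable fraction shrinks with each return: the formal analogue of each reconstruction recovering less than the previous engagement contained.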
Organizations compound the problem by assigning builders to more projects than working memory can maintain simultaneously. Miller's 7±2 limit on working memory chunks suggests an upper bound on the number of projects a builder can keep 'warm' in active memory. Exceed that limit, and projects fall out of working memory entirely, requiring full 'cold start' reconstruction at each return. The cognitive cost of cold-start reconstruction is substantially higher than the cost of refreshing a warm context, yet the typical AI-augmented builder in 2026 is assigned to five, eight, or twelve projects — well beyond the limit that working memory architecture can support. The result is that most evaluations are performed with cold-reconstructed context: expensive to rebuild, incomplete in its recovery, and degraded by the residue of whatever the builder was doing when the reconstruction was demanded.
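The warm/cold asymmetry can be made concrete with a back-of-the-envelope cost model. The capacity of seven, the tenfold cold-start penalty, and the uniform arrival of evaluations are all illustrative assumptions, not measured quantities:

```python
def reconstruction_cost(n_projects: int, capacity: int = 7,
                        warm_cost: float = 1.0,
                        cold_cost: float = 10.0) -> float:
    """Toy expected reconstruction cost per evaluation event.
    Projects within working-memory capacity stay 'warm' and are cheap
    to refresh; projects beyond it require a 'cold start' that is
    assumed to cost an order of magnitude more. Evaluations are
    assumed to arrive uniformly across projects."""
    if n_projects <= capacity:
        return warm_cost
    warm = capacity
    cold = n_projects - capacity
    return (warm * warm_cost + cold * cold_cost) / n_projects
```

Under these assumptions, a builder on five projects pays the warm cost of 1.0 per evaluation, while one on twelve projects pays an expected 4.75, with most evaluations landing on cold context.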
The concept builds on task-resumption research in cognitive psychology, particularly work by Erik Altmann and colleagues on memory for suspended goals, and extends it to the specific case of evaluating AI outputs across multiple projects. The term 'context reconstruction' emphasizes the active, effortful, resource-consuming nature of the process — not passive retrieval but active rebuilding of a cognitive workspace that was dismantled by the previous switch. Application to AI-augmented work appears to have originated in practitioner accounts and was given theoretical grounding through Leroy's framework by researchers examining why AI productivity gains came with reports of cognitive exhaustion.
Not instantaneous. Reloading project context into working memory requires retrieval, reactivation, and re-establishment — a process taking seconds to minutes, during which judgment quality is compromised.
Never complete. The reconstructed context is thinner than the original because some elements have decayed, some associations require re-derivation, and emotional momentum must be regenerated from a lower baseline.
Amplified by project count. Assigning builders to more projects than working memory can maintain forces cold-start reconstruction at most evaluation events, dramatically increasing cognitive cost per monitoring interaction.
Hidden by AI speed. Because AI produces outputs rapidly, the builder feels pressure to evaluate quickly, often before context reconstruction is complete — producing judgments made with partial understanding of what the project actually requires.