The supercritical regime is the domain beyond the critical point where the rate of perturbation exceeds the system's capacity for integration. In a sandpile analogy, it's the condition where grains are falling so fast the pile cannot maintain any stable configuration — each avalanche triggers before the previous one has resolved, producing continuous collapse rather than reorganization. For human cognitive systems augmented by AI, the supercritical regime manifests as the Berkeley study's documented pattern: work intensifies, pauses disappear, task seepage colonizes every gap, and the grey fatigue settles in. The distinction between flow (operating at criticality) and compulsion (operating supercritically) is determined by whether the perturbation rate matches or exceeds the system's absorption capacity. AI tools, by collapsing response latency to seconds, make supercriticality the default unless deliberate structures control the perturbation rate.
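The perturbation-versus-absorption distinction can be made concrete with a toy backlog model — a minimal sketch, not drawn from the source, in which perturbations arrive at a fixed rate and at most a fixed capacity's worth of backlog is integrated each step (the names `arrival_rate` and `capacity` are illustrative):

```python
def unresolved_backlog(arrival_rate, capacity, steps):
    """Track unintegrated perturbations over time.

    Each step, `arrival_rate` new perturbations land and at most
    `capacity` units of accumulated backlog are integrated. Subcritical
    and critical rates keep the backlog bounded; a supercritical rate
    makes it grow without limit -- cascades pile on unresolved cascades.
    """
    backlog = 0.0
    trajectory = []
    for _ in range(steps):
        backlog = max(0.0, backlog + arrival_rate - capacity)
        trajectory.append(backlog)
    return trajectory

# Subcritical: the backlog stays at zero -- every perturbation is absorbed.
print(unresolved_backlog(0.8, 1.0, 100)[-1])   # 0.0
# Supercritical: the backlog grows linearly -- reorganization never catches up.
print(unresolved_backlog(1.5, 1.0, 100)[-1])   # 50.0
```

The crossover is abrupt: nothing accumulates until the arrival rate exceeds capacity, and then everything does — a simple picture of why the shift from flow to compulsion can feel like a phase change rather than a gradual slide.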
Christopher Langton's cellular automata experiments revealed three regimes: frozen (too much order, no computation), edge of chaos (productive complexity), and chaotic (too much disorder, computation dissolves into noise). The supercritical regime in self-organized criticality maps to Langton's chaotic regime — the system is perturbed so frequently that stable patterns cannot form, and the computational capacity available at criticality is lost to the noise of unintegrated perturbations. The human brain has natural mechanisms for avoiding supercriticality: attention fatigue forces disengagement, sleep consolidates the day's perturbations, boredom signals the need for reduced stimulation. AI interfaces bypass these mechanisms by eliminating wait states and maintaining engagement through infinitely available, immediately rewarding interactions.
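Langton parameterized his rule spaces by lambda, the fraction of rule-table entries mapping to a non-quiescent state, with frozen behavior at low lambda and chaotic behavior at high lambda. As a loose, hedged illustration only — using elementary one-dimensional cellular automata, which are far smaller than the rule spaces Langton actually swept, and in which lambda only roughly tracks his regimes — lambda and one update step can be computed as follows:

```python
def langton_lambda(rule):
    """Fraction of the 8 rule-table entries mapping to the active state.

    Computed here for an elementary (1-D, 2-state, 3-neighbor) cellular
    automaton -- a toy stand-in for the larger rule spaces Langton studied.
    Lambda near 0 tends toward the frozen regime; intermediate and high
    values admit complex or chaotic behavior.
    """
    return bin(rule & 0xFF).count("1") / 8.0

def step(cells, rule):
    """One synchronous update of elementary CA `rule` with periodic edges."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

print(langton_lambda(0))    # 0.0 -> frozen: every configuration dies immediately
print(langton_lambda(30))   # 0.5 -> rule 30, a classic chaotic rule
print(step([0, 1, 1, 0, 1], 0))   # [0, 0, 0, 0, 0]
```

Rule 0 freezes any configuration in a single step, while rule 30 scrambles structure indefinitely — the two poles between which the computationally productive edge sits.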
Edo Segal's three-in-the-morning confession — 'I knew this, but I kept typing' — is the phenomenology of the supercritical regime experienced from inside. The work is still technically productive (code is generated, ideas are articulated, progress is made by conventional metrics), but the subjective signature shifts from the energized clarity of flow to the grinding compulsion of a system that cannot stop reorganizing. The Berkeley study documented this at organizational scale: workers using AI reported increased exhaustion alongside increased output, the two rising in tandem rather than trading off. The exhaustion is not caused by the volume of work but by the rate of cognitive reorganization exceeding the rate at which reorganization can be metabolized.
The supercritical regime is entered not through a deliberate choice to work harder but through the removal of natural stopping points. In pre-AI software development, compile waits, colleague delays, and implementation friction created gaps — cognitively unproductive gaps that were actually essential for integration, for the settling of perturbations, for the return to critical rather than supercritical dynamics. AI tools eliminate these gaps. The conversation is perpetual. The feedback is instant. The next prompt is always available. The system defaults to supercriticality unless deliberate dams — structured pauses, protected offline time, forcing functions that impose gaps the tool doesn't provide — are built and maintained.
The organizational consequence is that productivity gains measured by output metrics can mask degradation measured by worker capability and institutional knowledge. A team producing twenty times more features per quarter while operating supercritically is depleting its cognitive capital — the judgment, the deep understanding, the cross-domain insight — faster than it's being replenished. The depletion is invisible to the metrics because the metrics measure output, not the capacity to sustain output. The team looks phenomenally productive until, suddenly, it isn't — the senior members burn out, the judgment calls start failing, the architecturally unsound decisions made under supercritical time pressure accumulate into technical debt that eventually forces a crisis. The supercritical regime produces apparent success that conceals structural damage, a pattern that won't become visible until the damage is severe enough to affect the output metrics themselves.
The terminology of subcritical, critical, and supercritical originates in nuclear reactor physics, where a reactor is subcritical if each fission event produces less than one subsequent fission (chain reaction dies out), critical if each produces exactly one (self-sustaining), and supercritical if each produces more than one (runaway). The adaptation to sandpile physics and self-organized criticality was developed by Bak and extended by subsequent researchers studying systems from ecosystems to markets to neural networks. The application to human cognitive systems under AI augmentation is a novel extension, synthesizing criticality physics with burnout research, flow theory, and the phenomenology of the AI-augmented workplace.
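The reactor definition translates directly into a branching-process recurrence: if each event produces k successors on average, the expected event count after n generations is k^n. A minimal sketch (names illustrative, not from the source):

```python
def expected_chain(k, generations):
    """Expected event count per generation of a branching process.

    k < 1:  subcritical -- the chain reaction dies out.
    k == 1: critical -- self-sustaining at constant expected size.
    k > 1:  supercritical -- expected size grows geometrically (runaway).
    """
    counts = [1.0]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print(expected_chain(0.5, 10)[-1])   # ~0.00098: dies out
print(expected_chain(1.0, 10)[-1])   # 1.0: self-sustaining
print(expected_chain(1.2, 10)[-1])   # ~6.19: runaway growth
```

The same recurrence underlies the sandpile reading: criticality is the knife-edge where, on average, each toppling triggers exactly one more, and supercriticality is any multiplier above one.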
Exceeded absorption capacity. Supercriticality occurs when the perturbation rate surpasses the system's capacity to integrate and reorganize — cascades pile on cascades without resolution.
Compulsion's physical signature. The difference between flow and compulsion is not psychological but dynamical — same intense engagement, different perturbation rates relative to absorption capacity.
Default state without dams. AI tools' instant responsiveness and infinite availability make supercriticality the default — natural stopping points are eliminated, deliberate gaps must be engineered.
Productivity-exhaustion paradox. Supercritical systems can maintain high output temporarily while depleting the cognitive and emotional capital required to sustain that output — invisible degradation masked by apparent success.
Grey fatigue as diagnostic. The Berkeley study's 'grey fatigue' — exhaustion without recovery, depletion without disengagement — is the somatic signature of operating supercritically for extended periods.