A thermometer calibrated for ambient temperatures between minus twenty and fifty degrees Celsius will produce meaningless readings if plunged into molten steel. The instrument is not broken. Its sensing elements, display mechanism, and conversion algorithms all function as specified. The input is simply outside the design envelope. The reading looks like a valid temperature but does not correspond to the actual thermal state being measured.

The builder's reward system is a calibration instrument. It measures the value of productive activity and produces a subjective experience of satisfaction proportional to the value it detects. In natural work environments, the reading is generally accurate because the features the system tracks — effort, obstacles overcome, complexity navigated — are reliable proxies for output quality. AI-augmented work disrupts this correlation, producing satisfaction readings that respond to the supernormal features of the process (speed, completeness, continuity) rather than to the features that historically predicted output quality.
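The analogy can be made concrete. The Python sketch below models an instrument of this kind; the sensor model, transfer function, and saturation voltage are hypothetical, chosen only to match the envelope described above. The point it illustrates is that every component functions as specified, and the out-of-range reading is indistinguishable, as a number, from a trustworthy one.

```python
# A minimal sketch of the failure mode described above. The sensor model,
# transfer function, and saturation voltage are hypothetical illustrations,
# not a claim about any real instrument.

SENSOR_MIN_V, SENSOR_MAX_V = 0.0, 7.0  # physical output range of the element


def sense(true_temp_c: float) -> float:
    """Hypothetical sensing element: linear inside the design envelope,
    saturating (not failing) outside it."""
    voltage = (true_temp_c + 20.0) / 10.0  # inverse of the calibration line
    return min(max(voltage, SENSOR_MIN_V), SENSOR_MAX_V)


def read_temperature(voltage: float) -> float:
    """Conversion algorithm: runs 'as specified' on any input voltage
    and always returns a plausible-looking temperature."""
    return voltage * 10.0 - 20.0


print(read_temperature(sense(25.0)))    # 25.0  -- inside the envelope: accurate
print(read_temperature(sense(1500.0)))  # 50.0  -- molten steel: looks valid, is wrong
```

Nothing in the second reading announces the failure; 50.0 sits comfortably inside the calibrated range even though the true input is thirty times beyond it.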
The satisfaction signal becomes uncalibrated in the specific sense that its readings no longer reliably correspond to what they used to indicate. A builder who spends four hours with Claude Code producing working software at a pace that would have required weeks of conventional development experiences high satisfaction. The satisfaction is real — the neurochemistry is genuine, the subjective experience is not illusory. But the reading is uncalibrated. It responds to the speed of the process rather than the quality of the product, to the completeness of the execution rather than the soundness of the judgment directing it.
The calibration failure is most visible in specific cases where the satisfaction signal pointed confidently at work that was, on examination, wrong. Segal's account of the Deleuze passage is the paradigmatic case: prose that felt like insight, connecting two philosophical threads with apparent sophistication, but that turned out on morning inspection to misapply Deleuze's concept of smooth space in a way obvious to anyone who had read the source material. The satisfaction signal said: this is good. The correctness check, performed by a different mechanism operating outside the supernormal influence, said: this is wrong. Both readings were accurate within their frames. The satisfaction system was accurately reporting on the features it was calibrated to detect. The correctness check was accurately reporting on the philosophical content the satisfaction system could not evaluate.
The closely related concept of the honest signal — Amotz Zahavi's evolutionary-biology framework — explains why calibration failure is particularly dangerous in the AI context. In the natural work environment, the satisfaction of completing a difficult task is an honest signal because it cannot be produced without the genuine effort, sustained attention, and accumulated understanding the task demands. AI-augmented work severs this connection. The satisfaction of seeing a working implementation appear in seconds — the same subjective satisfaction that previously required hours or days of effort — can now be produced without the effort that historically generated it. The signal has become dishonest: identical in subjective quality, decoupled from the underlying fact it used to indicate.
The practical implication is uncomfortable. In the presence of supernormal stimuli, the organism's internal evaluation cannot be trusted to distinguish genuine quality from the appearance of quality. The only reliable checks are external: testing output against reality through friction-rich processes, submitting it for evaluation to someone whose satisfaction system has not been exposed to the same supernormal stimulus, or deliberately hand-building critical components to maintain the calibrated understanding that AI-augmented work erodes. These are calibration-maintenance practices — the cognitive equivalent of recalibrating a thermometer against a known standard before trusting its readings, as sketched below.
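The thermometer version of such a maintenance practice is easy to sketch in Python. The reference points and tolerance below are hypothetical; what matters is the shape of the check, in which the standard comes from outside the instrument being evaluated.

```python
# A sketch of the recalibration check described above: verify the instrument
# against known external standards before trusting its readings.
# The reference points and tolerance are hypothetical.

TOLERANCE_C = 0.5

# (true value of the standard, the instrument's reading at that standard)
reference_checks = [
    (0.0, 0.1),    # ice-water bath
    (37.0, 37.9),  # body-temperature reference block
]

drift_detected = any(
    abs(reading - standard) > TOLERANCE_C
    for standard, reading in reference_checks
)
print("recalibrate before use" if drift_detected else "readings trustworthy")
```

The structural point carries over directly: the standard must sit outside the instrument, just as the correctness check must sit outside the satisfaction system it is checking.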
The framework of calibration failure is drawn from measurement theory in engineering and is applied to biological reward systems in the present volume as an extension of Barrett's work on supernormal stimuli. The specific application to AI-augmented satisfaction — and the identification of practices for calibration maintenance — develops from the convergence of Zahavi's honest-signal theory, Berridge and Robinson's wanting-liking distinction, and the clinical observations emerging from early 2026 work on productive addiction.
The concept generalizes to any evolved evaluation system — perceptual, emotional, social — that encounters stimuli outside its calibration range. The present volume's focus on productive satisfaction is narrow, but the mechanism applies equally to social-evaluation systems exposed to social media and to attention systems exposed to algorithmic media.
Instruments do not break. The satisfaction system functions exactly as designed; the input simply exceeds its design envelope.
Readings look valid. Uncalibrated satisfaction is subjectively indistinguishable from calibrated satisfaction — same feeling, same neurochemistry.
Proxy severance. The satisfaction signal responds to surface features (speed, polish, completeness) that no longer correlate with the quality they historically predicted.
External checks are required. Internal evaluation cannot be trusted in the supernormal range; testing against reality or against uncontaminated evaluators is the necessary supplement.
Calibration maintenance is a practice. Deliberate, friction-rich work without AI assistance preserves the baseline against which the satisfaction signal can be checked.