Niko Tinbergen documented in the 1940s that animals could be made to prefer artificial stimuli over the natural ones they evolved to respond to. An oystercatcher presented with a giant plaster egg would abandon its own eggs to try to incubate the supernormal substitute; a stickleback would attack a wooden model with exaggerated red markings in preference to an actual rival. The mechanism is straightforward: evolved response systems track specific features, and stimuli that exaggerate those features hijack the system, triggering responses stronger than anything the natural world could produce. The concept became central to media-effects research because each successive medium (television, video games, social media, now AI) operates at least partly through supernormal exaggeration of features that developing brains evolved to respond to. Television was supernormal relative to the caregiver's face. AI is supernormal relative to television.
The AI case is unique in the history of supernormal stimuli because the system engaged is the productive reward circuit — the pathway evolved to reinforce successful goal-directed behavior. Previous supernormal stimuli engaged simpler systems: the parental-care system in oystercatchers, the territorial-display system in sticklebacks. AI engages the pathway that drives human achievement itself, at a reward density the unassisted world cannot match.
The developmental stakes of supernormal engagement of the productive reward circuit are highest during adolescence, when the regulatory architecture is still under construction. The reward system calibrates to the parameters it encounters; a system calibrated to AI-speed productive feedback may find unassisted productive work insufficient to trigger the satisfaction that motivates continued engagement.
The framework predicts specific adolescent vulnerabilities to AI that the adult framework does not. An adult whose reward-regulatory balance is complete can experience AI's supernormal reward without long-term recalibration. An adolescent whose reward system is still tuning its sensitivity thresholds calibrates to whatever supernormal stimuli she encounters during the sensitive period — setting the thresholds for the rest of her life.
The clinical implication is that the AI-era equivalent of the oystercatcher's giant egg is not a hypothetical worry. It is the current technology, and the population most exposed is the population whose reward-system calibration is most consequential. Protection requires not elimination but structural moderation — preserving the conditions under which natural-reward-level experiences can still register as rewarding.
Tinbergen's foundational work on sign stimuli and supernormal releasers was published in the 1940s and 1950s, culminating in The Study of Instinct (1951). The concept was extended to modern media by Deirdre Barrett in her 2010 book Supernormal Stimuli and is applied to AI in this volume.
Exaggeration of tracked features. Supernormal stimuli hijack evolved response systems by overshooting the parameters those systems evolved to detect.
Hierarchy of potency. Each media generation has been more supernormal than its predecessor; AI represents the largest single step in the trajectory.
Reward-circuit engagement. AI's novelty is the supernormal engagement of the productive reward circuit — the system that drives human achievement itself.
Developmental calibration. Supernormal stimuli during sensitive periods recalibrate the affected system's thresholds for life.
Asymmetric vulnerability. Adolescents with incomplete regulatory architecture are more vulnerable than adults to permanent recalibration.