Worthy amplification, in the Berridge framework, is the amplification of an integrated neural signal — the state in which wanting, liking, and caring are all active and modulating each other's output. The wanting system provides drive. The liking system provides hedonic verification that the drive is producing something satisfying. The caring system — the prefrontal, oxytocin, and default-mode circuits that generate concern for others and long-term consequences — provides the evaluative framework that determines whether the satisfaction is in service of something beyond the self. Each signal alone is insufficient. Wanting alone produces compulsion. Liking alone produces self-referential satisfaction without direction. Caring alone produces evaluation without momentum. The integrated signal — rare, demanding, and the primary psychological task of the AI age — is what The Orange Pill means when it asks whether a builder is worthy of amplification.
There is a parallel reading that begins not with neural integration but with the material conditions that make "worthy amplification" possible. The tripartite model of wanting-liking-caring assumes a subject who has the luxury of reflection, the security to pause wanting, and the social position to exercise care. But the AI amplifier doesn't exist in a vacuum — it runs on server farms powered by coal plants, maintained by gig workers who clean data at three dollars an hour, built from rare earth minerals mined by children. The "caring system" that supposedly provides moral direction operates only for those insulated from the substrate.
The framework's emphasis on individual neural states obscures the political economy of amplification itself. Who gets access to the amplifier? Whose "caring" counts as legitimate evaluation? The builder asking "does my work serve others" may be sincere, but the question assumes their evaluation matters more than the evaluation of those whose labor makes the amplification possible. The Congolese cobalt miner doesn't get to activate their caring system about whether AI art serves humanity — they're too busy surviving the wanting system's demand for more lithium-ion batteries. The content moderator in Manila reviewing outputs at industrial speed has no time for "reflective evaluation" — their liking system is suppressed by quotas, their caring system overridden by economic necessity. The worthy signal, in this reading, is not a neural achievement but a class privilege — the ability to integrate wanting, liking, and caring because someone else's disintegrated labor maintains the infrastructure that makes integration possible.
The three systems have different activation requirements, and integrating them requires conditions the AI workflow tends to eliminate. Wanting is activated by cues and variable rewards — the AI interaction provides these in abundance. Liking requires embodied effort, mastery, friction — the AI workflow reduces these. Caring requires reflective time, social presence, default-mode activation — the goal-directed intensity of AI engagement suppresses these. The integration is therefore not a natural equilibrium but a constructed achievement, maintained against structural forces pulling the three apart.
The practical test is the afterglow test extended into moral evaluation. Flow's afterglow indicates wanting-liking coupling — the work produced satisfaction that persists beyond the session. But the afterglow alone does not guarantee the work was worth doing in a larger sense. A builder can experience flow while producing something that serves only the builder. The caring system's contribution — the evaluation of whether the work serves anyone beyond the self — requires a different cognitive operation, activated under different conditions. The full test asks not only "does the world feel richer after I stop?" but "does the work serve what I would endorse on reflection as worth serving?"
Csikszentmihalyi's late-career distinction between narrow flow (in trivial activities) and what he called vital engagement (in activities that engage a person's deepest values and highest capabilities) maps precisely onto this tripartite integration. Vital engagement is wanting, liking, and caring operating together — the state in which motivation, satisfaction, and meaning are all contributing to the generation of behavior. The AI revolution makes this integration simultaneously more important and more difficult. More important because the amplifier is more powerful. More difficult because the interaction loop is structurally optimized for wanting and antagonistic to liking and caring.
The unworthy signal is not evil. It is not a moral failing. It is a neural state — wanting without liking or caring, dopaminergic compulsion driving behavior the other systems have not endorsed. The person generating this signal may be extraordinarily productive; the output may be technically excellent. But the output serves only the wanting system's priority, which is "more." The caring system did not ask whether more is what the world needs. The worthy signal is generated by a person who has maintained the coupling — who wants to build, likes the process of building, and cares about whether what is built serves others. This person's output, amplified by AI, carries the full signal. The drive produces momentum. The satisfaction produces sustainability. The caring produces direction. Direction is the critical addition that wanting alone cannot supply.
The concept extends Segal's central question in The Orange Pill — "are you worth amplifying?" — into a neurobiological specification. Berridge's wanting-liking framework supplies two of the three systems; the third is added by extending that framework through social cognitive neuroscience (theory of mind, moral reasoning, the default-mode network) to supply the other-directed evaluative capacity that neither wanting nor liking alone provides.
Three systems, integrated. Wanting (drive), liking (hedonic verification), and caring (other-directed evaluation) must operate in concert for amplification to be worth undertaking.
Wanting alone is insufficient. The wanting system asks only "more?" It does not evaluate whether more is what the world needs.
Liking alone is insufficient. Satisfaction is self-referential. A builder can experience flow while producing something that serves only the builder.
Caring alone is insufficient. Evaluation without momentum produces paralysis. The caring system requires the motivational drive that wanting provides.
Direction is the critical addition. Wanting with caring produces directed momentum — the builder who uses AI speed to reach a destination the caring system identified as worth reaching, rather than shipping faster and faster without asking where the shipping leads.
The merit of each framework depends fundamentally on the scale of analysis. At the individual psychological level, the tripartite integration model captures something essential — builders do need to maintain the coupling of drive, satisfaction, and evaluative care to produce sustainable, meaningful work. Here, Segal's framework dominates (carrying perhaps 80% of the explanatory weight). The phenomenology is accurate: wanting without liking produces burnout, liking without caring produces hollow satisfaction, and the integration of all three does generate what we recognize as worthy output.
But as we zoom out to the systemic level, the contrarian view gains explanatory power. When asking "whose wanting drives the system?" or "what maintains the infrastructure of amplification?" the substrate critique becomes primary (70%). The neural integration model assumes an autonomous subject, but autonomy itself is unevenly distributed. The gig worker training language models has little opportunity for "vital engagement" — their caring system may be fully active, but structural conditions prevent its expression in their work. The question shifts from "is this builder integrating their neural systems?" to "which builders get the chance to integrate?"
The synthetic frame that serves the topic best might be: worthy amplification operates at multiple scales simultaneously, and what counts as "worthy" changes with the frame. At the phenomenological scale, it's the integrated neural signal that produces sustainable, meaningful output. At the structural scale, it's the recognition that any individual's integration depends on others' disintegration — and truly worthy amplification would account for this debt. The most worthy signal might be generated not by the builder who achieves perfect wanting-liking-caring integration, but by the one who recognizes that their integration is subsidized and works to extend the possibility of integration to those whose labor currently makes theirs possible.