An amplifier increases the power of a signal. It does not distinguish between signal and noise, because the distinction between signal and noise is not a property of the waveform; it is a property of the sender's intention, which the amplifier cannot access. The consequence is a hard mathematical constraint: no amplifier, however sophisticated, can improve the signal-to-noise ratio of what it receives. In human-AI collaboration, this theorem is the mathematical foundation beneath Segal's central claim in The Orange Pill: AI amplifies what it is given, and no improvement to the model can overcome a low-quality input. The quality of the output is bounded above by the quality of the input. The ratio cannot be raised by the machine; it can only be raised by the human at the source.
The result follows from basic circuit analysis. A real amplifier multiplies its input by a gain factor and adds noise of its own. Referred to the input, the output is gain × (signal + noise_in + noise_amp). Because the gain multiplies signal and noise alike, it cancels in the ratio: the output signal-to-noise ratio is signal / (noise_in + noise_amp), at best equal to the input ratio of signal / noise_in, and in practice worse because of the amplifier's own contributed noise.
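A minimal numerical sketch makes the cancellation visible, assuming additive noise powers and amplifier noise referred to the input; the specific values and variable names are illustrative only.

def snr(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio as a plain power ratio."""
    return signal_power / noise_power

signal_in = 1.0   # signal power at the amplifier input (arbitrary units)
noise_in = 0.1    # noise power already present at the input
noise_amp = 0.02  # amplifier's own noise, referred to the input
gain = 100.0      # power gain of the amplifier

# The gain multiplies signal and noise alike, so it cancels in the ratio.
snr_input = snr(signal_in, noise_in)
snr_output = snr(gain * signal_in, gain * (noise_in + noise_amp))

print(f"input SNR:  {snr_input:.2f}")   # 10.00
print(f"output SNR: {snr_output:.2f}")  # 8.33, lower and never higher

Whatever gain is chosen, the output ratio stays at or below the input ratio; only reducing noise_in at the source raises it.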
Applied to large language models, the theorem specifies what the tool can and cannot do. The model can increase the reach, speed, and scale of a human's thinking. It cannot improve the clarity of that thinking. A user with a vague intention receives fluent elaboration of the vagueness. A user with a clear intention receives powerful amplification of the clarity.
The theorem gives mathematical grounding to the question of whether one is worth amplifying. If amplification does not filter, then the moral responsibility for the quality of amplified output lies entirely at the source: with the human whose signal is being multiplied, not with the machine doing the multiplication.
The phenomenon of amplifier saturation — where the amplifier's own noise dominates the output — has a direct analog in AI: the moment when the model's stylistic tendencies overwhelm the user's voice, producing text that sounds like Claude rather than like its purported author. Segal describes exactly this experience in The Orange Pill, and the remedy is the same as in circuit design: reduce the gain and strengthen the input.
The result is not a single theorem in Shannon's 1948 paper but a consequence of the broader information-theoretic framework: the noise figure of a cascaded system, first formalized by Harald Friis at Bell Labs in 1944, is always at least one, which makes it impossible for amplification alone to improve SNR. The application to human-AI collaboration is a translation of the same mathematics into a new medium.
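To make the cascade result concrete, here is a short sketch of Friis's formula for the total noise figure of chained stages; the stage values below are invented for illustration, not drawn from the text.

def cascaded_noise_figure(stages):
    """stages: list of (noise_figure, gain) pairs as linear power ratios.

    Friis's formula: F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    F_total is always >= 1, so a cascade can only degrade SNR, never improve it.
    """
    f_total = 0.0
    cumulative_gain = 1.0
    for i, (f, g) in enumerate(stages):
        if i == 0:
            f_total = f
        else:
            f_total += (f - 1.0) / cumulative_gain
        cumulative_gain *= g
    return f_total

# Example: two stages; the first stage's noise figure dominates the total.
stages = [(1.5, 20.0), (4.0, 10.0)]   # (F, G) per stage, linear units
print(cascaded_noise_figure(stages))  # 1.65 -> output SNR = input SNR / 1.65

The formula also shows why the earliest stage matters most: later stages contribute noise divided by all the gain that precedes them, which is the circuit-design version of fixing quality at the source.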
Gain multiplies everything. The amplifier cannot distinguish signal from noise because the distinction lies in the sender's intention, not in the waveform.
SNR is bounded by the input. No amplifier can raise the signal-to-noise ratio of what it receives; the best it can do is preserve it.
Responsibility stays at the source. If the amplifier is morally neutral, the moral valence of the output is fixed at the input, by the human.
Saturation is a failure mode. When the amplifier's own noise dominates, the output stops being amplification and becomes substitution.
Democratization widens distribution. Universal access to the amplifier amplifies the existing distribution of signal quality — making the best better and the loud louder.
Whether the theorem holds strictly for AI systems is contested. Some researchers argue that language models, by exposing users to high-quality patterns from training data, can improve user inputs through iterative interaction — a violation of the strict amplifier analogy. Others respond that this is not signal-to-noise improvement but substitution: the user's voice is replaced rather than clarified.