Every communication channel has an operating point — a specific combination of transmission rate, error probability, and delay at which it is being used. Shannon's capacity theorem defines the boundary; engineering decides where inside the boundary to operate. The human-AI channel has two capacity limits: the machine's capacity to produce and the human's capacity to absorb, verify, and integrate. The minimum of the two determines the maximum rate of reliable communication — and the minimum, in almost every case, is the human's. When the operating point sits below the human's processing capacity, the three conditions of flow are satisfied: matched bandwidth, low latency, trustable signal. When it exceeds that capacity, the channel enters buffer overflow: output accumulates unverified, verification thresholds drop to accommodate throughput, and the system enters a positive feedback loop that produces the behavioral signature of productive addiction.
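The two-limit picture can be put in a few lines of code. This is an illustrative sketch, not an established formula: the rates are hypothetical "verified units of work per hour," and the function names (`effective_capacity`, `operating_regime`) are invented for the example.

```python
def effective_capacity(machine_rate: float, human_rate: float) -> float:
    """Reliable throughput is bounded by the slower end of the channel."""
    return min(machine_rate, human_rate)

def operating_regime(prompt_rate: float, human_rate: float) -> str:
    """Classify the operating point relative to the binding constraint."""
    return "flow" if prompt_rate <= human_rate else "buffer overflow"

# The machine produces far faster than the human can absorb, so the
# human's rate is almost always the minimum of the two.
assert effective_capacity(machine_rate=200.0, human_rate=8.0) == 8.0
assert operating_regime(prompt_rate=6.0, human_rate=8.0) == "flow"
assert operating_regime(prompt_rate=12.0, human_rate=8.0) == "buffer overflow"
```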
The concept generalizes Csikszentmihalyi's flow conditions into information-theoretic terms. Flow requires that challenge match skill — in Shannon's framework, that channel throughput match receiver processing capacity. The classical flow conditions (clear goals, immediate feedback, sense of control) map onto channel properties (low latency, high signal-to-noise ratio, user-controlled rate).
Traditional software development violated all three conditions. Information arrived in batches across multi-week latencies; the cognitive context dissipated between exchanges; verification consumed substantial cognitive resources at every handoff. The AI channel, at its best, satisfies all three — which is why the phenomenology of engagement with it so closely resembles the phenomenology of flow.
But the same channel supports a pathological operating regime. When the user prompts at a rate that exceeds her processing capacity, output accumulates faster than it can be evaluated. The system enters buffer overflow, and the behavioral signature — inability to stop, colonization of every pause with more interaction, compulsive return to the tool — is the surface manifestation of an information-theoretic constraint violation.
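The feedback loop can be made concrete with a toy simulation. All numbers below are illustrative assumptions, not measurements: output arrives at a fixed rate, verification capacity is fixed, and the fraction of output that actually gets verified falls as the unverified backlog grows, which is the loop's positive-feedback term.

```python
def simulate_backlog(steps: int, output_rate: float,
                     verify_capacity: float) -> list[float]:
    """Track the unverified backlog over time under a toy feedback model."""
    backlog = 0.0
    history = []
    for _ in range(steps):
        backlog += output_rate
        # Positive feedback: the larger the backlog, the less scrutiny
        # each item receives, so effective verification throughput drops.
        scrutiny = 1.0 / (1.0 + 0.1 * backlog)
        verified = min(backlog, verify_capacity * scrutiny)
        backlog -= verified
        history.append(backlog)
    return history

below = simulate_backlog(20, output_rate=5.0, verify_capacity=8.0)
above = simulate_backlog(20, output_rate=12.0, verify_capacity=8.0)
# Below capacity the backlog clears every step; above capacity it
# diverges, because falling scrutiny feeds the very growth that caused it.
```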
The practical consequence is that the optimal use of AI tools requires self-knowledge as a technical skill. The tool does not regulate itself. It responds at whatever rate the user prompts. The user must function as the rate limiter, and must possess the awareness to detect when the operating point has shifted from directed creativity into reactive consumption.
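One way to picture the discipline is as an explicit rate limiter the user imposes on herself. The class below is a hypothetical sketch of that policy, not a feature of any tool: no new prompt while the count of unverified outputs sits at a self-chosen ceiling.

```python
class HumanRateLimiter:
    """Toy model of the user as rate limiter: cap unverified output."""

    def __init__(self, max_unverified: int):
        self.max_unverified = max_unverified
        self.unverified = 0

    def may_prompt(self) -> bool:
        """Allow a new prompt only while the verification debt is bounded."""
        return self.unverified < self.max_unverified

    def on_output(self) -> None:
        """Each response adds one item of verification debt."""
        self.unverified += 1

    def on_verified(self) -> None:
        """Evaluating and integrating an output retires one item of debt."""
        self.unverified = max(0, self.unverified - 1)
```

Used this way, the operating point cannot drift into overflow: the limiter blocks the next prompt until verification catches up, holding throughput at or below the human's capacity by construction.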
The framework synthesizes Shannon's channel capacity theorem with Csikszentmihalyi's flow research, extending both into the specific case of human-AI collaboration where the receiver is variable-capacity rather than fixed. The synthesis is recent — a necessary response to the observation that the same tool produces flow for some users and compulsion for others without any change in the tool itself.
Two capacity limits. The human-AI channel is bounded by both the machine's production capacity and the human's absorption capacity.
The human is usually the minimum. The binding constraint is almost always the human's processing, verification, and integration capacity.
Flow below capacity. When throughput matches processing capacity, the channel produces directed, verified, integrated work.
Overflow above capacity. When throughput exceeds processing capacity, the channel produces reactive, unverified, accumulative output that is superficially indistinguishable from flow.
The user must be the rate limiter. The tool does not regulate; the discipline of holding the operating point optimal must come from the human.