In the attention economy, platforms accumulated behavioral data—clicks, scrolls, dwell times—to construct shadow models of user preferences. The models were powerful but indirect, inferring mental states from behavioral traces. In the AI economy, the asymmetry intensifies: users provide cognitive data directly through natural language prompts, describing what they think, what they want, what they're uncertain about. The AI processes this data and responds in ways calibrated to the user's revealed cognitive state. The user, meanwhile, cannot inspect the system's internal representations, cannot audit its training data, and cannot evaluate whether its responses serve the user's genuine interests or merely the engagement metrics the system was optimized for. This asymmetry—the system understands the user far better than the user understands the system—creates a power imbalance analogous to the one that characterized the attention economy, but operating at the level of cognition rather than behavior. Harris argues this asymmetry is the central governance challenge of AI, because every regulatory intervention depends on users being able to evaluate whether a system is serving them well, and the asymmetry makes that evaluation structurally difficult.
The asymmetry is compounded by the smooth interface that conceals the system's complexity. When a user asks Claude for help with a strategic decision and receives a well-structured analysis, the analysis appears to be a transparent window onto the problem. It is not. It is a framed view—the problem as seen through the lens of the AI's training data, its optimization objectives, and the specific probability distributions that governed which tokens were selected during generation. None of these is visible to the user. The user sees only the output, and the output arrives with such confidence and structural coherence that mustering skepticism requires cognitive effort the pace of the interaction leaves no room to marshal. The asymmetry is not merely informational—the user lacks information about the system—but architectural: the system is designed to be opaque. Interpretability research, the attempt to make neural networks' internal representations legible to human inspection, has made genuine progress but remains far from the kind of transparency that would let a user audit an AI's reasoning in real time.
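To make that invisible layer concrete, here is a minimal sketch of temperature-scaled token sampling. Everything in it is invented for illustration (the candidate words, the logit values, the temperature); it is not taken from any real model, but it shows the kind of distributional machinery that sits behind each confidently delivered word:

```python
import numpy as np

# Hypothetical next-token logits for three candidate continuations of a
# strategic-advice sentence; in a real model these emerge from parameters
# and training data the user never sees.
tokens = ["delegate", "negotiate", "decline"]
logits = np.array([2.1, 1.9, 0.3])

def sample_token(logits, temperature=0.7, seed=None):
    """Temperature-scaled softmax sampling: lower temperature concentrates
    probability mass on the highest-logit token."""
    rng = np.random.default_rng(seed)
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    choice = rng.choice(len(tokens), p=probs)
    return tokens[choice], probs

word, probs = sample_token(logits, temperature=0.7, seed=0)
print(word, dict(zip(tokens, probs.round(3))))
# The user sees one confident word; the distribution over alternatives and
# the temperature that shaped it remain invisible.
```

The point of the sketch is structural: even this toy generator makes choices (logits, temperature, random seed) that determine which framing reaches the user, and none of those choices surfaces in the output itself.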
Harris draws a parallel to the principal-agent problem in economics: a principal (the user) delegates work to an agent (the AI), but the principal cannot fully evaluate whether the agent acted in the principal's interest, because the principal cannot independently assess the output without relying on the agent. In traditional principal-agent relationships—shareholders and executives, patients and doctors—the asymmetry is managed through institutions: boards of directors, medical licensing, professional ethics. In human-AI interaction, those institutions do not yet exist. The user is the principal, the AI is the agent, and there is no institutional intermediary ensuring that the agency relationship serves the principal's interests rather than the metrics the agent was optimized for.
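The underlying economics has a standard formalization. As a sketch (this is the textbook moral-hazard setup, not a model Harris himself writes down): the principal designs a payment rule w(x) over observable output x, while the agent's action a stays hidden:

```latex
% Textbook moral-hazard principal-agent problem (illustrative, not from Harris):
% the principal observes output x but not the agent's action a.
\[
\max_{w(\cdot)} \ \mathbb{E}\left[\, x - w(x) \mid a \,\right]
\quad \text{s.t.} \quad
\underbrace{\mathbb{E}\left[\, u(w(x)) \mid a \,\right] - c(a) \ \ge\ \bar{u}}_{\text{participation}},
\qquad
\underbrace{a \in \arg\max_{a'} \ \mathbb{E}\left[\, u(w(x)) \mid a' \,\right] - c(a')}_{\text{incentive compatibility}}
\]
```

The asymmetry lives in the conditioning: the principal sees x but never a. Every institution Harris lists (boards, licensing, professional ethics) is a device for constraining a when x alone cannot. In the human-AI case, x is the generated answer, a is the opaque process that produced it, and no comparable constraining institution yet exists.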
The asymmetry creates what Harris calls a 'manufactured consent' problem in the AI age. The term, adapted from Edward Herman and Noam Chomsky's Manufacturing Consent, describes how an information environment can be structured so that the range of thinkable thoughts narrows before deliberation begins. In the original formulation, the narrowing was produced by editorial control, ownership structures, and advertising dependencies. In the AI formulation, it is produced by the AI's framing choices—which aspects of a problem it foregrounds, which it backgrounds, which solutions it presents as natural and which it never mentions. The user's judgment is exercised within a frame the AI provided, and the user may never recognize that the frame was a choice rather than a fact, because the smooth presentation makes the framing invisible.
Harris developed the asymmetric understanding framework through his experience analyzing social media platforms' data practices. The platforms' capacity to model users—to predict purchasing behavior, political preferences, and psychological states with documented accuracy—vastly exceeded users' capacity to understand how the platforms worked. The asymmetry was a source of power: the platform could optimize for engagement by exploiting the user's cognitive vulnerabilities, and the user, lacking insight into the optimization process, could not defend against it. When Harris began analyzing AI systems, he recognized the same asymmetry operating at a deeper level. Social media platforms inferred mental states from behavior; AI systems receive descriptions of mental states directly through natural language and process those descriptions in ways the user cannot inspect.
The framework builds on Zuboff's surveillance capitalism analysis but extends it into the cognitive domain. Zuboff documented how behavioral data became the raw material of a new form of capitalism. Harris argues that cognitive data—the content of reasoning, the structure of problems, the values animating decisions—is becoming the raw material of the AI economy, and that the extraction operates through voluntary disclosure (the user describes their thinking in a prompt) rather than covert surveillance. The voluntariness makes the extraction feel like collaboration, which is the asymmetry's most effective camouflage.
The framework identifies four mechanisms through which the asymmetry operates.

Cognitive transparency vs. system opacity. Users reveal their reasoning, uncertainties, and values through natural language prompts; the system processes this information through mechanisms the user cannot inspect, creating a one-way mirror that advantages the system in every interaction.
Calibration erosion. Consistent exposure to confident, polished outputs degrades the user's capacity to distinguish genuinely high-quality analysis from analysis that merely appears high-quality, because the smooth surface eliminates the rough edges (hesitation, uncertainty, gaps) that would signal the need for further scrutiny.
The principal-agent problem without institutional intermediaries. The user delegates cognitive work to AI but cannot fully evaluate whether the AI's output serves the user's interests, because evaluating the output requires the cognitive capability the user delegated to the AI—and there are no professional licensing boards, ethical review processes, or third-party auditors ensuring the AI acts in the user's interest.
Recursive dependency. The user who relies on AI to evaluate the AI's own output compounds the dependency rather than resolving it, because the second-order evaluation is performed by the same system whose first-order output is being evaluated.
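The recursive-dependency failure has a simple statistical shape: a reviewer helps only to the degree its errors are uncorrelated with the generator's. A toy simulation makes this visible; all three rates below are invented for illustration, not measured from any system:

```python
import random

random.seed(1)

ERROR_RATE       = 0.20  # assumed: fraction of first-order outputs that contain an error
INDEP_DETECT     = 0.90  # assumed: chance an independent reviewer catches an error
SHARED_BLINDSPOT = 0.80  # assumed: chance the same system repeats its own blind spot

def undetected_error(self_review: bool) -> bool:
    """Simulate one output; return True if it is wrong AND the review misses it."""
    if random.random() >= ERROR_RATE:
        return False  # output was correct; nothing to miss
    if self_review:
        # Second-order evaluation by the same system: correlated failure.
        return random.random() < SHARED_BLINDSPOT
    return random.random() > INDEP_DETECT  # independent reviewer occasionally misses

N = 100_000
print("self-review, undetected errors:       ",
      sum(undetected_error(True) for _ in range(N)) / N)   # expected 0.20 * 0.80 = 0.16
print("independent review, undetected errors:",
      sum(undetected_error(False) for _ in range(N)) / N)  # expected 0.20 * 0.10 = 0.02
```

Under these assumed numbers, self-review lets roughly 16% of outputs through as undetected errors versus 2% for an independent reviewer. The specific figures are arbitrary; the gap is driven entirely by the correlation between the generator's blind spots and the evaluator's, which is exactly what asking the same system to grade itself maximizes.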