The AI Dilemma is the presentation Tristan Harris and Aza Raskin first delivered in March 2023 at an invitation-only summit and subsequently released in widely viewed public versions, arguing that large language models represented not a departure from the social-media pattern but its intensification. The presentation's central claims — that social media was humanity's first contact with AI, that the engagement-optimization incentive structure that produced social media's harms is now operating on categorically more powerful tools, and that the window for establishing governance is narrow — became the Center for Humane Technology's organizing framework for the AI era.
The presentation crystallized a shift in the humane-technology discourse from social-media criticism to AI governance advocacy. Its specific technical claims — about large language models' capabilities, about the speed of capability gains, about the risks of model deployment — were contested. Some were later shown to be imprecise. But the structural argument survived the specific critiques: the same incentive structure that produced social-media harms would produce AI harms, possibly at greater scale and with faster onset, if the structure were not changed.
The presentation's most widely circulated framing — "you can have the blue pill or the red pill, and we're out of blue pills," repeated in the New York Times op-ed Harris and Raskin co-authored with Yuval Noah Harari — positioned the choice as binary. That framing drew criticism for oversimplification and for overstating AI's current capabilities. It was also effective: it moved AI governance from a specialist concern into mainstream political discussion, producing the conditions in which subsequent regulatory initiatives (the EU AI Act, the UK AI Safety Institute, US executive orders) could gain political support.
The presentation's application to the Orange Pill celebration of AI productivity is direct. The productivity gains Segal documents in Trivandrum, the thirty-day Napster Station development, the twenty-fold multiplier — these are the exact outputs the engagement-optimized design produces at scale, and the exact outputs Raskin's framework identifies as masking the underlying cost. The presentation argues, in effect, that the productivity celebration cannot be evaluated independently of the cost accounting the celebration omits.
Critics including Noah Giansiracusa, Emily Bender, and Timnit Gebru have argued that the presentation's existential framing distracts from immediate, documentable harms — labor exploitation, monopolistic consolidation, surveillance, bias. Raskin and Harris's response has been to argue that immediate and existential harms are not competing categories but the same structural problem at different scales, both flowing from the same incentive structure.
The presentation was first delivered in March 2023 to a gathering of approximately 100 technology executives, researchers, and policymakers. It was subsequently made publicly available and has been viewed millions of times. The New York Times op-ed "You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills" (March 2023) extended the argument to a broader audience.
First contact framing. Social media was humanity's first contact with AI, making its lessons directly applicable to the current moment.
Continuity thesis. The same incentive structure that produced social-media harms is now operating on AI.
Narrow window. Governance must be established before deployment patterns lock in effects that subsequent regulation cannot reverse.
Contested specifics, resilient structure. Some technical claims have been challenged; the structural argument has remained influential.