Tristan Harris is an American technology ethicist whose 2013 internal Google presentation, A Call to Minimize Distraction & Respect Users' Attention, became the founding document of the humane technology movement. After a career at Google that culminated in the role of design ethicist, Harris co-founded the Center for Humane Technology with Aza Raskin in 2018 and has become one of the most visible public critics of the technology industry's engagement-optimization business model. His partnership with Raskin has produced the analytical framework (extraction-oriented design, the race to the bottom of the brain stem, the AI Dilemma) that grounds the critique developed in this volume.
Harris's trajectory parallels Raskin's: a technology insider whose exposure to the industry's internal decision-making produced a moral crisis that led to public advocacy. His Google presentation articulated, a decade before the AI moment, the specific mechanisms by which engagement-optimized design systematically degrades user well-being. The presentation circulated widely inside the company, prompted a period of earnest discussion, and ultimately changed little, a trajectory Harris has cited as evidence that internal advocacy is insufficient without external pressure.
The partnership with Raskin reflects a division of intellectual labor: Harris tends to speak in the broader register of civilizational risk, while Raskin focuses on the specific mechanisms of engagement architecture and the design alternatives that would address them. Together they have articulated the continuity thesis, the claim that social media and AI represent the same structural problem at different scales, which organizes the Center for Humane Technology's analytical work.
In 2023, Harris and Raskin's AI Dilemma presentation at the Summit on AI Safety argued that large language models represented a categorical escalation of the risks social media had introduced. The presentation was widely viewed and controversial: critics charged that it overstated AI's current capabilities, while supporters held that it correctly identified the structural dynamics that would determine AI's trajectory under existing incentive structures.
Harris began his career at Stanford's Persuasive Technology Lab under B.J. Fogg, where he studied the psychology of habit formation in digital products. He joined Google in 2011 through its acquisition of Apture, the startup he had co-founded, and held positions ranging from product manager to design ethicist before leaving in 2015 to found the organization that would become the Center for Humane Technology. Three concepts recur throughout his work.
Time Well Spent. Harris's early framework, which asks whether time spent on a platform is time the user would choose to spend again; the diagnostic reveals how far engagement metrics diverge from user satisfaction.
Design ethics. The argument that technology designers bear ethical responsibility for effects users can neither see nor consent to, because those effects operate below conscious awareness.
Civilizational framing. Harris's tendency to frame AI risks at civilizational rather than individual scale: the cumulative effect of engagement optimization on billions of minds, he argues, constitutes a structural threat to democratic cognition.