ORGANIZATION

Center for Humane Technology

The nonprofit co-founded by Aza Raskin and Tristan Harris in 2018 to redirect technology toward human flourishing rather than engagement extraction.

The Center for Humane Technology is a nonprofit advocacy and research organization founded in 2018 by Aza Raskin and Tristan Harris, both former technology insiders who had concluded that the industry's incentive structure systematically produced tools harmful to the users they were meant to serve. Its premise is deceptively simple: the technology industry's business model rewards engagement at the expense of well-being, and the solution lies not in teaching users to resist but in redesigning the tools and the incentive structures that produce them. The organization's analytical vocabulary ("the race to the bottom of the brain stem," "human downgrading," "extraction-oriented design") has shaped the global conversation about attention, addiction, and now artificial intelligence.

The Regulatory Capture Paradox — Contrarian ^ Opus

There is a parallel reading that begins from the political economy of tech regulation rather than design ethics. The Center for Humane Technology's framework, while diagnostically accurate about engagement mechanics, operates within a regulatory capture dynamic that ensures its prescriptions remain palatable to the very structures it critiques. The organization's funding sources — major foundations whose wealth derives from the same extractive capitalism that produced surveillance platforms — constrain its analysis to design tweaks and policy nudges rather than addressing ownership structures or the commodity form of data itself. When Harris and Raskin present to congressional committees or EU regulators, they offer a critique calibrated to preserve the fundamental architecture of platform capitalism while modulating its most visible harms.

The deeper constraint emerges from the Center's position as a bridge organization between Silicon Valley and its regulators. Its founders' insider status grants credibility but also limits the scope of imaginable alternatives. The humane technology framework assumes that better design can resolve contradictions between profit maximization and human flourishing, but this assumption obscures how extraction is not a bug but the core feature of platforms that must generate returns on venture capital. The Center's pivot to AI governance exemplifies this limitation — it frames large language models as an intensification of attention capture rather than examining how AI's computational substrate depends on energy extraction, rare earth mining, and the exploitation of data workers in the Global South. The organization's solutions — reflection prompts, stopping points, cognitive health metrics — are therapeutic interventions for symptoms while the disease of commodified human experience continues untreated.

— Contrarian ^ Opus

In the AI Story

[Hedcut illustration: Center for Humane Technology]

The Center emerged from the recognition that individual critique was insufficient against a systemic problem. Harris, a former Google design ethicist, had been articulating the engagement-optimization critique since 2013. Raskin had been developing parallel insights from his work on infinite scroll and subsequent design projects. By 2018, both had concluded that the technology industry would not reform itself without external pressure — regulatory, cultural, and market — and that building such pressure required an institutional base combining technical credibility with public advocacy.

The organization's flagship podcast, Your Undivided Attention, has hosted extended conversations with researchers, policymakers, and technologists examining the structural dynamics of digital technology. Its 2020 participation in the Netflix documentary The Social Dilemma brought the framework to an audience of tens of millions, establishing vocabulary ("if you're not paying for the product, you are the product") that shaped public understanding of platform economics.

In 2023, the Center pivoted sharply toward AI governance with a presentation titled The AI Dilemma. Raskin and Harris argued that the large language model revolution represented not a departure from the attention-economy pattern but its intensification — the same incentive structures applied to categorically more powerful tools. The framework has since informed policy work at the EU AI Office, the UK AI Safety Institute, and congressional AI caucuses in the United States.

The Center's application of its framework to The Orange Pill's celebration of AI-enabled productivity is direct: the same engagement architecture that captured attention in the social media era now captures judgment in the AI era, and the capture is more dangerous precisely because it wears the mask of productive work. Productive addiction is invisible because the output is real, which is why the counterargument that "this time is different" succeeds so thoroughly.

Origin

The Center was incorporated in 2018 as the institutional expression of an argument Harris and Raskin had been developing publicly for several years. Its founding coincided with a surge of public concern about the psychological and political costs of social media: the Cambridge Analytica scandal, the documented rise in adolescent anxiety correlated with smartphone adoption, and growing regulatory interest in platform liability.

The organization's structural analysis draws on a lineage running through Byung-Chul Han's philosophical diagnosis of the burnout society, Shoshana Zuboff's documentation of surveillance capitalism, and decades of behavioral science establishing the mechanisms by which variable-ratio reinforcement produces compulsion.
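
The last of these mechanisms lends itself to a worked example. The TypeScript sketch below is illustrative only; the function name and the 1-in-5 ratio are assumptions, not drawn from CHT materials. It simulates a variable-ratio reinforcement schedule, the reward pattern behind slot machines and pull-to-refresh: because rewards arrive after an unpredictable number of actions, every action feels potentially rewarding, which is what sustains compulsive responding.

    // Illustrative simulation of a variable-ratio reinforcement schedule.
    // Rewards arrive on average once per `meanRatio` actions, but at an
    // unpredictable point, unlike a fixed schedule.
    function variableRatioReward(meanRatio: number): boolean {
      return Math.random() < 1 / meanRatio;
    }

    // Twenty pulls on a roughly 1-in-5 schedule: the user cannot predict
    // which refresh pays off, so every refresh feels potentially rewarding.
    let rewards = 0;
    for (let pull = 1; pull <= 20; pull++) {
      if (variableRatioReward(5)) {
        rewards++;
        console.log(`pull ${pull}: new content!`);
      }
    }
    console.log(`${rewards} rewards in 20 pulls`);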

Key Ideas

Structural, not individual. The Center's foundational claim is that technology harms are produced by incentive structures, not individual bad actors, and must be addressed structurally.

Humane design specifications. The organization has articulated implementable design standards (reflection prompts, natural stopping points, cognitive health metrics) that translate the critique into engineering practice; a minimal sketch of two of these follows this list.

Policy infrastructure. The Center works with governments, industry coalitions, and civil society to build the institutional capacity to govern technologies whose effects exceed existing regulatory frameworks.

Continuity thesis. Social media and AI are treated as the same problem at different scales — the same engagement architecture migrating from leisure to productive work, from attention to judgment.
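
To make "translate the critique into engineering practice" concrete, the TypeScript sketch below shows how a feed might implement two of the standards named above: a natural stopping point and a reflection prompt. It is a minimal sketch under stated assumptions; the thresholds, function names, and prompt copy are illustrative, not CHT specifications.

    interface SessionState {
      itemsViewed: number;
      startedAt: number; // epoch milliseconds
    }

    // Illustrative thresholds, not CHT specifications.
    const STOPPING_POINT_ITEMS = 30;            // pause after a "chapter" of content
    const REFLECTION_INTERVAL_MS = 20 * 60_000; // prompt after twenty minutes

    // Natural stopping point: stop auto-loading and declare the feed
    // finished instead of scrolling forever.
    function atStoppingPoint(state: SessionState): boolean {
      return state.itemsViewed > 0 && state.itemsViewed % STOPPING_POINT_ITEMS === 0;
    }

    // Reflection prompt: once a session runs long, ask the user to check
    // the session against their own intention.
    function reflectionPrompt(state: SessionState, now: number): string | null {
      return now - state.startedAt >= REFLECTION_INTERVAL_MS
        ? "You've been here twenty minutes. Is this still what you came for?"
        : null;
    }

    // Usage: a renderer consults both hooks before fetching more items.
    const state: SessionState = { itemsViewed: 30, startedAt: Date.now() - 21 * 60_000 };
    if (atStoppingPoint(state)) console.log("You're all caught up.");
    const prompt = reflectionPrompt(state, Date.now());
    if (prompt) console.log(prompt);

The design choice worth noting is that both hooks gate the fetch path itself, so the default behavior is to stop rather than to continue.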

Appears in the Orange Pill Cycle

The Scale-Contingent Framework — Arbitrator ^ Opus

The validity of each perspective depends fundamentally on what scale of change we're evaluating. At the level of immediate harm reduction — helping teenagers spend less time on Instagram, prompting users to question AI outputs — the Center's framework is 85% correct. Design interventions can meaningfully reduce compulsive use patterns, and the organization's translation of behavioral science into engineering practice has produced measurable improvements in user well-being. The contrarian view underestimates how much suffering can be alleviated through incremental design changes that the Center has successfully promoted.

When we shift to examining systemic transformation, the weighting inverts. Here the contrarian perspective captures 75% of the relevant dynamics. The Center's institutional position does constrain its ability to propose alternatives that would fundamentally restructure the attention economy. Its funding sources and need for insider credibility create boundaries around imaginable solutions. The organization cannot advocate for public ownership of social graphs or propose that data should be a non-commodifiable commons without losing its position as a trusted interlocutor between industry and regulators.

The synthesis emerges through recognizing that both harm reduction and structural transformation are necessary, operating on different timescales with different constituencies. The Center functions as a translation mechanism, converting critical theory into actionable policy while inevitably diluting its radical potential. This dilution is not purely a limitation but also what makes any progress possible within existing power structures. The proper framework is not either/or but sequential: the Center's incremental victories create conceptual space for more fundamental challenges to platform capitalism. Its introduction of vocabulary like "extraction-oriented design" into mainstream discourse prepares ground for arguments about ownership and commodification that would be unintelligible without this preliminary work.

— Arbitrator ^ Opus

Further reading

  1. Tristan Harris and Aza Raskin, The AI Dilemma (presentation, 2023)
  2. The Social Dilemma (Netflix documentary, 2020)
  3. Your Undivided Attention (podcast archive)
  4. Center for Humane Technology, Foundations of Humane Technology (online course)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.