The Two Azas names the structural duality that defines Raskin's public position: he is simultaneously one of the most vocal critics of the AI industry's current trajectory and one of its active practitioners, building transformer-based systems to decode animal vocalizations at the Earth Species Project. The duality is not hypocrisy. It is the most honest position available to anyone who understands both what AI can do and what AI is currently doing. The critic and the builder are the same person, holding the same framework, working from the same understanding that technology's effects are determined not by capabilities but by design, and that design is determined by the incentive structures within which designers operate.
There is a parallel reading of the Two Azas that begins not from intellectual coherence but from positional security. The ability to simultaneously critique and build is not available to everyone — it requires institutional access, credibility capital, and the economic cushion to work on speculative projects that may never generate revenue. The Earth Species Project, however intellectually compelling, operates in a space insulated from the immediate consequences of AI deployment that affect content moderators, gig workers, and communities subject to algorithmic governance without appeal.
The framing of "super human versus extra human" elegantly sidesteps the substrate question: both require the same computational infrastructure, the same energy draw, the same rare earth extraction, the same geopolitical dependencies. Decoding whale communication and optimizing engagement both depend on data centers in Virginia, cobalt from the Congo, and semiconductor fabs that consume more water than small cities. The distinction between noble and extractive uses becomes less legible when traced through the material chain that makes either possible. The hedged position — critique plus participation — may represent intellectual honesty, but it also represents a choice available primarily to those whose survival does not depend on taking an unambiguous stand.
Critics of Raskin have sometimes tried to discredit his position by charging that his warnings about AI are inconsistent with his own use of AI. The charge misses the structural argument. Raskin's position is not that AI should be stopped — he has explicitly rejected that framing — but that AI should be redirected. The same capabilities currently optimized for engagement and extraction could be optimized for flourishing, for understanding, for the expansion of human and nonhuman capacity in directions the current incentive structure does not reward.
The distinction Raskin articulates between super human and extra human applications captures the duality with precision. Super human applications accelerate what humans already do, making them more productive, more efficient, more capable along existing dimensions. Extra human applications expand what it means to be human, discovering capacities, connections, and forms of understanding the pre-AI world did not make visible. The Earth Species Project is an extra human application. The Orange Pill's celebration of productivity acceleration is super human.
The duality is his greatest intellectual strength and his greatest communicative weakness. It does not fit on a slide at a TED conference. It does not compress into a tweet. It does not generate the clarity of outrage that drives viral engagement. His most powerful public moments — "I invented infinite scroll," the testimony against Meta, the New York Times op-ed — have been moments of stark, unqualified warning. His most important intellectual contribution — that the same technology can serve radically different purposes depending on the incentive structure — is harder to communicate and less likely to trend.
The warning and the demonstration are not separate projects. They are the same project, addressed to the same question: what would AI look like if it were designed to expand human understanding rather than to capture human attention? Raskin is attempting to answer the question by showing what the alternative looks like in practice, while simultaneously naming what the current deployment pattern costs.
The phrase emerges from the structure of Raskin's public life: the critical work of the Center for Humane Technology and the constructive work of the Earth Species Project, running in parallel since 2017–2018. The intellectual framework that unites them — that incentive structure determines outcome — is articulated most fully in his 2023 collaborations with Tristan Harris.
Not contradiction. The critic and the builder are the same person, holding the same framework.
Super human versus extra human. The distinction between acceleration along existing dimensions and expansion into new ones.
Dual-use technology. The same mathematical infrastructure serves extraction and understanding; design and incentive determine which prevails.
Communicative weakness. The nuanced position is structurally disadvantaged in a media environment that rewards unqualified clarity.
On the question of intellectual coherence, Edo's framing is fully right (100%). There is no contradiction between building transformers for animal communication and critiquing their deployment for engagement maximization — the technology is genuinely dual-use, and design intent matters. The contrarian substrate argument deserves weight (70%) when asking about collective complicity, but less (20%) when evaluating individual positions: Raskin did not invent the extractive economy, and refusing to build would not redirect it.
On communicative effectiveness, both views hold simultaneously but answer different questions. Raskin's nuanced position is structurally disadvantaged in viral media (Edo 90% right) — the incentive landscape rewards clear villains and heroes, not dual practitioners. But the contrarian reading identifies something real (60%): the hedged position can also function as cover, allowing participation in the infrastructure while claiming moral distance from its primary uses. The question is whether Raskin's work tilts the possibility space or merely provides rhetorical shelter.
The synthetic frame this topic needs is not "coherence versus complicity" but pathway dependency. AI development proceeds regardless of individual participation; the meaningful question is whether positions like Raskin's — simultaneously critical and constructive — create different branching points than pure opposition or pure building would. The Earth Species Project may matter less as a demonstration of extra-human possibility than as live proof that the infrastructure can be purposed differently, visible to those currently directing it toward extraction. Whether that proof shifts anything depends on mechanisms neither view fully addresses.