The Rise of Antihumanism is Crawford's 2023 lecture articulating the tacit ideology that legitimizes the progressive replacement of human judgment by automated systems. The lecture identifies four premises operating beneath the surface of much contemporary technology discourse: that human beings are stupid, obsolete, fragile, and hateful. Each premise captures something real about human limitation. Taken together and deployed as a justification for replacing human agency with algorithmic governance, they constitute what Crawford calls "apologetics for a further concentration of wealth and power." The lecture's argument is that antihumanism operates most effectively when it remains tacit, shaping technology development without being explicitly defended or even recognized as an ideology.
The lecture's canonical illustration is the Google self-driving car incident of 2009, in which the car's algorithms froze at a four-way intersection and the Google engineer's response was that human beings need to be "less idiotic." The response is revealing because it positions human unpredictability as the problem to be solved, rather than as an essential feature of the context in which the automated system must operate. The engineer's framing treats the messiness of human interaction as a deficiency — as something to be engineered out — rather than as a domain of judgment that automation must accommodate.
Crawford's analysis of the four premises is careful and refuses the easy response of defending humans against every criticism. Human beings are sometimes stupid, in specific ways and in specific circumstances. Human beings are vulnerable and mortal. Human judgment is subject to bias, fatigue, and error. The premises are not wholesale lies; they capture real features of human limitation. What makes them ideological in their current deployment is the conclusion drawn from them: that the solution to human limitation is to replace human judgment with systems designed by other humans, operating under the same limitations but insulated from accountability.
The lecture connects antihumanism to Crawford's broader political-economic critique. The ideology serves specific interests: the concentration of capital in companies that own automated systems, the reduction of labor costs, the legitimation of algorithmic governance that operates without democratic accountability. The ideology does not need to be consciously defended to function; it needs only to be absorbed unreflectively, to become the common sense within which specific technology choices appear obvious and necessary.
Crawford's counter-position is not romantic humanism. It is an insistence on specificity: on careful examination of what particular human capacities are actually at stake in particular technological transitions, and on refusal of the wholesale substitution that antihumanism legitimizes. The mechanic's judgment is not defensible because humans are generally wise. It is defensible because her engagement with a specific motorcycle over specific years has produced a specific form of understanding that the automated alternative lacks. The argument for preserving human judgment must be made case by case, with empirical attention to what each practice actually involves.
Crawford delivered The Rise of Antihumanism as a lecture in 2023. The lecture was subsequently adapted into essay form and has been widely cited in the emerging philosophical literature on AI and automation.
Four antihumanist premises. Stupid, obsolete, fragile, hateful — each captures something real about human limitation, but deployed together as justification for replacement, they constitute ideology rather than analysis.
Tacit ideology. Antihumanism operates most effectively when it is not explicitly defended — when it shapes technology development as background common sense.
The intersection parable. The Google engineer's "less idiotic" response exemplifies how antihumanism frames human agency as problem rather than context.
Specific defense required. The case for preserving human judgment must be made empirically, by examining what particular practices actually involve, rather than through abstract appeals to human dignity.
Service to concentration. The ideology legitimizes the concentration of capital and cognitive authority in corporations that own automated systems.