The Autonomous Vehicle Critique — Orange Pill Wiki
CONCEPT

The Autonomous Vehicle Critique

Solnit's observation that driverless cars are called autonomous but driving is a cooperative social activity—a distinction exposing the category error at the heart of AI automation.

Rebecca Solnit's February 2024 observation in the London Review of Books—that driverless cars are called autonomous vehicles but driving is not an autonomous activity—became the most quoted passage of her "In the Shadow of Silicon Valley" essay because it crystallized a structural misunderstanding embedded across AI deployment domains. Driving is a cooperative social activity conducted through eye contact, gesture, hand signals, timing, and the thousand micro-negotiations that allow millions of strangers to share roads with far fewer collisions than would otherwise occur. When the human is removed from the vehicle, the social negotiation does not become more efficient; it becomes impossible. The machine can process sensor data and optimize routes, but it cannot make eye contact with a pedestrian stepping into a crosswalk or wave a cyclist through an intersection. The activities now framed as candidates for autonomy—teaching, diagnosing, managing, designing—are, like driving, cooperative social activities involving not just information processing but the navigation of meaning between participants who bring different contexts, needs, and stakes to the interaction.

In the AI Story


The critique emerged from Solnit's direct observation of San Francisco's streets after the California Public Utilities Commission granted Cruise and Waymo expanded deployment permits despite the San Francisco fire chief's explicit warnings that autonomous vehicles had blocked firetrucks, parked on hoses, and interfered with emergency response. The fire chief possessed direct, embodied knowledge of consequences. The regulatory body possessed institutional authority. The knowledge lost; the authority won. The pattern—institutional decision-making that overrides situated expertise from people who bear the consequences—is the pattern Solnit has documented across her career, now replicated in the AI governance vacuum.

Jean Burgess, a professor at Queensland University of Technology, drew the broader implication explicitly: Solnit's observation "applies to so many more of AI's current and proposed application domains." The teacher explaining a concept is not transmitting data—she is reading the student's face, adjusting pace, choosing metaphors based on what she knows about this particular student's struggles and capacities. The doctor delivering a diagnosis is not outputting a classification—he is managing fear, calibrating hope, navigating the specific terrain of this patient's ability to hear difficult information. These activities can be assisted by AI; they cannot be replaced by it, because replacement eliminates the social dimension that constitutes the work's actual value.

The autonomous vehicle critique exposes the category error structuring much of AI deployment: the treatment of fundamentally social activities as information-processing tasks that can be optimized through the removal of human judgment. The error is not technical but ideological—the same ideology that produced the factory system, that treated workers as interchangeable units optimized for throughput. The technology is new; the logic is ancient; and the logic produces the consequences it has always produced: efficiency gains captured by infrastructure owners, social costs borne by the people whose cooperative activities have been reframed as friction to be eliminated.

Origin

Solnit had been writing about Silicon Valley's transformation of San Francisco since the 1990s, documenting displacement, the conversion of public space into private amenity, and the specific forms of violence operating under the rhetoric of innovation. The autonomous vehicle critique synthesized decades of observation into a single, devastating example. The example resonated widely—quoted in technology discourse, urban planning debates, and AI ethics conversations—because it made visible an assumption most people had not noticed they were making: that autonomy (independence from human direction) is desirable across all domains. Solnit's intervention was to name the domains where autonomy is not merely undesirable but impossible—where the removal of human social presence eliminates the activity's value rather than enhancing its efficiency.

Key Ideas

Driving Is Cooperative, Not Individual. The person behind the wheel communicates constantly with other drivers, cyclists, pedestrians—eye contact at intersections, hand waves, the timing that allows someone to merge. Remove the human and you eliminate the communication, producing a vehicle that can navigate but cannot negotiate.

The Eye Contact You Cannot Make. San Francisco Airport installed signs instructing pedestrians to make eye contact with drivers before crossing. There is no one in a driverless car to make eye contact with—a small detail that reveals a categorical difference between human-operated and algorithmic systems.

Social Activities Misclassified as Autonomous Tasks. The ideology framing teaching, medicine, management, and creative work as information-processing tasks amenable to automation is the same ideology that calls driving autonomous. The misclassification is not innocent—it justifies the removal of human judgment from domains where judgment is constitutive rather than incidental.

Efficiency vs. Negotiation. Optimization assumes a single metric (speed, cost, throughput) can govern decision-making. Social activities require multi-dimensional negotiation among participants with different and sometimes incompatible goals. AI can optimize; it cannot negotiate in the sense that requires mutual recognition of the other as a subject with legitimate but different needs.

The Fire Chief's Knowledge. The person with direct, embodied, consequence-bearing knowledge of how a system fails in practice is structurally disadvantaged in institutional decision-making against the authority possessing formal power. This asymmetry—repeated across AI deployment domains—is not a bug but a feature of how power operates when divorced from accountability.

Further reading

  1. Rebecca Solnit, "In the Shadow of Silicon Valley," London Review of Books (8 February 2024)
  2. Lucy Suchman, Human-Machine Reconfigurations (Cambridge University Press, 2007)
  3. Donald Norman, The Design of Everyday Things (Basic Books, 1988; revised 2013)
  4. Sherry Turkle, Alone Together (Basic Books, 2011)
  5. Langdon Winner, "Do Artifacts Have Politics?" Daedalus (1980)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.