The Google self-driving car intersection incident occurred in 2009, when a Google autonomous vehicle arrived at a four-way stop at the same time as several other vehicles. The car's algorithms, programmed with the formal rules of traffic (who arrived first, who has the right of way, what the law prescribes), could not resolve the ambiguity. It froze and had to be rebooted. Crawford has used the incident repeatedly in his writing and lectures as the paradigmatic illustration of the difference between rule-following and judgment, and as evidence of what antihumanist ideology systematically misrecognizes.
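To make the deadlock concrete, here is a minimal, hypothetical sketch of a rules-only right-of-way resolver. This is not Google's actual system; the Vehicle type, the right_of_way function, and the compass encoding of approaches are illustrative assumptions. First-to-arrive settles staggered arrivals, but the fallback rule, yield to the vehicle on your right, becomes cyclic among simultaneous arrivals, and the function can return no decision at all:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vehicle:
    id: str
    arrival_time: float  # when the car came to a stop, in seconds
    approach: str        # which leg of the intersection: "N", "S", "E", or "W"

def right_of_way(vehicles: list[Vehicle]) -> Optional[Vehicle]:
    """Apply the formal four-way-stop rules and nothing else.

    Rule 1: the first vehicle to arrive proceeds first.
    Rule 2: on a tie, yield to the vehicle on your right.

    Returns the vehicle licensed to proceed, or None when the
    rules underdetermine the outcome.
    """
    earliest = min(v.arrival_time for v in vehicles)
    tied = [v for v in vehicles if v.arrival_time == earliest]
    if len(tied) == 1:
        return tied[0]  # Rule 1 settles it

    # Rule 2: the approach "on your right" for each entry direction.
    # A car entering from the north (heading south) has the west
    # approach on its right, and so on around the compass.
    right_of = {"N": "W", "W": "S", "S": "E", "E": "N"}
    tied_approaches = {v.approach for v in tied}
    for v in tied:
        if right_of[v.approach] not in tied_approaches:
            return v  # no tied vehicle on its right: it may go
    # Every tied vehicle has another tied vehicle on its right;
    # the yield relation is cyclic and no one is licensed to move.
    return None

# Four cars arrive at the same instant: the rules return no decision.
cars = [Vehicle(id=a, arrival_time=0.0, approach=a) for a in "NSEW"]
assert right_of_way(cars) is None
```

The None return is the formal analogue of the freeze: the system has exhausted its rules and has nothing left to apply.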
The human drivers at that intersection resolved the ambiguity the way human drivers always do — through eye contact, through a kind of body language of driving, through the social intelligence that emerges when embodied agents negotiate shared space in real time. No explicit rule prescribed the outcome. The outcome was produced through the exercise of judgment by situated agents who could read each other's intentions through the subtle, embodied cues that no camera array and no language model can yet process. The incident demonstrates, with parable-like economy, that much of what looks like rule-governed behavior is actually judgment operating within a loose framework of rules, and that the judgment is not reducible to the rules.
The Google engineer's response to the incident was revealing. What he had learned, he said, was that human beings need to be "less idiotic." Crawford identifies this response as paradigmatic of antihumanism: the engineer treats human unpredictability as a deficiency to be engineered out rather than as an essential feature of the context in which the automated system must operate. Positioning human agency as the problem rather than the context is what makes the framing ideological rather than merely technical.
The incident's philosophical significance extends beyond self-driving cars. It illustrates a general pattern that recurs across AI applications: systems that perform well on well-specified problems struggle with situations requiring genuine judgment about ambiguous cases. The difficulty is not contingent on insufficient training data or imperfect algorithms; it is structural. Judgment in ambiguous cases requires exactly the kind of embodied, contextual, socially aware cognition that language-based AI systems do not possess.
The intersection incident also illustrates why Crawford's analysis applies with particular force to the AI revolution. Previous technologies automated specific operations within contexts that humans continued to manage. AI aspires to automate the contextual management itself — to handle the ambiguous, judgment-laden situations where human agents have historically done the cognitive work. The aspiration is legitimate as a research program; the assumption that it has already been achieved, or that it can be achieved through more training, is the specific antihumanist illusion that Crawford's framework identifies and resists.
The incident occurred in 2009, during early field testing of Google's self-driving car program. It was reported in the technology press and subsequently became a touchstone in philosophical discussions of autonomous vehicles. Crawford first discussed it in Why We Drive (2020) and has returned to it in subsequent lectures.
Judgment exceeds rules. Human resolution of the intersection ambiguity required cognitive operations that formal rule sets cannot specify.
Embodied social intelligence. Eye contact and body language carry information essential to resolving the ambiguity, information that no camera array currently captures.
Human agency as context, not problem. The engineer's framing treated human unpredictability as deficiency, exemplifying antihumanist ideology.
Structural limits of AI. The difficulty is not contingent on training but structural — judgment in ambiguous cases requires cognition that language-based systems lack.
Parable-like economy. The incident captures the general pattern with unusual clarity and has become a touchstone for philosophical analysis of AI's limits.