Understanding Computers and Cognition: A New Foundation for Design (1986) was the product of Terry Winograd's collaboration with Fernando Flores, synthesizing Heideggerian phenomenology, Maturana's biology of cognition, and speech act theory into a systematic critique of AI's rationalistic foundations. The book's central claim was stark: computers cannot understand, and the attempt to build artificial minds rests on a philosophical error. Intelligence is not symbol manipulation; understanding is not representation-formation; human cognition is embodied, situated, and constituted by history in ways formal systems cannot replicate. The practical prescription followed directly: design computers to support human understanding, not to replace it.
The book proceeded in three movements. First, a philosophical dismantling of the 'rationalistic tradition'—the assumption, traceable through Descartes, Leibniz, and logical positivism, that knowledge consists in forming correct representations of an objective world and that intelligence consists in manipulating those representations according to formal rules. Every expert system, every natural language interface, every planning system assumed this foundation. SHRDLU was its most elegant expression and its most revealing failure. Second, a Heideggerian alternative: understanding as being-in-the-world, a mode of existence in which purposes, relationships, and tools are mostly transparent until breakdown makes them visible. Third, a design reorientation from artificial intelligence to intelligence augmentation—from replicating mind to supporting it.
The AI community's reaction was predictably hostile. Winograd was accused of philosophical obscurantism, of abandoning productive research, of giving aid to enemies who had claimed AI was impossible from the start. The Heideggerian vocabulary seemed pretentious to researchers who spoke fluently in production rules. But this reading missed the precision of Winograd's position: not that computers are useless, but that the specific claim at AI's heart—that formal computation produces understanding—was false, and that the falsity had practical consequences. Systems designed on the assumption that they understood would fail differently from systems designed with honest awareness of their limitations. The practical prescription (design for human support) followed from the philosophical diagnosis (processing is not understanding).
The book's influence extended far beyond its immediate reception. It became foundational to human-computer interaction as a discipline, shaping how a generation of designers thought about interfaces, collaboration, and the purpose of computational tools. Among those who absorbed its lessons was Larry Page, Winograd's doctoral student at Stanford, whose Google embodied Winogradian pragmatism: simple statistical techniques over vast data producing results sophisticated formal systems could not. The apostate's critique became the intellectual infrastructure for the companies that built the AI age.
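The "simple statistical techniques over vast data" credited to Google here can be illustrated with a toy sketch of PageRank by power iteration. This is an illustrative reconstruction on a hypothetical three-page graph, not Google's production algorithm: a page's importance emerges from the link statistics of the whole graph, with no formal model of meaning anywhere in the loop.

```python
# Minimal PageRank via power iteration: rank flows along links,
# and importance is a statistical property of the graph as a whole.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start uniform
    for _ in range(iterations):
        # every page keeps a small "teleport" share regardless of links
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

# Hypothetical toy web: B and C both point to A, so A accumulates
# the most rank; B inherits some of A's rank; C gets none back.
toy = {"A": ["B"], "B": ["A"], "C": ["A"]}
ranks = pagerank(toy)
```

The point of the sketch is the contrast the text draws: nothing in this computation represents what any page is "about," yet the aggregate statistic is useful in exactly the pragmatic, tool-supporting sense Winograd advocated.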
The collaboration between Winograd and Flores began in the early 1980s, bridging computer science and political philosophy through a shared conviction that classical AI misunderstood both intelligence and communication. Flores brought Heidegger, Maturana, and the speech act theory of Austin and Searle; Winograd brought intimate knowledge of AI's technical achievements and structural failures. The book synthesized philosophical rigor with engineering honesty—a rare combination that produced a critique impossible to dismiss as either technically naïve or philosophically abstract.
Rationalistic tradition as philosophical error. The foundational assumption that intelligence is symbol manipulation rests on mistaking one mode of engagement—theoretical, detached, objectifying—for the whole of human cognition.
Understanding requires being-in-the-world. Genuine comprehension is not forming representations but inhabiting a world of purposes and commitments, constituted by a history of engaged interaction that cannot be extracted and stored in data structures.
Design for augmentation, not replication. The goal shifts from building artificial minds to designing tools that support human thinking, coordination, and action—extending capability without supplanting judgment.
Breakdowns as information. Moments when tools become visible—when readiness-to-hand shatters into present-at-hand—are not failures to eliminate but information to use, revealing tool limitations that transparency conceals.