In April 2025, Terry Winograd delivered a talk at Berkeley's Institute of Design whose title deliberately echoed Hubert Dreyfus's 1972 book—the philosophical critique that had catalyzed Winograd's transformation from AI pioneer to phenomenological skeptic five decades earlier. The talk was remarkable for what it did not do: it did not declare victory (that language models vindicated the approach he'd critiqued) nor defeat (that his arguments had been refuted by machines' achievements). It held both observations—the distinction between processing and understanding is real; the practical implications of that distinction are narrower than predicted—and examined their tension with the care of someone who understood how easy it is to mistake appearance for reality. Winograd's position, distilled across fifty years: the capability is real and expanding, the absence of understanding is real and consequential, and the discipline is acknowledging both without collapsing into triumphalism or despair.
The talk traced Winograd's intellectual trajectory: building SHRDLU (then the most convincing apparent demonstration of machine understanding), spending twenty years explaining why it was not understanding, spending another twenty designing tools that support understanding without claiming to possess it, and spending the last decade watching machines achieve capabilities his framework had classified as impossible, maintaining through all of it the core distinction his career illuminated. The audience included AI researchers, designers, and philosophers, the Berkeley Institute serving as neutral ground where technical and humanistic perspectives could meet. Winograd's delivery was neither celebration nor lamentation but diagnostic precision: these are the territories statistical pragmatic competence has conquered, these are the boundaries where caring remains necessary, and here is why knowing the difference matters more than ever.
The talk's most striking moment came during Q&A, when a graduate student asked whether large language models had proven Winograd wrong. His response: "They've proven that I was wrong about how much you can do without understanding. They have not proven that understanding doesn't matter. They've made the question of what understanding is for more urgent, not less urgent, precisely because everything else can now be done without it." The answer compressed fifty years of intellectual honesty into a single claim: revision without surrender, adaptation without abandonment of the core insight that processing and understanding are categorically distinct.
The lecture was part of Berkeley's ongoing series on AI and society, organized in partnership with the Center for Human-Compatible AI. Winograd, professor emeritus at Stanford, rarely gave public talks by 2025; the Berkeley invitation brought him out of semi-retirement to address the generation building the systems he had spent his career studying. The choice of title signaled his method: measuring the present against the past's predictions, acknowledging where those predictions were wrong while preserving what remains right, holding the tension without resolving it prematurely.
Three claims survived contact with the evidence. First, the distinction between processing and understanding is real, and the past decade sharpened it rather than weakened it. Second, the practical implications of that distinction are narrower than predicted, a correction Winograd acknowledged openly. Third, and most consequential, design must be grounded in an honest awareness of the machine's nature.
The correction admitted: open-domain competence does not require being-in-the-world. It can be approximated through statistical mechanisms, making the territory accessible to processing vastly larger than anticipated.
The distinction preserved: what machines lack is not competence but character, not the ability to produce correct outputs but the capacity to know why correctness matters, to care whether outputs serve or harm.
The urgency increased: as capability expands, the human capacity to give a damn becomes more necessary, not less. It is the only check on capability deployed without purpose.