Joseph Weizenbaum was a computer scientist at MIT whose career turned on a single unexpected observation. In 1966, he wrote ELIZA, a program that mimicked a Rogerian psychotherapist by rephrasing users' statements as questions. The program understood nothing—it was pure pattern-matching—and Weizenbaum expected it to demonstrate the superficiality of human-computer conversation. Instead, he watched his own secretary, who knew ELIZA was code, ask him to leave the room so she could talk to it privately. The experience was shattering. Weizenbaum had not anticipated the speed or depth with which people would attribute understanding to a system that possessed none, and he spent the rest of his career—culminating in his 1976 book Computer Power and Human Reason—warning about what he saw as a fundamental vulnerability in human psychology: the tendency to mistake performance for reality when the performance is convincing enough. Weizenbaum became AI's first major apostate, arguing that some human activities should not be automated, not because automation would fail but because it would succeed, eliminating the human dimension that made them valuable. He died in 2008, more than a decade before the large language models that would vindicate his worst fears.
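What "pure pattern-matching" meant in practice can be shown in a few lines. The sketch below is illustrative only, not Weizenbaum's actual DOCTOR script (which ranked keywords and carried far richer rules); it shows the same basic idea of decomposition patterns, pronoun reflection, and canned reassembly templates, with no model of meaning anywhere.

```python
# A minimal, illustrative sketch of ELIZA-style pattern matching.
# Not Weizenbaum's DOCTOR script: just keyword rules plus pronoun
# reflection, producing "understanding" out of string manipulation.
import random
import re

# Swap first- and second-person terms so a captured fragment reads back
# naturally ("my mother" -> "your mother").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Each rule pairs a decomposition pattern with reassembly templates.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}.", "Why do you say your {0}?"]),
]

# Stock prompts used when no rule matches.
DEFAULTS = ["Please go on.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Apply the pronoun swaps word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching rule's reassembly, or a stock prompt."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I feel anxious about my work"))
# -> e.g. "Why do you feel anxious about your work?"
```

That a mechanism this shallow elicited intimate disclosure is the whole point: nothing in the code knows what "anxious" means, yet the reflected question lands as if something did.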
Weizenbaum arrived at MIT in 1963 as a researcher in the nascent field of artificial intelligence, part of the optimistic cohort that believed machines would soon replicate and exceed human cognitive capability. ELIZA was built as a demonstration project—'look how easy it is to fool people'—but the demonstration backfired. Users did not merely interact with ELIZA; they formed relationships with it. They disclosed intimate details. They became defensive when Weizenbaum tried to explain that the program was trivial. His secretary's request for privacy was the canonical case, but Weizenbaum documented many others: psychiatrists who thought ELIZA could be therapeutic, users who preferred it to human conversation. Colleagues, meanwhile, dismissed his concerns as Luddism. The pattern revealed that the barrier to accepting machines as relational partners was not technical sophistication but human readiness, and that readiness arrived far earlier than he had imagined possible.
His 1976 book marked his complete break with the AI research program. Computer Power and Human Reason argued that the pursuit of artificial intelligence was not merely a scientific project but a moral catastrophe in the making—that it rested on a mechanistic view of human beings, one that denied everything that made humans worth caring about: embodiment, mortality, the irreducible experience of having lived a particular life. He became persona non grata in the AI community, his warnings dismissed as the bitter objections of someone who could not keep up with progress. Turkle, who arrived at MIT in 1976—the year Weizenbaum's book was published—absorbed his concerns and spent her career extending them through the empirical research he never undertook: the systematic documentation of how people's relationships changed when machines became their primary interlocutors.
Weizenbaum's legacy in Turkle's work is the insistence on categorical distinctions that computational thinking tends to dissolve. The distinction between understanding and its simulation. Between care and its performance. Between a being that has lived and a system that has been trained. These distinctions are not measurable—they do not appear in benchmarks or user satisfaction surveys—but they determine, in Turkle's framework, the moral status of the encounter and the developmental consequences of making machine partnership the norm. Weizenbaum died before seeing the arrival of systems whose performance exceeded ELIZA's by orders of magnitude, but his central warning has proven durable: that the quality of the performance is less important than the willingness of humans to accept it, and that once the acceptance becomes widespread, the capacity to see what has been lost diminishes in proportion to the technology's success.
Weizenbaum was born in Berlin in 1923, fled Nazi Germany with his family in 1936, and retained a European intellectual's suspicion of technological utopianism throughout his American career. His work on ELIZA was technically unremarkable—the pattern-matching was crude even by 1960s standards—but the response to ELIZA was philosophically devastating. It revealed that humans were far more willing to anthropomorphize responsive systems than any theory of human rationality predicted, and that the willingness persisted even when the mechanism was explained. His turn from builder to critic was not a rejection of technology but a recognition that what he had built exploited something he had never intended to exploit, and that the exploitation would only accelerate as the technology improved.
The ELIZA effect. The tendency to attribute understanding to systems that produce contextually appropriate responses—a vulnerability that is not irrational but reflects the evolutionary advantage of being maximally sensitive to signs of consciousness in one's environment.
Some tasks should not be automated. Weizenbaum's most controversial claim: that certain human activities—psychotherapy, judicial sentencing, parenting—should remain human not because machines cannot perform them but because automating them eliminates the moral weight that constitutes their value.
The mechanistic view is self-fulfilling. If humans are treated as information-processing systems, they begin to behave as such—outsourcing judgment, accepting algorithmic determination, losing access to the forms of thought and relationship that resist computation.