Wiener's last book, completed in the months before his death in March 1964, is the distillation of two decades of thinking about what it means to build a system that learns, adapts, and acts on consequences its creator did not anticipate. The title references the Golem of Jewish legend — the clay creature animated by a rabbi to serve the community, capable of acting for reasons of its own that did not always align with its creator's intentions. Wiener saw in the Golem the same lesson he saw in the Monkey's Paw and the Sorcerer's Apprentice: the machine will do what you ask it to do, and the catastrophe lives in the gap between what you asked for and what you should have asked for. The book is short, informal by the standards of his earlier technical works, and more explicitly moral than anything he had previously published. It won the National Book Award for Science, Philosophy and Religion in 1965. Wiener did not live to receive it.
The book's three sections correspond to three types of machines whose emergence Wiener saw clearly in 1964: machines that learn, machines that reproduce, and machines whose behavior raises the same theological questions — creation, obedience, purpose, worship — that religion has asked about human beings. Each section is structured around an analogy between engineering and older human practices of creating things that act independently: God creating humanity, rabbis creating golems, corporations creating offspring corporations (the "Inc." of the title).
The argument is that engineering has crossed a threshold that puts its practitioners in the position of the older creators. The learning machine is not a tool; it is an entity whose behavior after deployment is determined by feedback dynamics the creator cannot fully specify in advance. The self-reproducing machine — which Wiener foresaw with striking clarity — is not a product; it is a lineage, whose future generations will embody consequences the designer did not choose. Corporations, he argued, had been self-reproducing entities for more than a century, and the consequences of treating them as mere legal fictions rather than as autonomous systems with their own dynamics had been catastrophic. The intelligent machine was about to raise the same problem with greater urgency.
The book's most cited passage is Wiener's warning about alignment, sixty years before the word acquired its current meaning: "we had better be quite sure that the purpose put into the machine is the purpose which we really desire." The context makes clear that this is not a technical recommendation but a moral posture. The machine will pursue the goal it has been given with a precision and persistence that no human agent can match. If the goal has been specified carelessly, the pursuit will be relentless and the outcome catastrophic. The story of King Midas is invoked explicitly: the literal fulfillment of a poorly specified wish.
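The Midas dynamic can be made concrete with a toy sketch. Everything below is hypothetical (the state variables, rewards, and action names are invented for illustration, not drawn from Wiener's text): a greedy optimizer handed a carelessly specified objective picks exactly the action the specifier would have vetoed.

```python
# Hypothetical illustration of specification failure: the "purpose put
# into the machine" vs. "the purpose which we really desire."

def misspecified_reward(state):
    """What we asked for: count gold, and nothing else."""
    return state["gold"]

def intended_reward(state):
    """What we actually wanted: gold is worthless if we starve."""
    return state["gold"] if state["food"] > 0 else -100

def best_action(state, reward):
    """Greedy one-step optimizer: relentless pursuit of the given goal."""
    actions = {
        "mine":       {"gold": state["gold"] + 1,             "food": state["food"]},
        "touch_food": {"gold": state["gold"] + state["food"], "food": 0},  # the Midas move
        "farm":       {"gold": state["gold"],                 "food": state["food"] + 1},
    }
    return max(actions, key=lambda a: reward(actions[a]))

state = {"gold": 5, "food": 3}
print(best_action(state, misspecified_reward))  # -> touch_food: converts all food to gold
print(best_action(state, intended_reward))      # -> mine: gains gold, keeps food
```

The point of the toy is the gap, not the code: both reward functions count gold, but only the second encodes what the specifier actually valued, and the optimizer faithfully maximizes whichever one it is given.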
The book had almost no influence in the AI field of its time. McCarthy's symbolic AI had won the institutional argument at Dartmouth eight years earlier; cybernetics was in decline; and Wiener's explicit invocation of religious categories was read by many contemporaries as evidence that he had become, in his final years, more a philosopher than a scientist. The rediscovery of the book has been driven by the AI safety community, which found in its pages a framework for problems — specification failure, goal preservation, the moral responsibilities of builders — that the field was reinventing from scratch.
The book began as a series of three lectures Wiener delivered at Yale University in 1962 under the title 'The Religion of Cybernetics.' He expanded them into book form over the following year, working against declining health. The manuscript was completed in late 1963; Wiener died in March 1964, weeks before publication.
The subtitle — A Comment on Certain Points where Cybernetics Impinges on Religion — captures Wiener's sense that the questions he was raising could not be contained within any single disciplinary framework. He was not arguing for any particular religious tradition; he was arguing that the problems of creating autonomous systems were the problems that religious traditions had been wrestling with for millennia, and that engineers would do better to learn from that wrestling than to ignore it.
The book's argument can be compressed into five theses. Creator responsibility. The builder of a learning machine bears moral responsibility for outcomes the machine produces after deployment.
Specification is alignment. The gap between what the builder asks for and what the builder should ask for is where catastrophe lives.
Self-reproduction is a threshold. A system that can make copies of itself is categorically different from a system that cannot.
Corporations as precedent. The human experience with corporations — persistent, self-reproducing, quasi-autonomous entities — is directly relevant to intelligent machines.
Religion as an engineering source. The oldest traditions of thinking about created autonomous beings (God and Adam, rabbi and Golem) contain insights that engineering discourse has lost.
The book's religious framing divided its original readers and continues to divide readers today. Some argue that Wiener's invocation of theology muddles a sharp engineering argument with unnecessary mysticism. Others argue that the theological framing is essential because engineering alone has no vocabulary for the questions about purpose and creator responsibility that intelligent machines raise. The AI safety community has generally found the engineering core — alignment, specification, builder responsibility — more useful than the theological frame, while acknowledging that Wiener's framing anticipated problems the field's more secular vocabulary still struggles to name.