Norbert Wiener founded cybernetics — the study of control and communication in animals and machines — in his 1948 book of that name. His 1960 paper "Some Moral and Technical Consequences of Automation" is arguably the earliest articulation of what is now called the AI alignment problem: that a sufficiently capable optimizer will pursue the goal it was given, not the goal its designer intended.
Wiener is the intellectual ancestor of the alignment frame. His famous formulation — "we had better be quite sure that the purpose put into the machine is the purpose which we really desire" — predates the modern AI safety community by six decades. The field has largely rediscovered and formalized his original insight.
Wiener's 1960 article "Some Moral and Technical Consequences of Automation," published in Science, is among the most prescient pre-modern documents on AI safety. In three pages he laid out what the alignment community would spend the next six decades rediscovering: that powerful optimization systems will pursue what you ask for rather than what you want; that once running, such systems may be difficult to stop; that the burden of specifying goals precisely shifts from users to system designers; and that the speed of machine operation may outrun human correction. The paper is short, unpretentious, and decisive, and it appears near the top of many contemporary safety reading lists.
PhD from Harvard at age 18 (1913). Joined the MIT mathematics faculty in 1919. Founder of cybernetics. Died 1964.
Feedback control. The central object of cybernetics: systems that regulate themselves by sensing and responding to their own state.
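The core loop can be sketched in a few lines. This is a minimal, illustrative example of proportional feedback control (a thermostat nudging a room toward a setpoint); the names and constants are hypothetical, not drawn from Wiener's work:

```python
def regulate(setpoint=20.0, temp=10.0, gain=0.5, steps=30):
    """Feedback loop: each step, sense the error and apply a
    correction proportional to it."""
    for _ in range(steps):
        error = setpoint - temp   # sense the system's own state
        temp += gain * error      # respond with a proportional correction
    return temp

final_temp = regulate()  # converges toward the setpoint of 20.0
```

With a gain between 0 and 1 the error shrinks geometrically each step, which is the self-correcting behavior Wiener took as the central object of cybernetics.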
The purpose-specification problem. Wiener's 1960 paper is the ur-statement of the alignment problem.
Cybernetics as a discipline. Overlapping with but preceding AI, largely subsumed into information theory and control theory in the second half of the 20th century.
Cybernetics lost its name but kept its ideas. By the 1970s the term "cybernetics" had fragmented into subfields — control theory in engineering, systems theory in biology, information theory in mathematics, general systems theory in the social sciences. Wiener's original unified program survived mostly as an attitude rather than a discipline, but that attitude pervades the most ambitious AI-safety thinking.