Cybernetics is the science of feedback loops. Wiener coined the term in 1948 from the Greek kybernetes, the steersman of a ship, because he understood that the central question of any purposive system is not what its components can do but whether someone is reading the water and adjusting the tiller. Cybernetics treats intelligence as a property of loops rather than of individual minds or machines: a relational, distributed phenomenon that lives in the connections between components rather than within any component alone. The field flourished in the 1940s and 1950s, grew up intertwined with control theory and information theory, and was then deliberately excluded from the AI research agenda at the 1956 Dartmouth Workshop. It is now being rediscovered, sixty years later, as the framework the AI safety community needs and the AI field was constructed to avoid.
The word kybernetes carried its meaning precisely. The steersman does not row, build the vessel, or choose the destination. The steersman reads the water, feels the wind shift against the hull, watches the current bend around the headland, and makes continuous small corrections that keep the ship oriented toward its destination against every force that would push it off course. Remove the steersman and the ship drifts. Remove the ship and the steersman is a person gesturing at the ocean. Neither component, in isolation, produces purposive behavior. The loop between them is where the purpose lives. This insight — that the unit of analysis is the loop rather than the component — is the founding intuition of the field.
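The founding intuition can be made concrete in a few lines of code. The sketch below is a minimal illustration, not anything from the cybernetics literature: it models the steersman as a simple proportional feedback controller in Python, with an invented random disturbance standing in for wind and current, and an arbitrary gain. Remove the correction line and the heading becomes a random walk.

```python
import random

def steer(desired_heading: float, steps: int = 200, gain: float = 0.3) -> float:
    """A steersman as a proportional feedback controller.

    Each step, a disturbance (wind, current) pushes the heading off
    course; the steersman reads the error and corrects the tiller.
    """
    heading = 0.0
    for _ in range(steps):
        heading += random.uniform(-2.0, 2.0)   # the water pushes the ship
        error = desired_heading - heading      # reading the water
        heading += gain * error                # adjusting the tiller
    return heading

print(steer(90.0))  # stays near 90 degrees despite constant disturbance
```

Neither the disturbance nor the correction is intelligent in isolation; the steadiness of the heading is a property of the cycle of measurement and adjustment, which is exactly the point.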
Cybernetics emerged from the confluence of several wartime research programs: Wiener and Bigelow's work on anti-aircraft fire control, Claude Shannon's information theory at Bell Labs, McCulloch and Pitts's neural network formalism, and the Macy Conferences (1946–1953) that brought mathematicians, neurophysiologists, anthropologists, and psychiatrists into sustained dialogue about feedback, communication, and control. The intellectual promiscuity of this moment — Wiener debating Margaret Mead about cultural transmission, or Warren McCulloch sketching neural logic circuits alongside Gregory Bateson — produced a conceptual framework unlike anything before or since: a genuinely interdisciplinary science of purposive systems.
The field's institutional decline began with John McCarthy's 1956 rebranding at Dartmouth. McCarthy explicitly chose 'artificial intelligence' to escape association with cybernetics, as he later admitted; he wished to avoid either accepting Wiener as a guru or arguing with him. The rebranding was consequential. Cybernetics understood intelligence as a property of loops; McCarthy's AI understood it as a property of machines. Cybernetics was relational; McCarthy's AI was atomistic. Cybernetics required a human in the picture; McCarthy's AI aspired to autonomous reasoning systems. The theory that won was not the more correct one (six decades of dead ends in symbolic AI suggest as much) but the more fundable one. The consequences shape the field's confusions to this day.
The irony is that when AI finally broke through in the 2010s, it did so by rediscovering cybernetic principles without crediting them. Backpropagation is negative feedback. Gradient descent is error correction. The training of the transformer architecture that underlies every modern LLM is fundamentally a feedback loop: outputs are compared to targets, errors propagate backward, weights adjust. A 2019 Nature Machine Intelligence editorial observed that Wiener's framework, ignored at Dartmouth, is now undergoing a revival, especially around the augmentation of human abilities rather than their replacement.
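The correspondence is easy to exhibit directly. The following is a minimal sketch in plain Python with invented toy data (points on the line y = 3x): gradient descent on a one-parameter model is a negative feedback loop in which the output error is measured and fed back to adjust the weight, with the learning rate playing the role of the loop gain.

```python
# Toy data: points on the line y = 3x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0                # initial weight: a wrong guess
learning_rate = 0.01   # the gain of the feedback loop

for step in range(200):
    # Forward pass: compare outputs to targets.
    errors = [w * x - y for x, y in zip(xs, ys)]
    # Backward pass: gradient of mean squared error with respect to w.
    grad = sum(2 * e * x for e, x in zip(errors, xs)) / len(xs)
    # Corrective step: adjust the weight against the error signal.
    w -= learning_rate * grad

print(round(w, 3))  # converges to ~3.0, the slope of the data
```

Every element of Wiener's loop is present: a measured deviation, a channel feeding it back, and a correction proportional to it. Scaled up to billions of weights, the same cycle is the training run of a modern LLM.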
Rosenblueth, Wiener, and Bigelow's 1943 paper 'Behavior, Purpose, and Teleology' is the founding document. It rehabilitated teleology for science by redefining purpose as an observable feedback pattern rather than a metaphysical property. Wiener's 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine was the technical treatise; The Human Use of Human Beings (1950) was the popular extension into social and ethical territory.
The Macy Conferences (1946–1953), organized by the Josiah Macy Jr. Foundation, were the institutional home of early cybernetics. Transcripts were published as a five-volume series that remains the best record of a genuinely interdisciplinary intellectual moment.
The loop is the unit. Intelligence, purpose, and control are properties of feedback systems, not of components in isolation.
Steering, not building. The field's name emphasizes continuous correction rather than one-time construction.
Biological and mechanical unity. The same mathematics describes organisms, machines, and institutions as purposive systems.
Information over energy. Cybernetics inverts the industrial-era focus on power by treating information flow as the organizing principle.
Excluded from AI by design. McCarthy's 1956 rebranding was a deliberate act of intellectual foreclosure whose consequences still shape the field.
The 'two cybernetics' debate (first-order vs. second-order) divides the tradition into those who study observed systems from outside and those who insist the observer is always part of the system. Second-order cybernetics, developed by Heinz von Foerster and others in the 1970s, anticipated many contemporary AI interpretability concerns by insisting that no system can be understood apart from the observer's relationship to it.