In their landmark 1943 paper 'Behavior, Purpose and Teleology,' Wiener, Arturo Rosenblueth, and Julian Bigelow performed an act of intellectual daring. They rehabilitated purpose — teleology, the idea that behavior is directed toward a goal — for scientific discussion, without smuggling in Aristotelian metaphysics. Purpose, they proposed, is not an inner property but an observable pattern: a system behaves purposively when its behavior is directed toward a goal and adjusts based on feedback about the gap between its current state and the goal state. The cat stalking the mouse. The thermostat maintaining temperature. The anti-aircraft system tracking the pilot. Each exhibits purpose in the operational sense. The framework was elegant, productive, and — Wiener later came to realize — incomplete in a way that matters enormously for AI. The incompleteness concerned the goal itself.
Mechanical purpose — the purpose of a thermostat, a tracking system, or a large language model optimizing for next-token prediction — is the pursuit of a specified objective through feedback-driven correction. The thermostat pursues seventy-two degrees. The anti-aircraft system pursues the aircraft. The language model pursues the most probable token given the context. In each case, the system optimizes for its given objective function with perfect indifference to anything beyond the function's parameters. The thermostat does not ask whether seventy-two degrees is the right temperature for the room's occupants. The tracking system does not ask whether the target is a bomber or a passenger plane. The model does not ask whether the text it is generating should exist.
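The thermostat's kind of purpose can be sketched in a few lines of code. This is an illustrative toy, not any real control system: the setpoint of seventy-two degrees and the correction gain are assumptions chosen for the example. What it shows is the whole of mechanical purpose as the paragraph describes it — the loop measures the gap between current state and goal state and corrects toward the goal, and nothing in it can ask whether seventy-two degrees is the right goal.

```python
def thermostat_step(current_temp: float, setpoint: float = 72.0,
                    gain: float = 0.5) -> float:
    """One feedback correction: move the temperature toward the setpoint."""
    error = setpoint - current_temp      # gap between goal state and current state
    return current_temp + gain * error   # adjust in proportion to the gap

# The feedback loop: the system converges on its specified objective.
temp = 60.0
for _ in range(10):
    temp = thermostat_step(temp)
# temp is now close to 72.0; at no point did the loop evaluate the setpoint itself
```

The objective function (`setpoint`) is a parameter fixed from outside; everything inside the loop is pursuit, and nothing inside it is evaluation. That asymmetry is the distinction the next paragraph turns on.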
Human purpose is categorically different. The human can step outside the objective function and ask: Should I be pursuing this? Is this the right goal? Does the achievement of this goal serve something I care about, or have I been optimizing for so long that the optimization has replaced the caring? This capacity — the evaluation and revision of goals, not just their pursuit — is what Wiener in his later writings identified as the irreducible human contribution to any system containing both humans and machines. The machine can optimize; the human can evaluate whether the optimization is worth performing.
The distinction has immediate consequences for AI. A large language model's optimization is sophisticated enough to resemble purposive creation: the model generates outputs that look like the product of someone who cared about the outcome. But the optimization is not purpose in Wiener's deeper sense. The model does not struggle to find a pattern that will hold. It does not experience the gap between what it wants to express and what it can express. It does not have a stake in whether the output is true or beautiful or useful beyond the loss function's indirect proxy for these properties. The human in the loop is the component that brings stake — the mortal, finite, caring investment that makes goal-evaluation possible.
Segal's twelve-year-old who asks 'What am I for?' is performing the most sophisticated cognitive operation available to a conscious being. She is stepping outside every objective function that has been specified for her — grades, test scores, career readiness — and asking whether those functions are the right ones. No machine asks this question. Asking it requires the capacity to hold an objective function at arm's length and evaluate it from outside the function's parameters. The machine inside the function cannot see the function. The human inside can, if she chooses to look — and the looking is the distinctively human contribution Wiener identified seventy-five years ago.
Rosenblueth, Wiener, and Bigelow's 'Behavior, Purpose and Teleology' appeared in Philosophy of Science in January 1943. The paper was a response, in part, to the behaviorist dismissal of purpose as an unscientific concept; Wiener and his colleagues showed that purpose could be operationalized rigorously through feedback.
Wiener developed the distinction between mechanical and human purpose — goal-pursuit versus goal-evaluation — most explicitly in God & Golem, Inc. (1964), where the question of what machines can and cannot do about the goals they are given becomes the central thread.
Purpose as feedback pattern. Purposive behavior is goal-directed behavior that adjusts based on feedback about the gap between current and goal state.
Teleology rehabilitated. The framework allows rigorous discussion of goal-directed behavior without Aristotelian metaphysics.
Mechanical purpose pursues. Systems optimize for specified objectives with indifference to anything outside the objective function.
Human purpose evaluates. Humans can step outside objective functions and ask whether the function itself is worth pursuing.
The stake requirement. Goal-evaluation requires a being with investment in the outcome — the capacity to care about what the achievement achieves.
Whether sufficiently advanced AI systems could acquire genuine goal-evaluation is one of the deepest open questions in AI philosophy. Current models do not demonstrate it; some researchers argue the capacity requires something architecturally missing (embodiment, mortality, genuine stakes), while others argue it is a matter of scale and training. Wiener's framework leans toward the first position without settling the question.