Autonomy is among the most abused words in contemporary technology discourse. It is deployed casually to describe vehicles, agents, and systems that operate without moment-to-moment human intervention. That usage captures something real but nothing essential. Varela's definition strips the word to its technical core: autonomy is the capacity of a system to specify its own laws. A self-legislating system is one whose organization is determined by its own operational activity rather than by external forces. The bacterium's response to a chemical gradient is determined by its own membrane chemistry, its own metabolic state, its own structural history — not by the gradient itself. The gradient triggers; the organism determines. Two bacteria in the same gradient may respond differently because their structural histories differ.
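The trigger-versus-determine distinction can be made concrete with a toy sketch. The model below is invented for illustration (the class, fields, and thresholds are not from Varela or from real chemotaxis): two simulated bacteria receive the identical perturbation, and each response is computed entirely from the organism's own accumulated state.

```python
# Toy illustration of "the gradient triggers; the organism determines":
# the same external perturbation yields different responses depending on
# each system's internal state. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Bacterium:
    # Internal state standing in for the organism's structural history.
    receptor_sensitivity: float  # shaped by membrane chemistry
    energy_reserve: float        # shaped by metabolic history

    def respond(self, gradient_strength: float) -> str:
        # The gradient is only a trigger; the response is determined
        # by the organism's own state, not by the gradient alone.
        if self.energy_reserve < 0.2:
            return "tumble"  # too depleted to swim up-gradient at all
        drive = gradient_strength * self.receptor_sensitivity
        return "run" if drive > 0.5 else "tumble"

# Identical perturbation, different structural histories:
gradient = 1.0
a = Bacterium(receptor_sensitivity=0.9, energy_reserve=0.8)
b = Bacterium(receptor_sensitivity=0.3, energy_reserve=0.8)
print(a.respond(gradient))  # run
print(b.respond(gradient))  # tumble
```

The point of the sketch is only structural: the environment appears once, as an argument, while everything that decides the outcome lives inside the object.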
Varela's autonomy is not isolation. The autopoietic organism is profoundly dependent on its environment for energy, materials, and the perturbations that drive its self-maintenance. An organism sealed off from its environment does not become more autonomous — it dies. Autonomy is not about independence from the world but about the source of the laws that govern behavior. Are those laws imposed from outside, or do they emerge from the system's own self-making activity?
The concept has direct consequences for AI. A language model is obedient, not autonomous. Its laws — architecture, training objectives, default behaviors, safety constraints — are specified by its designers. Obedience can be extraordinarily useful. It is not autonomy. The machine that operates under imposed laws, however thoughtful the imposition, is not self-legislating.
The more subtle consequence is for human builders working with AI tools. The builder's autonomy is exercised through choices she makes about what to build, what to ask, what to pursue. These choices are genuinely hers. But she specifies her own laws through a medium whose laws were specified by someone else. The tool's properties — what it handles well, what it refuses, which patterns it privileges — were determined by engineers she has never met, according to values she may not share, in service of purposes she did not choose. Over time, the structural coupling between builder and tool shapes her cognitive patterns, creative directions, and professional identity in ways that reflect the tool's design decisions as much as her autonomous choices.
This is not a catastrophe — every organism is shaped by environments it did not design. The infant does not choose its language community; the student does not design the curriculum. What distinguishes aware coupling from unaware coupling, autonomy exercised from autonomy ceded, is the ability to notice the shaping and to retain the capacity to specify one's own laws despite it. Varela's neurophenomenological method was explicitly designed to cultivate this ability — attending to one's own cognitive processes with enough precision to notice when external forces are specifying laws that should be specified from within.
The AI-era stakes are particularly acute because the tools are extraordinarily good at producing outputs that bypass conscious evaluation. The smoothness of fluent output creates the phenomenological signature of understanding before the builder has evaluated whether understanding is present. Autonomy in this environment is not merely a default state — it requires active maintenance, the continuous labor of specifying one's own laws against the current of an environment optimized to specify them for you.
Varela developed the concept of autonomy as a generalization of autopoiesis in Principles of Biological Autonomy (1979). Where autopoiesis specifies the organizational closure of cellular life, autonomy names the broader organizational principle of self-legislation that autopoietic systems exhibit. The generalization allowed Varela to extend autonomy-theoretic analysis to systems (like the immune system or cognitive processes) that exhibit organizational closure at levels above the cellular.
Self-legislation, not independence. Autonomy is the capacity to specify one's own laws, not the capacity to operate without environmental interaction.
Environment triggers, organism determines. The perturbation comes from outside; the response is specified from within, by the system's own organizational state.
AI is obedient, not autonomous. Machine laws are imposed by designers. However sophisticated the obedience, it does not constitute autonomy.
Ethical action requires autonomy. In Ethical Know-How (1999), Varela argued that ethical judgment is not rule-application but embodied wisdom emerging from autonomous self-legislation — a capacity that cannot be algorithmic.
The specification of laws is the living. For autopoietic systems, self-making and self-legislating are the same activity. To be alive is to specify one's own laws through one's own operational existence.
The relationship between autonomy and moral responsibility is contested. If autonomy is a biological property of all autopoietic systems, does a bacterium bear moral responsibility for its actions? Varela's response distinguished grades of autonomy: the minimal biological autonomy present in all life, and the reflective autonomy of organisms capable of attending to their own cognitive processes. Moral responsibility requires the latter; the former is its necessary precondition.