An allopoietic system is produced by others and produces for others. A factory produces cars; it does not produce the factory. A printing press produces books; it does not produce the printing press. The purpose belongs to the designers and users, not to the system. The allopoietic/autopoietic distinction is Varela's sharpest diagnostic instrument for the AI moment: the distinction is organizational rather than evaluative, and no increase in operational sophistication moves a system across the threshold. A large language model processes language with extraordinary fluency, but its silicon is manufactured elsewhere, its architecture designed elsewhere, its training data curated elsewhere, its electricity generated elsewhere. At no point in the production chain does the system produce the components that produce the system.
The allopoietic category is not a diminished version of autopoiesis. Varela was explicit that allopoietic systems are not inferior — they are simply different. A cathedral is not a lesser thing than a coral reef. A symphony is not a lesser thing than birdsong. The difference is categorical, not hierarchical, and the categorical difference has consequences that no ranking of one over the other can capture.
The distinction becomes most visible at the moment of failure. When an autopoietic system is perturbed beyond its capacity to maintain itself, it dies — the cessation is irreversible, because organizational closure cannot be restored from outside. When an allopoietic system is perturbed beyond its operational parameters, it breaks — and a broken machine can be repaired, its parts replaced, its state recovered from backup. The machine that cannot die was never alive; the system that was never alive does not know its world in the autopoietic sense.
For the AI discourse, the classification cuts against both triumphalist attribution and catastrophist dismissal. The triumphalist error treats output quality as evidence of cognition — if the system produces paragraphs that capture subtle distinctions, something cognitive must have produced them. The allopoietic classification blocks this inference: sophisticated output can be produced by systems that do not engage in cognition as autopoietic systems do. The catastrophist error treats AI as "mere" statistical mimicry; allopoiesis resists this too, by insisting that what the machine does is real, valuable, and consequential — just not cognition in the organizational sense.
The classification extends to a broader family of systems — ecosystems, markets, institutions — that exhibit some autopoietic-like properties without meeting the full criterion. These systems self-organize, adapt, and persist, but they do not produce their physical components through their own operation. The technium as a whole is allopoietic in this extended sense: it evolves, grows, and adapts, but it depends on human autopoietic activity for its continued production.
Varela developed the autopoietic/allopoietic distinction alongside Maturana in the early 1970s as a way of making the autopoietic criterion diagnostically sharp. The pair of terms is symmetric in form but asymmetric in role: autopoiesis is the positive criterion (self-producing), allopoiesis is the residual category (everything else that produces, but not itself). Most engineered systems, most machines, most tools, most products of culture fall into the allopoietic category by definition.
Productive but not self-producing. Allopoietic systems produce real outputs — cars, books, electricity, language, code. What they do not produce are the components that produce the system. The production flows outward, not inward.
External specification of laws. An allopoietic system's organization is determined by external designers, according to external purposes. Its laws are specified from outside — it does not specify its own laws.
Replaceable and restorable. Allopoietic systems can be copied, backed up, restored. Their "identity" is their functional specification, which can be preserved and re-instantiated. No biography is lost when the machine is rebooted.
Purpose without stakes. The purposes an allopoietic system serves belong to its users and designers, not to the system. The factory does not care whether cars are useful; the language model does not care whether its outputs are true. Caring requires stakes, and stakes require self-making.
The misreading that over-attributes cognition. When human readers encounter allopoietic output that captures subtle human distinctions, the natural inference is that the system producing it must have understood those distinctions. Varela's framework blocks the inference: the distinctions are real in the human world the text describes, extracted statistically, delivered back — but the machine did not live in that world.
Some theorists argue the autopoietic/allopoietic distinction is too sharp — that autonomy and self-maintenance exist on a continuum, and systems like the immune system or even some engineered platforms occupy intermediate positions. Varela and his successors, particularly Evan Thompson, have defended the threshold character of the distinction, arguing that what looks like continuity is actually a mixture of systems at different organizational levels, some autopoietic and others not.