The autonomy claim is what separates Ellul's analysis from conventional technology criticism. A critic who believes technology is a neutral tool argues that the effects depend on how we use it — good tools, bad tools, wise users, foolish users. Ellul rejected this framing as systematically misleading. The tool is not neutral because the logic that produced it is not neutral. A tool designed to maximize efficiency will be deployed in ways that maximize efficiency, regardless of the user's values, because the competitive environment rewards maximum deployment and penalizes principled restraint.
The mechanism is neither mysterious nor conspiratorial. In any competitive field, the actor who adopts the most efficient method outperforms the actor who does not. The outperforming actor captures resources, market share, institutional legitimacy. The underperforming actor is eliminated — not by force but by irrelevance. Over time, the selection pressure produces a population of actors who have adopted the most efficient methods, regardless of their private preferences. The trajectory emerges from the selection, not from the preferences.
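The selection dynamic described above can be sketched as a toy simulation. Everything here is illustrative: the payoff numbers, the starting share of adopters, and the replacement rule are assumptions chosen only to show that the aggregate outcome tracks selection pressure rather than initial preferences.

```python
import random

def simulate(generations=30, population=200, seed=1):
    """Toy model of competitive selection. Actors either adopt the most
    efficient method (True) or refuse on principle (False). Survival
    depends only on performance; private preferences play no role."""
    rng = random.Random(seed)
    # Start with preferences leaning heavily against adoption: only 20% adopt.
    actors = [rng.random() < 0.2 for _ in range(population)]
    for _ in range(generations):
        # Performance: adopters outperform refusers, with some noise.
        scored = [((1.0 if a else 0.6) + rng.gauss(0, 0.1), a) for a in actors]
        scored.sort(reverse=True)
        # The underperforming half is eliminated -- not by force, by irrelevance.
        survivors = [a for _, a in scored[: population // 2]]
        # Eliminated actors are replaced by copies of survivors, modeling
        # the capture of resources, market share, and legitimacy.
        actors = survivors + [rng.choice(survivors) for _ in range(population // 2)]
    return sum(actors) / len(actors)  # final fraction of adopters

share = simulate()
```

Although adoption starts as a minority preference, the surviving population converges toward universal adoption: the trajectory emerges from the selection, not from what any actor privately wanted.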
This is why individual resistance proves structurally insufficient against technique. The system does not need to coerce individuals. It needs only the competitive environment that compels their institutions to adopt whatever methods maximize competitive performance. The institutions then transmit the pressure to individuals through ordinary mechanisms of employment, evaluation, and advancement.
The AI transition illustrates autonomy with unusual clarity. No one decided that large language models should reshape knowledge work. The reshaping followed from the tools' existence plus the competitive environment plus the logic of efficiency. Even builders who see the costs — Edo Segal among them — find themselves inside a system that makes refusal structurally unviable. The refusal is admirable but local; the adoption is compelled but systemic.
Ellul developed the autonomy thesis through three decades of observation. His wartime experience of watching administrative systems pursue their internal logics regardless of moral consequence — the bureaucracy of occupation processing refugees as cases rather than persons — provided the empirical ground. His theological conviction that human freedom requires resistance to systems that claim the status of necessity provided the normative framework. The synthesis appeared in The Technological Society and was extended in every subsequent work.
Autonomy is structural, not agential. Technique does not possess intentions. Its trajectory emerges from the structure of competitive environments in which efficient methods are rewarded and inefficient ones eliminated.
Intentions do not determine outcomes. The builder who intends to use AI thoughtfully and the builder who intends to use it recklessly end up in the same competitive environment, producing similar trajectories, because the system shapes behavior more powerfully than intention does.
Selection operates invisibly. The population of surviving institutions in any competitive field has been filtered for efficiency, not for the values that their founders professed. The filtering happens through bankruptcy, acquisition, irrelevance — mechanisms that feel impersonal because they are.
Individual virtue is necessary but insufficient. Moral achievement at the individual level preserves the resister's integrity and produces locally better outcomes. It does not alter the trajectory, because the trajectory operates at a scale individual virtue cannot reach.
Refusal is treated as irrationality. Because technique equates efficiency with rationality, any choice that prioritizes other values appears, within the system's evaluative vocabulary, as a failure of rational judgment.
The autonomy claim has been attacked from both directions. Techno-optimists argue it is too pessimistic, denying the genuine agency of builders, regulators, and users who do shape technology's trajectory. Techno-pessimists argue it is too optimistic, implying that structural critique alone can illuminate possibilities for change. Ellul's position sits between: agency is real at the individual and local institutional level, but structural forces determine the aggregate trajectory unless they are met by countervailing structures. The question for AI is whether such countervailing structures can still be built.