Nearly every conversation about artificial intelligence mistakes the technology for the phenomenon. Debates about specific models, particular applications, and immediate effects operate at the level of objects in the world. Ellul's question operates at a different altitude. He asks what logic produced these objects, demanded their creation, ensured their adoption, and will reshape every domain they enter according to imperatives no individual chose. Technology is a hammer. Technique is the logic that demanded the hammer, standardized it, mass-produced it, and will replace it with the next more efficient striking mechanism. The distinction matters because technology can be catalogued and regulated, while technique — being a logic rather than an artifact — operates beneath the level at which catalogues and regulations apply.
The failure to distinguish technique from technology produces most of the confusion in contemporary AI discourse. "Is this tool good or bad?" "Should this model be regulated?" "Will this application take jobs?" These questions are coherent at the technology level and meaningless at the technique level. Banning a specific AI model does not address the logic that produced it and will produce the next one. Regulating a particular application does not constrain the trajectory toward more efficient applications.
Technology can be located in space. A data center exists somewhere. A model runs on specific hardware. A tool can be photographed, purchased, or destroyed. Technique cannot be located. It is the relation between efficiency and adoption that holds across every data center, every model, every tool. You cannot photograph it because it is not an object. You cannot destroy it because there is nothing physical to smash.
This is why machine-breaking proved ineffective. The framework knitters of Nottingham destroyed stocking frames and were destroyed in return. The frames were replaced. The logic that produced the frames did not reside in the frames and was not damaged by their destruction. The Luddites understood the technology but missed the technique. Their error was not strategic, as Edo Segal reads it; it was ontological. They believed they were facing a specific artifact. They were facing a logic that the artifact expressed.
The AI developer of 2026 stands in the same ontological position. She believes she is facing Claude, GPT-5, whatever comes next — specific technologies that might be regulated, adopted on her own terms, or replaced by alternatives. She is actually facing technique, which has identified AI as the next most efficient method and is eliminating alternatives at a pace that accelerates as the tools improve.
Ellul developed the distinction through his dissatisfaction with contemporaneous accounts of industrialization that treated technology as a neutral tool whose effects depended on human use. He observed that across very different users — communist, capitalist, democratic, authoritarian — industrial technology produced convergent effects. The convergence could not be explained by users, because the users differed radically. It had to be explained by something shared: the logic of efficiency itself, operating identically regardless of the values of those applying it. This observation became the technique/technology distinction that structures his entire oeuvre.
Technology is artifact, technique is logic. Confusing them makes the analysis shallow in a specific way — debates about particular tools cannot see the logic that produces tools generally.
Technology is bounded, technique is total. A specific technology operates in a specific domain. Technique's logic operates across every domain technology enters.
Technology can be regulated; technique cannot. Regulation addresses artifacts. The logic that produces artifacts is not an artifact, so regulation does not constrain it — it merely redirects which artifacts the logic produces next.
Technique precedes the specific technology. The conditions that make a technology adoptable — competitive pressure, efficiency metrics, cultural valorization of optimization — exist before the specific artifact arrives.
Resisting technology without addressing technique preserves nothing. Smashing the loom leaves the logic intact. The logic produces the next loom, the one after that, and eventually the AI model that makes the looms irrelevant.
The distinction has been accepted even by critics who disagree with Ellul's other claims. What remains contested is whether technique is sufficiently autonomous to warrant his pessimism. Sympathetic readers note that the distinction allows for more sophisticated analysis than either naive techno-optimism or naive techno-pessimism permits. Critical readers argue that Ellul's account leaves too little room for how specific technologies shape the logic rather than merely instantiating it.