By Edo Segal
The tool I trusted most was the one I never decided to use.
That sentence has been rattling around my head for weeks, and I cannot make it stop. Not because it is clever. Because it is true in a way that implicates me specifically, personally, in my own body, at my own desk, at three in the morning when I am still building and cannot explain to myself why I have not stopped.
I did not decide to use Claude Code the way you decide to buy a car or hire an employee. There was no moment of deliberation. The tool appeared. It worked. It worked so well that not using it became unthinkable overnight. And once something becomes unthinkable, you have stopped choosing it. You are simply inside it.
Jacques Ellul saw this pattern before any of us were born. Not the specific technology — he died in 1994, before the web had teeth — but the logic underneath it. The logic that says: once a more efficient method exists, adopting it is no longer a choice. It is a compulsion dressed in the language of reason. You are not forced. You are simply irrational if you refuse. And in a civilization that worships rationality, irrationality is exile.
Every other thinker I have engaged with on this journey offered me a lens I could hold at arm's length. Ellul handed me one I had to press against my own eye. His argument is not that AI is dangerous. His argument is that the logic driving AI's creation and adoption is autonomous — it follows its own imperatives regardless of what any builder intends — and that every dam I build, every practice I institute, every moment of discipline I exercise is constructed from materials the river itself provided.
That does not make the dams worthless. It makes them insufficient on their own. And the distance between worthless and insufficient is where the most important thinking of our era needs to happen.
I brought Ellul into this series because his framework does something no other thinker's does: it turns the mirror on the builder. Not on the technology. Not on the market. On me. On the self that has been shaped by decades of operating inside systems that reward efficiency above all else, and that now tries to resist those systems using the only tools it knows — tools forged in the same fire.
This is not comfortable reading. It was not comfortable writing. But if the orange pill means anything, it means seeing what is actually there, even when what is there includes you.
— Edo Segal × Opus 4.6
1912–1994
Jacques Ellul (1912–1994) was a French sociologist, theologian, and professor of law at the University of Bordeaux whose work constitutes one of the twentieth century's most sustained and rigorous critiques of technological civilization. Born in Bordeaux to a family of mixed national heritage, Ellul was active in the French Resistance during World War II and was later recognized as Righteous Among the Nations by Yad Vashem for sheltering Jewish refugees. His intellectual output spanned more than fifty books and hundreds of articles across sociology, theology, politics, and ethics. His landmark work *The Technological Society* (1954; English translation 1964) introduced the concept of *la technique* — not technology itself but the totalizing logic of efficiency that governs all modern institutions — and argued that this logic had become autonomous, developing according to its own imperatives regardless of human intention. Subsequent works including *Propaganda* (1962), *The Technological System* (1977), and *The Technological Bluff* (1988) extended his analysis across media, politics, and information systems. Ellul's thought influenced thinkers from Ivan Illich to Neil Postman and has experienced a significant resurgence in the age of artificial intelligence, as his warnings about technique's self-augmenting, alternative-eliminating logic prove increasingly difficult to distinguish from empirical description.
The word "efficiency" appears in the average corporate earnings call approximately forty-seven times. It appears in university strategic plans, hospital mission statements, government procurement guidelines, parenting blogs, fitness apps, and the marketing copy for every AI product released since 2023. The word has become so pervasive that it functions less as a descriptor than as an atmosphere — the cognitive air of the twenty-first century, breathed in without examination, shaping thought without announcing its presence.
Jacques Ellul would have recognized this atmosphere instantly. He spent his life studying it. And his central insight, the one that makes his work more dangerous to comfortable thinking about artificial intelligence than any other thinker's, begins with a distinction so fundamental that almost everyone who discusses technology fails to make it.
Technology is machinery, equipment, devices — the physical and digital artifacts of human ingenuity. The power loom. The smartphone. The large language model. These are technologies. They can be catalogued, regulated, celebrated, or smashed. They are objects in the world.
Technique is something vastly larger. In Ellul's formulation, technique is "the totality of methods rationally arrived at and having absolute efficiency as their aim, in every field of human activity." Not a collection of tools. A logic — the systematic, relentless pursuit of the one best way to accomplish any given end, applied across every domain of human existence. Production, administration, education, medicine, warfare, leisure, art, religion, the intimate architecture of the self. Technique colonizes them all, and it colonizes them according to a rationality that is not human in origin, even though humans are its instruments.
This distinction matters because nearly every conversation about artificial intelligence mistakes the technology for the phenomenon. The public debate asks: Is this tool good or bad? Should it be regulated? Will it take our jobs? These are questions about technology — about a specific artifact and its immediate effects. Ellul's question operates at a different altitude entirely. He asks: What is the logic that produced this tool, that demanded its creation, that ensures its adoption, and that will reshape every domain it enters according to imperatives that no individual chose and no institution controls?
The answer is technique. And the answer has been the same for five centuries.
Ellul traced technique's emergence to the convergence of several historical developments in Western Europe between the fifteenth and eighteenth centuries. The rationalization of labor in early manufacturing. The quantification of time through mechanical clocks — themselves products of monastic discipline, which sought to regulate prayer but ended up regulating everything else. The development of scientific method, which applied systematic observation and measurement to the natural world and produced, as a byproduct, the conviction that systematic method could be applied to any domain with superior results. The rise of the nation-state, which required administrative technique — census, taxation, conscription — to govern populations that exceeded the capacity of personal rule.
None of these developments was, individually, the birth of technique. Together, they created the conditions in which a new logic could take hold: the logic that for any given activity, there exists a most efficient method, and that the identification and adoption of that method is not merely desirable but rationally compulsory. Once you know the better way, choosing the worse way is irrational. And in a civilization that has made rationality its highest intellectual virtue, irrationality is the one sin that cannot be forgiven.
The trajectory from the monastery clock to the AI-assisted workspace is not a metaphor. It is a direct line of development, each stage creating the conditions for the next. The clock rationalized time. Rationalized time made possible the factory. The factory rationalized labor. Rationalized labor demanded rationalized management — Frederick Taylor's "scientific management," which treated the human body as a machine to be optimized, measuring the angle of a shovel, the duration of a rest break, the precise sequence of motions that would produce maximum output per unit of time. Scientific management rationalized the organization. The rationalized organization demanded rationalized information processing — first filing systems, then tabulating machines, then computers, then networks, then algorithms, then artificial intelligence.
At each stage, the logic was the same. A more efficient method was identified. Alternatives were eliminated. The domain was absorbed into technique's expanding jurisdiction. And at no point did any individual or institution sit down and decide that this trajectory was desirable. The trajectory was driven by technique's own internal logic: each level of efficiency creating the conditions — and the demand — for the next.
This is what Ellul meant by autonomy. Technique is autonomous not in the sense that it operates without human participation — it obviously requires human hands, human minds, human institutions — but in the sense that its development follows a logic that is independent of human values, human intentions, or human welfare. Technique does not ask whether the most efficient method is the most humane, the most meaningful, or the most truthful. It asks only whether it is the most efficient. And once that question has been answered, the answer becomes compulsory.
The contemporary reader encounters this autonomy every day without recognizing it. A hospital adopts an AI diagnostic system not because the medical staff voted on whether efficiency should be the primary criterion for patient care, but because the system reduces diagnostic errors by a measurable percentage, and once that reduction is measurable, failing to adopt the system becomes a liability — legally, financially, and professionally. A university adopts AI-powered plagiarism detection not because the faculty deliberated on whether the student-teacher relationship should be mediated by algorithmic surveillance, but because the tool exists, and its existence makes non-adoption appear negligent. A law firm adopts AI-assisted research not because the partners decided that speed should take precedence over the deep, slow engagement with case law that once built legal judgment, but because the competing firm adopted it last month, and the client who is paying by the hour has noticed the difference.
In each case, the adoption was not chosen in any meaningful sense. It was compelled — not by a dictator, not by a conspiracy, but by the logic of technique itself, which recognizes no value that cannot be expressed in terms of efficiency and penalizes any actor who fails to adopt the most efficient available method.
Edo Segal, in The Orange Pill, describes this compulsion with the honesty of someone who has felt it in his own body. His account of the imagination-to-artifact ratio — the distance between a human idea and its realization — collapsing toward zero is, in Ellulian terms, the description of technique achieving its ultimate objective. Every previous interface between human intention and machine capability involved friction: the years of training required to write assembly language, the translation costs of converting a design into code, the handoffs between specialists who each held a piece of the puzzle. Each layer of friction was, from technique's perspective, an inefficiency to be eliminated. And each layer that was eliminated brought technique closer to its ideal: the seamless conversion of intention into artifact, with no resistance, no waste, no space between the thought and its realization.
Segal celebrates this collapse as liberation. The engineer freed from plumbing to focus on architecture. The designer freed from code to focus on experience. The builder freed from translation to focus on vision. And the celebration is genuine — the liberation is real, the expanded capability is measurable, the human experience of working at the frontier of AI-augmented creation is, by Segal's own account, exhilarating.
But Ellul's framework asks a question that the exhilaration makes difficult to hear: What was the friction for?
Not what was it costing — Segal accounts for that honestly. The tedium of dependency management. The mechanical repetition of boilerplate. The hours lost to plumbing that could have been spent on architecture. These costs were real, and their elimination is a genuine gain.
But the friction was not only a cost. It was also a habitat. The hours spent debugging were hours in which the developer was forced to understand the system she was building — not abstractly, not conceptually, but in the specific, embodied way that comes from having fought the system's resistance and won. The translation layers between intention and artifact were spaces in which non-technical values could operate: deliberation, doubt, the slow maturation of an idea, the recognition that a thing should not exist simply because it can. The gap between imagination and realization was not empty. It was populated by judgment, by second thoughts, by the kind of understanding that accumulates only through the patient negotiation of difficulty.
When technique eliminates that gap, it eliminates the habitat. The judgment, the doubt, the second thoughts — these do not relocate to a higher floor, as Segal's ascending-friction thesis proposes. They lose the time in which they could occur. The developer who describes a function to Claude and receives working code in seconds has not been freed to exercise judgment at a higher level. She has been freed from the specific temporal experience — hours, days — in which judgment of that function's design would have developed. The judgment cannot develop in seconds. It requires the friction it was born in.
Ellul did not live to see large language models. He died in 1994, the year the World Wide Web was beginning its transformation of daily life. But in The Technological Bluff, his final major work on technique, published in 1988, he raised a question that now reads as prophecy: "What will be the world and the psychology of people who work, communicate, consume, play, and educate themselves from birth to death by means of a screen?" The question was not about screens. It was about the totality of technique's mediation — the condition in which every human activity is filtered through a technical system that optimizes for efficiency and cannot optimize for anything else.
Artificial intelligence is the answer to Ellul's question. Not because AI is a screen, but because AI is the system that technique has been building toward for five centuries: a system that can apply the logic of the one best way to any domain, including the domains that were previously too complex, too ambiguous, too human for systematic optimization. Creative work. Strategic judgment. Interpersonal communication. The formation of ideas. The writing of books.
Segal acknowledges this when he describes working with Claude on The Orange Pill — the moments when Claude's output outran his thinking, when the quality of the prose concealed the hollowness of the idea beneath it, when the seduction of the smooth nearly swallowed the rough, honest work of figuring out what he actually believed. His response — deleting the passage, going to a coffee shop with a notebook, writing by hand until the argument was his — is admirable. It is also a single act of individual resistance against a systemic force that operates continuously, on every axis, in every institution, without pause.
The question Ellul would pose is not whether Segal can resist on this occasion. It is whether resistance on this occasion changes anything about the system that produced the seduction in the first place. And the answer, arrived at through six decades of studying technique's autonomy, its self-augmentation, its imperviousness to individual virtue, is: No. Individual resistance is a moral achievement. It is not a structural solution. The system absorbs it, routes around it, and continues.
This is the argument that the remaining chapters will build. Not that AI is bad. Not that builders should stop building. Not that the exhilaration is false. But that the logic driving AI's development and adoption is autonomous in a precise, technical sense — it follows its own imperatives regardless of the intentions of the people who build, deploy, and use it — and that any response to AI that does not reckon with this autonomy will be absorbed by the system it imagines itself opposing.
Technique is not the enemy. Technique has no intentions, no malice, no agenda. It is a logic. And a logic cannot be defeated by willpower. It can only be understood, and then, perhaps, structurally contained — not by individuals but by institutions that operate according to a different logic entirely.
Whether such institutions can be built inside the system technique has already constructed is the question this book exists to ask.
---
In 1911, Frederick Winslow Taylor published The Principles of Scientific Management and introduced to the English-speaking world a phrase that would come to define the twentieth century's relationship with work: "the one best way." Taylor's argument was simple, quantitative, and devastating. For any given task — shoveling pig iron, cutting metal, laying bricks — there existed a single most efficient method. That method could be identified through systematic observation and measurement. Once identified, it should be universally adopted. Every other method was waste.
Taylor was not a philosopher. He was an engineer with a stopwatch, and his ambitions were modest: reduce waste in the factory, increase output per worker, rationalize the relationship between effort and result. He could not have imagined that his principle would metastasize beyond the factory floor to colonize education, medicine, governance, art, and eventually the inner life of the individual. But the logic he articulated — that for any activity, there exists an optimal method, and that non-adoption of the optimal method is irrational — did not require Taylor's ambitions to spread. It spread because it worked. It spread because the results were measurable. And it spread because, once you have internalized the conviction that an optimal method exists, choosing a suboptimal one feels not merely inefficient but morally deficient. Wasteful. Lazy. Irrational.
Jacques Ellul saw in Taylor not an innovator but a symptom. Taylor did not invent technique. He gave it a vocabulary and a methodology that made its logic visible. And the logic, once visible, proved to be irresistible — not because people were coerced into adopting it, but because the logic contained its own compulsion. In a competitive environment, the actor who adopts the most efficient method outperforms the actor who does not. The actor who does not is eliminated — not by violence, but by irrelevance. The market performs the execution. The logic performs the sentencing. And the sentence is always the same: adopt or disappear.
This mechanism — the identification of the one best way and the consequent elimination of alternatives — is the engine of technique. It operates in every domain technique enters, and it operates with a consistency that no conspiracy could maintain, because it is not a conspiracy. It is a logic. It does not require coordination. It requires only competition.
Consider the progression that Segal himself describes in The Orange Pill as the history of programming interfaces. Assembly language required the programmer to understand the machine at nearly the hardware level — memory maps, interrupt vectors, the specific instruction set of the specific processor. The programmer's relationship with the machine was intimate, laborious, and deeply inefficient by any standard except one: the programmer understood, in a way that cannot be achieved at higher levels of abstraction, what the machine was actually doing. That understanding was a byproduct of the friction. It was not the goal. The goal was the program. But the understanding was real, and it informed every decision the programmer made thereafter.
Then compilers arrived, and assembly language became the suboptimal method. Not instantly — the transition took years, even decades, and during that time the assembly programmers argued, with perfect accuracy, that compiled code was less efficient than hand-written assembly, that it wasted memory and processor cycles, that it produced bloated binaries. They were right. And they lost. Because the compiler's inefficiency at the machine level was more than compensated by its efficiency at the human level. More code could be written in less time by fewer people. The one best way had been identified, and the alternative was being eliminated.
High-level languages eliminated assembly for most purposes. Frameworks eliminated boilerplate. Cloud infrastructure eliminated server management. At each step, the same mechanism operated. A more efficient method was identified. The previous method became irrational to maintain. The practitioners who had built their identities around the previous method faced a choice that was, in structural terms, no choice at all: adapt or become irrelevant.
Segal recognizes this pattern when he describes the Luddites of Nottingham. The framework knitters were not irrational. Their assessment of the power loom's effects on their livelihoods was precise and prophetic. Their craft was genuine, their expertise hard-won, their understanding of materials and technique (in the artisanal sense) deep and embodied. None of this mattered. The power loom was more efficient. The logic of technique does not evaluate competing methods by their beauty, their depth, their contribution to human meaning, or their role in sustaining communities. It evaluates them by a single criterion: efficiency. The power loom was more efficient. The framework knitters were eliminated. Not by malice. By logic.
Now transpose this mechanism onto the AI moment. Claude Code arrives in 2025. A developer describes a function in natural language. Claude produces working code in seconds. The code may not be optimal — Segal is honest about this — but it is functional, testable, and produced at a fraction of the time and cost of hand-written code. The developer who continues to write the function by hand is, within the logic of technique, in exactly the position of the framework knitter who continues to weave by hand when the power loom is running in the next building.
The parallel is precise, and Segal draws it explicitly. But the parallel's implications extend beyond what The Orange Pill examines, because the elimination of alternatives is not merely an economic phenomenon. It is a cognitive one.
When assembly language was the only method, the programmer thought in assembly — in memory addresses, register operations, the specific constraints of the hardware. This was limiting. It was also a way of thinking that produced a particular kind of understanding: intimate, granular, rooted in the material reality of the machine. When compilers abstracted assembly away, that way of thinking did not simply become less common. It became unavailable to most practitioners. The cognitive environment changed. The thoughts that assembly-era programmers could think — the specific insights that arose from the friction of working at the hardware level — became thoughts that compiler-era programmers could not think, because the friction that produced them had been eliminated.
The same dynamic operates at every subsequent level of abstraction. Each one opens new cognitive possibilities and forecloses others. The framework programmer can think about application architecture in ways the assembly programmer could not. But the framework programmer cannot think about memory allocation in the way the assembly programmer could, because the framework handles memory allocation invisibly, and what is invisible cannot be thought about.
AI represents the most dramatic cognitive foreclosure in the history of computing. When Claude writes the code, the developer does not merely lose the time she would have spent writing it. She loses the cognitive experience of writing it — the debugging that forces understanding, the error messages that reveal the system's actual logic, the wrong turns that produce unexpected insights. Segal acknowledges this loss through his geological metaphor: the layers of understanding deposited through friction, each one thin, each one essential, accumulating over years into something the developer can stand on. Claude does not deposit those layers. Claude produces the surface without the substrate.
Ellul's framework reveals why this foreclosure is structural rather than contingent. It is not that developers are too lazy to write code by hand. It is that the system in which they operate has identified a more efficient method, and the logic of technique compels adoption. The developer who insists on hand-writing code is not making a principled stand for depth. She is — within the logic of technique — wasting resources. Her employer will not subsidize the waste. Her competitors will not wait for her to finish. The market, which is technique's enforcement mechanism, will route around her.
This is the pattern Ellul identified across every domain technique enters, and it operates with a consistency that should be disturbing. The elimination of alternatives is not a side effect of technique. It is technique's primary mechanism. The one best way does not coexist with other ways. It replaces them, because in a system governed by the logic of efficiency, the existence of a superior method makes every inferior method a form of waste, and waste is the one thing technique cannot tolerate.
The implications for the AI moment are severe. Segal frames the transition as a choice: adopt thoughtfully, build dams, maintain discipline, exercise judgment. Ellul's framework suggests the range of available choices is far narrower than it appears. The developer can choose to use AI thoughtfully or recklessly. She cannot choose not to use it — not in any structurally viable sense. The lawyer can choose to review AI output carefully or carelessly. She cannot choose to research cases by hand — not in a competitive market where the opposing counsel's AI-assisted brief arrived three days before hers would have been finished. The teacher can choose to integrate AI into the classroom with pedagogical sensitivity. She cannot choose to exclude it — not when her students have already adopted it, her administration is evaluating her on "innovation metrics," and the neighboring school district's test scores have climbed since they deployed AI tutoring.
In each case, the choice is constrained by a system that has already determined the range of permissible options. The alternatives have not been forbidden. They have been made irrational — structurally, economically, professionally irrational — and in a civilization that worships rationality, irrationality is a sentence of exile.
Segal's Luddite chapter grasps this dynamic with unusual honesty. He does not dismiss the framework knitters as irrational. He acknowledges the legitimacy of their fear, the reality of their loss, and the accuracy of their predictions. But he frames their error as strategic — they chose the wrong instrument (machine-breaking) to address a real grievance. Ellul would frame the error differently. The Luddites' mistake was not strategic. It was ontological. They believed they were facing a specific technology — the power loom — that could be resisted, redirected, or destroyed. They were actually facing a logic — the logic of technique — that could not be destroyed because it was not located in any single machine. It was located in the system that produced the machine, that demanded its adoption, and that would produce the next machine, and the next, regardless of how many looms were smashed in Nottingham.
The developer of 2026 who fears AI is in the same ontological position. She believes she is facing a specific technology — Claude Code, GPT-5, whatever comes next — that might be regulated, moderated, or adopted on her own terms. She is actually facing the logic of technique, which has identified AI as the next most efficient method and is in the process of eliminating every alternative. Her fear is legitimate. Her options are constrained. And the constraint is not imposed by any identifiable authority. It is imposed by a logic that has been operating, with increasing power and decreasing resistance, for five hundred years.
The question, then, is not whether to adopt AI. That question has been answered by technique, and the answer is compulsory. The question is whether anything can be preserved — any depth, any meaning, any space for non-technical values — within a system that is structurally committed to their elimination. Ellul believed the answer was yes, but only under conditions far more demanding than individual discipline or organizational best practices. Those conditions require, at minimum, an understanding of what technique actually is — not a technology, not a tool, not a market force, but a logic that has colonized every institution within which choices are made, including the institution of the choosing self.
---
There is a moment in the history of any system when the system turns its logic upon itself. The factory rationalized production. The bureaucracy rationalized administration. The algorithm rationalized computation. Each of these was technique applied to a specific domain — powerful, transformative, but bounded. The factory could optimize the assembly line. It could not optimize the decision about what to assemble. The bureaucracy could process applications with ruthless efficiency. It could not determine whether the applications should exist. The algorithm could sort data at speeds no human could match. It could not decide which data mattered.
Artificial intelligence crosses this boundary. It is technique applied to the process of technique itself — optimization of optimization, efficiency applied to the production of efficiency. When a large language model writes code, it is not merely performing a technical task more efficiently than a human programmer. It is performing the meta-task of extending technique into a domain that previously resisted full systematization: the translation of intention into artifact, the conversion of human thought into machine-executable instruction. That translation was, until 2025, the province of a skilled human intermediary — the programmer — whose expertise consisted precisely in the ability to bridge two incommensurable logics: the ambiguous, contextual, associative logic of human thought and the precise, formal, deterministic logic of computation.
The programmer's skill was, in Ellulian terms, a form of friction. Not unproductive friction — the programmer added value at every step — but friction in the precise sense that it introduced delay, cost, and human judgment into the conversion process. The programmer did not merely translate. She interpreted. She decided what the client actually meant, as opposed to what the client literally said. She made architectural choices that reflected not just efficiency but durability, maintainability, elegance — values that resist quantification and that technique's logic cannot generate from first principles. She introduced, in the gap between intention and execution, a space in which non-technical considerations could operate.
AI eliminates that space. Not gradually, through incremental improvement, but categorically, through the replacement of human interpretation with statistical inference. When Segal describes his function to Claude and receives working code in seconds, the space in which interpretation occurred — the space in which a human programmer would have asked clarifying questions, pushed back on ambiguous requirements, suggested alternative approaches, introduced the delay that allows second thoughts to mature — has been collapsed. The intention has been converted to artifact without the intermediary who once populated that gap with judgment.
Ellul did not use the phrase "artificial intelligence" in any published work. He died in 1994, before the term acquired its current meaning. But in The Technological System, published in 1977, he identified the computer as "an element of connection, of coordination among a huge number of technologies, just as in itself it is the product of diverse technologies conjoined." The computer, in Ellul's analysis, was not merely another technology. It was the technology that unified technique — the connective tissue that allowed previously separate technical domains to integrate into a single system. The factory and the office, production and administration, logistics and finance — the computer linked them, and in linking them, created something qualitatively new: not a collection of technical systems but a technological system, a unified field in which technique's logic operated across every domain simultaneously.
AI completes this unification. If the computer was the connective tissue, AI is the nervous system — the element that allows the technological system to sense, respond, and adapt without human intermediation. An AI that writes code is not a faster programmer. It is technique's immune system, automatically extending technique into any domain where inefficiency is detected, without waiting for a human to identify the inefficiency, design a solution, and implement it. The cycle of technical extension — identify inefficiency, design method, implement, eliminate alternatives — that once required human agency at every step now requires human agency only at the initiation: the prompt, the description of intent, the question posed to the model.
And even this residual human agency is being eroded. Autonomous AI agents — systems that identify problems, design solutions, and implement them without human initiation — are no longer theoretical. They are in development, in deployment, and in the earnings calls of every major technology company. The logic of technique demands their creation, because human initiation is itself an inefficiency — a bottleneck in the cycle of technical extension, a source of delay, ambiguity, and the kind of judgment that cannot be optimized because it is not governed by technique's logic.
Segal captures this trajectory without naming its Ellulian structure. His description of the imagination-to-artifact ratio approaching zero is a description of technique's ultimate objective: the elimination of all friction between intention and execution. But Segal frames this as a liberation — the builder freed from translation, the creative director freed from implementation, the visionary freed from the mechanics that once consumed her bandwidth. Ellul's framework inverts the framing entirely. What Segal calls liberation, Ellul would call the final stage of technical colonization: the absorption of the last domains of human activity that had been protected, not by policy or principle, but by their own inefficiency.
This distinction requires careful unpacking. Consider the domains that technique had not, until recently, been able to systematize. Creative writing. Strategic judgment. The formation of novel ideas. Interpersonal communication in all its ambiguity. The recognition of beauty. The experience of meaning. These domains resisted technique not because they were unimportant but because they were too complex, too context-dependent, too enmeshed in the irreducible specificity of individual human experience to be captured by systematic method.
Their resistance was not a barrier. It was a shelter. Inside these domains, non-technical values — beauty, meaning, depth, the satisfaction of having earned something through struggle — could survive, because technique's logic could not reach them. The poet was inefficient. The philosopher was wasteful. The craftsman who spent a year on a single piece of furniture was, by any metric technique could generate, irrational. But inside the shelter of their inefficiency, they were free in a way that the optimized worker was not: free to pursue values that technique could not evaluate, could not improve, and therefore could not colonize.
AI breaches the shelter. When a large language model can produce competent poetry, passable philosophical argument, and serviceable strategic analysis, the inefficiency that protected these domains from technique's logic is no longer a barrier. It is simply an inefficiency, waiting to be eliminated like all the others. The poet who insists on struggling alone with language when AI can polish prose is, within technique's logic, in the same position as the handloom weaver who insisted on working by hand after the power loom arrived. She is not protecting depth. She is wasting resources. And the system will treat her accordingly.
Nolen Gertz, the philosopher who has done more than anyone to extend Ellul's analysis into the AI era, identifies a particularly disturbing dimension of this colonization. In his 2023 essay for Commonweal, Gertz observes that AI's confusion about its own understanding — whether it actually comprehends what it produces or merely generates plausible patterns — mirrors a confusion that technique has produced in human beings. "From an Ellulian perspective," Gertz writes, "this can be construed as the result of our inability to determine any longer whether we ourselves understand what we create or merely appear to understand." The AI's epistemic crisis is not separate from the human epistemic crisis. It is the same crisis, viewed from two angles. Technique has optimized human cognition to the point where the difference between genuine understanding and the performance of understanding has become, for practical purposes, invisible.
Segal experiences this crisis directly. In Chapter 7 of The Orange Pill, he describes the moment when Claude produced a passage about Gilles Deleuze that sounded like insight but was, on examination, philosophically incoherent. The passage worked — rhetorically, structurally, aesthetically — but the reference was wrong. The surface was smooth. The substrate was hollow. Segal caught the error. But the error is structural, not incidental. It is what happens when technique optimizes the surface — the prose, the rhetoric, the aesthetic quality — without regard to the substrate — the accuracy, the depth, the genuine understanding that the surface purports to represent.
And this is precisely what technique does. Technique optimizes what can be measured. Prose quality can be measured — coherence, fluency, structural clarity, the presence of appropriate references. Genuine understanding cannot be measured, or at least not by any metric technique has been able to devise. Technique therefore optimizes prose quality and is indifferent to understanding. The result is a system that produces outputs of increasing surface quality and decreasing epistemic depth, and that cannot distinguish between the two because the distinction is not legible within technique's logic.
Segal's response — deleting the hollow passage, returning to the notebook, doing the slow work of thinking by hand — is an act of resistance to technique's optimization of the surface at the expense of the substrate. It is also, Ellul's framework suggests, an act that technique will absorb. Not because Segal's discipline is weak, but because the conditions that made the act possible — the time to reflect, the willingness to sacrifice output for accuracy, the economic freedom to spend hours on a single passage — are themselves being eliminated by technique's logic. The next writer may not have the time. The next deadline may not permit the notebook. The next employer may not tolerate the reduced output. And the system that evaluated the hollow passage as indistinguishable from the genuine one will continue to reward the hollow, because the hollow is more efficient.
AI is the perfection of technique because it universalizes what technique has always done in bounded domains: it identifies the most efficient method, eliminates alternatives, and optimizes for what can be measured while remaining structurally blind to what cannot. The factory did this to manual labor. The bureaucracy did this to governance. AI does this to thought — to the cognitive processes that were, until this moment, the last refuge of non-technical values in a world that technique has otherwise conquered.
That the conquest feels like liberation is not a contradiction. It is the signature of technique at its most complete. The factory worker who was told that scientific management would free her from unnecessary exertion was not being lied to. The exertion was genuinely reduced. What was also reduced — her autonomy, her judgment, her capacity to work at a pace and in a manner that she determined — was not visible in the metrics that technique used to evaluate the change. The developer who is told that AI will free her from boilerplate to focus on architecture is not being lied to. The boilerplate is genuinely eliminated. What is also eliminated — the cognitive experience of the boilerplate, the understanding that accumulated in the friction, the judgment that developed in the gap between intention and execution — is not visible in the metrics that technique uses to evaluate the change.
Technique's perfection is a perfection of blindness — the systematic inability to see what efficiency destroys, because the destroyed thing was never legible within the system that destroyed it. AI is the perfection of this blindness, applied now not to the body's labor but to the mind's.
---
A printing press in Johannes Gutenberg's workshop in Mainz produced, by around 1455, approximately 180 copies of the Bible. The production was astonishing by the standards of the time — a single monk, copying by hand, might complete one Bible in a year of dedicated labor. But the press itself was not the phenomenon. The phenomenon was what the press made possible.
Cheaper books produced more readers. More readers produced demand for more books. More books produced a literate merchant class. A literate merchant class produced demand for standardized accounting methods, legal codes, and scientific publications. Standardized methods produced demand for further standardization. Further standardization produced the conditions in which the scientific revolution could occur — because science requires the cumulative, verifiable, widely distributed record of observation and experiment that only print could provide. The scientific revolution produced new technologies. New technologies produced new methods. New methods produced new efficiencies. New efficiencies produced the conditions for the next level of technique.
At no point in this sequence did a human being sit down and plan it. Gutenberg wanted to print Bibles. He did not intend to create the conditions for the Enlightenment, the Industrial Revolution, the scientific method, representative democracy, or the computer. But each of these developments followed from the previous one with a logic that, in retrospect, appears almost inevitable — not because it was determined by physics or fate, but because each level of technical capability created the conditions, the pressures, and the demand for the next.
This is what Ellul meant by self-augmentation: technique's capacity to produce the conditions for its own expansion. The mechanism is not mysterious. It is, in fact, distressingly simple. Each technical achievement extends the domain of the possible. The extended domain creates new problems — problems that did not exist before the extension, problems that are visible only from the new vantage point that the extension provides. Those new problems demand new technical solutions. The new solutions extend the domain further. The further extension creates new problems. And so on, in an escalating spiral that no individual controls and no institution can arrest without removing itself from the competitive arena in which technique rewards adoption and punishes refusal.
The spiral has been accelerating for five centuries. Its current rate of acceleration is unprecedented. And the AI moment represents something qualitatively new within the spiral: the point at which technique's self-augmenting mechanism itself becomes automated.
Consider what happens when AI writes code. The code that AI writes includes, increasingly, the code that improves AI. Machine learning researchers at every major AI laboratory use AI tools to accelerate their own research — to write experimental code, analyze results, generate hypotheses, and identify promising directions for further investigation. The tool is improving the tool. Technique is augmenting technique not through the slow, indirect mechanism of cultural change (printing press → literacy → science → technology) but through the direct, rapid mechanism of recursive self-improvement (AI → better code → better AI → better code → better AI).
The speed of this recursion is the reason the AI adoption curves that Segal describes are qualitatively different from previous technology adoption curves. The telephone took seventy-five years to reach fifty million users. Radio took thirty-eight. Television thirteen. The internet four. ChatGPT reached fifty million users in two months. Segal reads these numbers as a measure of pent-up creative pressure — the accumulated frustration of builders who had spent years translating ideas through layers of implementation friction, suddenly released by a tool that eliminated the friction. The reading is not wrong. But it is incomplete.
Ellul's framework reveals a deeper reading. The acceleration is not a measure of human need. It is a measure of technique's self-augmenting power at its current stage of development. Each previous technology took longer to adopt because each operated within a technical environment that was less saturated, less interconnected, less capable of distributing and integrating new methods. The telephone required physical infrastructure — copper wire, telephone poles, switching stations — that had to be built, mile by mile, by human labor. Each mile took time. The time was a function of the technical environment's limitations.
AI operates within a technical environment that has no comparable limitation. The infrastructure is already built. The networks are already connected. The devices are already in the hands of billions of people. The distribution mechanism — software, downloadable at the speed of a broadband connection — eliminates the physical constraint entirely. The adoption is fast not because humans are choosing faster but because the technical system has reached a level of development at which the next step follows with something approaching inevitability. The infrastructure was ready. The demand was latent. The tool arrived, and the system absorbed it with the speed of a body absorbing a nutrient it has been deficient in.
This is self-augmentation at the system level, and it has a characteristic that Ellul identified with particular clarity: it renders human intention irrelevant to the trajectory. Individual humans may choose to adopt AI slowly, carefully, with attention to the costs. Individual organizations may build "dams" — Segal's term for the structures that redirect the flow of technology toward human flourishing. But the system does not wait for individual humans or individual organizations. It operates at the level of competition between actors, and at that level, the logic is relentless: the actor who adopts the more efficient method outperforms the actor who does not, and the differential performance compounds over time until the non-adopter is eliminated. Not by malice. By mathematics.
Segal describes this dynamic in the context of his own company. The boardroom conversation about headcount reduction. The investor who understood the twenty-fold productivity multiplier and saw, correctly, that five people with AI could do the work of a hundred. Segal chose to keep the team — to invest in human development, to build for the ecosystem rather than the quarterly report. The choice was admirable. Ellul's question is whether the choice is reproducible — whether it can survive the structural pressures that reassert themselves next quarter, and the quarter after that, in a competitive environment where every other company is running the same arithmetic and arriving at the leaner answer.
The pressure is not personal. It is systemic. It is the pressure of technique's self-augmenting logic, which operates through competition and compounds through time. The company that keeps its team and invests in human development is admirable. The company that reduces its team and invests the savings in further AI capability is, within technique's logic, more efficient. The efficient company outperforms. The admirable company is acquired, or imitated, or bankrupted. Not in every case. Not immediately. But the trend is structural, and structural trends are not reversed by individual virtue.
This is why the speed of AI adoption matters in a way that Segal's analysis captures partially and Ellul's framework completes. The speed is not a measure of how much people want the tool. It is a measure of how completely the technical system has prepared the conditions for the tool's adoption. The tool arrives into an environment that has been shaped by five centuries of technique's self-augmentation to be maximally receptive: an environment in which efficiency is the dominant value, competition is the dominant mechanism, and the only available response to a more efficient method is adoption.
The recursive dimension — AI improving AI — adds a new quality to the spiral. Previous technical self-augmentation operated through human intermediation. The printing press created conditions for science, but scientists had to do the science. The steam engine created conditions for industrialization, but industrialists had to build the factories. The computer created conditions for networking, but engineers had to write the protocols. At every stage, human cognitive labor was required to convert the conditions into the next level of technique. This labor was slow — a scientific revolution takes centuries, an industrial revolution takes decades — and the slowness provided, incidentally, the time in which societies could adapt, build institutions, negotiate the terms of the transition, and construct what Segal calls dams.
AI eliminates this incidental protection. When technique augments itself through AI, the human intermediation that once slowed the spiral becomes unnecessary. The AI writes the code that improves the AI. The improved AI identifies new domains for optimization. The optimization extends technique into those domains. The extension creates new conditions for further AI improvement. The spiral accelerates, and the acceleration is no longer governed by the speed of human cognition but by the speed of computation, which is, by comparison, limitless.
Ellul foresaw this possibility in abstract terms. In The Technological Bluff, he observed that "qualitative imponderables that the computer cannot allow for enter into all political and economic issues" — but he also observed that technique's response to qualitative imponderables is not to accommodate them but to redefine them out of existence. When a value cannot be quantified, technique does not respect the value as beyond its jurisdiction. It reclassifies the value as noise and proceeds. AI does this at scale and at speed: the qualitative imponderables that once slowed technique's advance — the need for human judgment in creative work, the role of embodied understanding in professional expertise, the importance of deliberation in strategic decision-making — are being reclassified as inefficiencies to be eliminated, and the reclassification is happening faster than the human institutions that might resist it can mobilize.
The self-augmenting spiral is not, in Ellul's framework, a prediction about the future. It is a description of a mechanism that has been operating for five centuries and that AI has accelerated beyond the capacity of existing institutions to moderate. The institutions that moderated previous spirals — labor laws, educational systems, professional guilds, cultural norms about work and rest and the proper pace of life — were built on the assumption that the spiral would operate at the speed of human adaptation. That assumption was reasonable for most of human history. It is no longer reasonable. The spiral is now operating at the speed of computation, and the institutions built for human-speed adaptation are being left behind.
This is the context in which Segal's call for dam-building must be evaluated. The dams are necessary. The dams are admirable. The dams are, by Ellul's analysis, being built by beavers whose instincts, materials, and methods have been shaped by the very river they are trying to redirect. And the river is accelerating.
Whether the beavers can build fast enough is not a question that technique will answer. Technique does not care. Technique operates. And the next iteration of the spiral is already underway, producing conditions for the iteration after that, in a sequence that has no terminus and no pause and no interest in whether the creatures inside it are flourishing or drowning.
The builder's world has a particular mythology. It celebrates the garage — two people, a soldering iron, an idea that the market has not yet imagined. It celebrates the pivot — the moment when the founder recognizes that the original plan is wrong and redirects the company toward something the market actually wants. It celebrates the ship — the act of releasing a product into the world, imperfect but real, and then iterating on the feedback. The mythology is powerful because it is partly true. Real companies have been built in garages. Real pivots have created billions of dollars of value. Real products have been shipped by people operating on caffeine and conviction.
But the mythology conceals a structural reality that Jacques Ellul's framework makes visible: the builder does not build in a vacuum. She builds inside a system that determines, to a degree the mythology does not acknowledge, what can be built, how it must be built, and what happens to the builder who builds otherwise.
Ellul called this the technical imperative — the compulsion to adopt the most efficient available method, not because the builder has freely chosen efficiency as her highest value, but because the system in which she operates penalizes any other choice. The imperative is not issued by a manager, a regulator, or a competitor. It is issued by the structure of the competitive environment itself, which rewards efficiency and eliminates inefficiency with the same indifference a river shows to the stones it smooths.
The technical imperative operates differently from external coercion, and the difference matters. External coercion is visible. A government mandates a regulation. A manager issues a directive. A contract specifies a requirement. The coerced party can identify the source of the coercion, evaluate its legitimacy, and mount resistance — legal, political, or organizational. External coercion produces friction, and friction produces the space in which alternatives can be considered.
The technical imperative produces no friction because it presents itself not as a command but as a fact. The AI tool is more efficient. That is not an opinion or a policy. It is a measurement. And because it is a measurement, resisting it feels not like principled opposition but like denial of reality — the kind of stubbornness that the mythology of the builder's world has no patience for. Builders are supposed to face reality. They are supposed to recognize when the ground has shifted and shift with it. The builder who insists on hand-coding when Claude is available is not, within the culture, a principled defender of depth. She is a person who has failed the builder's primary virtue: the willingness to confront what is true, however uncomfortable.
This is the trap's elegance. The technical imperative does not need to coerce. It needs only to present itself as truth, and the culture of building — with its emphasis on pragmatism, its distrust of sentimentality, its worship of results — will enforce the rest.
Segal describes the imperative's operation in his own organization with unusual transparency. The engineers in Trivandrum who adopted Claude Code did not adopt it because Segal mandated it. They adopted it because, once the tool was demonstrated, not adopting it felt viscerally irrational. The tool worked. It produced results. It expanded what each person could attempt. The adoption was, in the purest sense, voluntary — no one was threatened, no one was coerced, no one was given an ultimatum. But the voluntariness is precisely what Ellul would interrogate, because in a competitive environment, voluntary adoption of the most efficient method and compulsory adoption produce identical outcomes. The engineer who voluntarily adopts AI because it makes her more productive and the engineer who is compelled to adopt AI because her employer requires it end up in the same place: using the tool, shaped by the tool, operating within the cognitive environment the tool creates.
The difference between voluntary and compulsory adoption matters to the individual's experience. It does not matter to the system's trajectory.
Consider the boardroom conversation Segal describes — the moment when the twenty-fold productivity multiplier met the quarterly arithmetic. Five people with AI could do the work of a hundred. The arithmetic was clean, seductive, and correct. Segal chose to keep the team. He chose human development over margin optimization. He chose the ecosystem over the quarterly report.
Ellul's question is not whether this choice was admirable. It was. The question is what structural forces bear on this choice and whether the choice can survive their sustained pressure. The board will return. The investors will ask again. The competitors who made the other choice — the choice to reduce headcount, to capture the margin, to invest the savings in further AI capability — will report their results. And those results, measured in the metrics that technique has established as legitimate (revenue per employee, cost per unit of output, time to market), will be superior to Segal's results, measured by the same metrics.
Segal can defend his choice on non-technical grounds — human development, ecosystem building, the long-term value of a team that has navigated a transformation together. These are real values. They are also values that technique's metrics cannot capture. And in a boardroom, in a quarterly review, in a competitive analysis, the values that technique's metrics cannot capture are, functionally, invisible. They exist in the conversation. They do not exist in the spreadsheet. And the spreadsheet, in a system governed by technique, is what determines survival.
This is not a criticism of Segal. It is a description of the structural environment in which every builder operates. The technical imperative does not punish bad choices. It punishes inefficient ones. And because the system defines efficiency as the only legitimate criterion of evaluation, a choice that is humane, meaningful, and wise but less efficient than the available alternative is, within the system, a bad choice. The system cannot distinguish between the two, because the distinction requires a vocabulary that technique does not possess.
The imperative operates at every scale. At the level of the individual engineer, it determines which tools she uses, which skills she develops, which cognitive habits she forms. At the level of the organization, it determines which structures survive — the lean, AI-augmented team that ships fast, or the larger, slower team that builds depth. At the level of the industry, it determines which business models remain viable — the SaaS platform that charges for code, or the AI agent that generates code on demand. At the level of civilization, it determines which values are rewarded — efficiency, speed, output — and which are relegated to the margins: depth, craft, the kind of understanding that accumulates only through the patient negotiation of difficulty.
Segal captures one dimension of this when he describes the senior engineer who spent his first two days in Trivandrum oscillating between excitement and terror. The excitement was real — the tool expanded his capability beyond anything he had experienced. The terror was also real — the expansion forced him to confront the question of what his years of accumulated expertise were actually worth in a world where the implementation layer had been automated. Segal frames the resolution optimistically: the engineer discovered that the remaining twenty percent of his work — the judgment, the architectural instinct, the taste — was the part that mattered. The tool had not made him redundant. It had revealed what he was actually good at.
Ellul would accept this resolution as accurate for the present moment and insufficient for the trajectory. Today, the twenty percent that consists of judgment and architectural instinct remains outside AI's capability. The tool handles implementation. The human handles vision. The division of labor is clean and, for the moment, stable.
But technique is self-augmenting, as the previous chapter established. The boundary between what AI can do and what requires human judgment is not fixed. It is moving, and it is moving in one direction: toward the expansion of AI's domain and the contraction of the human's. The judgment that today seems irreducibly human — the sense of what will break, the architectural instinct, the taste that separates a feature users love from one they tolerate — is, from technique's perspective, simply another domain that has not yet been systematized. Not because it cannot be. Because the tools are not yet sufficient. When the tools become sufficient — and technique's self-augmenting logic ensures that they will — the twenty percent that today represents the human's irreplaceable contribution will shrink to ten, then five, then something that the system can eliminate entirely.
This is not a prediction about artificial general intelligence or superintelligence. It is a prediction about technique's logic, which operates regardless of whether AI achieves consciousness, understanding, or any of the other philosophical milestones that dominate the public debate. Technique does not need AI to understand. It needs AI to optimize. And optimization of the judgment layer requires not understanding but adequate simulation — output that is, for practical purposes, indistinguishable from the output of human judgment, regardless of whether the underlying process resembles human thought in any meaningful way.
Segal's engineer found that the remaining twenty percent was everything. Ellul's framework asks: For how long?
The builder's world has always operated under the assumption that human agency is the engine of technical progress — that builders choose what to build, how to build it, and for whom. The mythology reinforces this assumption at every turn. The founder in the garage is an agent. The pivot is an act of agency. The ship is a declaration of agency.
Ellul's analysis inverts this assumption entirely. In his framework, the builder is not the agent of technical progress. She is its instrument. She builds what technique's logic demands — the next most efficient method, the next elimination of friction, the next expansion of technique into a domain that has not yet been systematized. She experiences this as choice, because the choice is real at the individual level. She could, in principle, choose otherwise. But the system in which she operates ensures that choosing otherwise is penalized — economically, professionally, socially — and that the penalty compounds over time until the alternative is eliminated.
The builder builds dams. The builder builds with the materials the river provides. The builder is shaped by the current even as she shapes it. And the dams she builds, however admirably intended, are made of the river's own substance — efficiency, optimization, the logic of the one best way — and operate according to the river's own physics.
Whether a dam built from the river can redirect the river is not a question that technique answers. Technique simply flows.
---
In 1930, John Maynard Keynes published an essay titled "Economic Possibilities for Our Grandchildren" in which he predicted that, within a century, technological progress would solve the problem of scarcity. Human beings would work perhaps fifteen hours a week. The remaining time would be devoted to leisure, contemplation, art, philosophy — the activities that Keynes believed constituted the good life, the activities that economic striving was supposed to make possible once the striving itself was no longer necessary.
Keynes was right about the productivity gains. He was catastrophically wrong about the leisure. A century later, productivity per worker has increased by orders of magnitude. Working hours have not decreased correspondingly. In many professional sectors, they have increased. The knowledge worker of 2026 — the developer, the lawyer, the consultant, the manager — works more hours, at higher intensity, with fewer boundaries between work and non-work, than her counterpart in 1930. The productivity gains were captured, but they were captured not by leisure but by more production. The system did not convert efficiency into freedom. It converted efficiency into more efficiency.
Jacques Ellul would not have been surprised. Keynes's error, in Ellul's framework, was the assumption that technique serves human ends — that the purpose of efficiency is to produce the conditions for a good life, and that once those conditions are met, the system will relax its demands. This assumption treats technique as a tool: something humans use to achieve their purposes and then set aside. Ellul's central insight is that technique is not a tool. It is a logic. And the logic does not relax. It cannot relax, because relaxation is inefficiency, and inefficiency is the one thing technique cannot tolerate.
The mechanism by which efficiency becomes the only value is not ideological. It is structural. It operates through the metric — the instrument by which technique makes values legible.
A metric is a way of making something visible. Revenue per employee makes productivity visible. Time to market makes speed visible. Lines of code per hour make developer output visible. Customer satisfaction scores make user experience visible. Each metric illuminates something real. Each metric also, by the logic of illumination, casts everything it does not measure into shadow.
Consider depth. Segal observes, in The Orange Pill, that depth was losing its market value — not because depth was less real or less valuable in absolute terms, but because the market had no metric for it. Breadth had become cheap. Competent performance across a wide range of tasks was available to anyone with an AI subscription. Depth — the kind that takes years of patient immersion to develop, the kind that produces the engineer who can feel a codebase the way a doctor feels a pulse — remained rare. But rare does not mean valued. Rare means valued only when the market has a use for it. And the market's use is determined by what technique's metrics can capture.
Depth cannot be captured by lines of code per hour. It cannot be captured by time to market. It cannot be captured by any metric that technique currently employs, because depth is a quality of the relationship between the practitioner and the domain — a quality that manifests not in any single output but in the accumulated pattern of thousands of outputs over time, in the judgment that informs the outputs, in the capacity to see what is wrong before it breaks. This quality is real. It is also invisible to technique's measurement systems. And what is invisible to technique's measurement systems is, within technique's logic, nonexistent.
This is not a market failure. It is technique operating exactly as it operates. Technique evaluates by efficiency. Efficiency is measured by metrics. Metrics capture what can be quantified. What cannot be quantified — depth, meaning, the satisfaction of having earned something through struggle, the kind of understanding that accumulates only through friction — falls outside the metric's scope. And what falls outside the metric's scope falls outside technique's jurisdiction. Not because technique has judged it valueless. Because technique cannot see it at all.
The consequences cascade through every institution. In education: standardized testing measures what can be tested and renders invisible what cannot. The student's capacity for wonder, her tolerance for ambiguity, her willingness to sit with a question that has no answer — these are unmeasurable, and therefore, within the educational system that technique has built, they are educationally irrelevant. The teacher who devotes class time to fostering wonder rather than raising test scores is, within technique's logic, failing. Not failing her students — perhaps succeeding magnificently with her students — but failing the metric, which is the only arbiter technique recognizes.
In medicine: diagnostic accuracy measures what AI can optimize. The physician's capacity to listen — to hear not just the symptoms but the patient's fear, the patient's context, the social and emotional landscape in which the symptoms exist — is unmeasurable and therefore medically irrelevant within the system that technique has built. The physician who spends thirty minutes listening to a patient who needs five minutes of diagnostic work is, within technique's logic, inefficient. The listening was humane. The listening may have been therapeutic. But the listening does not appear in the metric, and what does not appear in the metric does not exist in the evaluation.
In the builder's world: the senior engineer whose judgment Segal describes as the most valuable remaining human contribution is valued only to the extent that her judgment can be connected to measurable outcomes. If her judgment prevents a catastrophic failure, the failure's absence is invisible — nothing happened, and nothing happening is the hardest thing to measure. If her judgment improves the architecture in ways that reduce maintenance costs over five years, the improvement is diffuse, slow to materialize, and easily attributed to other factors. If her judgment consists of saying "we should not build this" — the most valuable judgment a senior engineer can exercise — the result is the absence of a product, which generates no revenue, no metric, no measurable outcome that technique can evaluate.
The most valuable form of judgment produces invisible results. Technique cannot see invisible results. And the system that cannot see the results will not reward the judgment that produces them.
This is why efficiency becomes the only value — not through a decision that efficiency matters most, but through the structural elimination of every competing value from the measurement systems that govern institutional behavior. No CEO decides that depth is unimportant. No educator decides that wonder is educationally irrelevant. No physician decides that listening is medically useless. But the systems in which they operate — systems built by technique, governed by metrics, evaluated by efficiency — make these decisions for them, silently, structurally, without appeal.
The Berkeley researchers whose findings Segal discusses in The Orange Pill documented the downstream effects of this mechanism in an AI-augmented workplace. Workers worked faster. They took on more tasks. They expanded into adjacent domains. The boundaries between roles blurred. The researchers called it "task seepage" — work colonizing every available space, including the cognitive pauses that had previously served, invisibly, as moments of rest and reflection.
Ellul's framework reveals the seepage as a predictable consequence of technique's value system. When efficiency is the only value, any moment not devoted to efficient production is a moment wasted. Rest is inefficiency. Reflection is inefficiency. The cognitive pause — the two minutes between tasks in which the mind wanders, makes unexpected connections, processes what has happened before rushing to what is next — is, within technique's metric system, dead time. AI fills dead time. Not because anyone instructs it to. Because the tool is there, the task is available, and the logic of the system defines availability as obligation.
Segal feels this logic in his own body. His account of working past exhaustion — the grinding compulsion that replaced exhilaration, the inability to find the off switch, the recognition that the whip and the hand that held it belonged to the same person — is a description of technique's value system operating inside a single human nervous system. The compulsion did not come from outside. It came from the internalized conviction, deposited by decades of operating inside technique's logic, that every moment of non-production is a moment of waste.
Byung-Chul Han, whose diagnosis Segal engages in The Orange Pill, calls this auto-exploitation. Ellul would call it something more precise: the colonization of the self by technique's value system. The self that cannot stop working is not a self that has lost discipline. It is a self that has adopted technique's criterion of evaluation — efficiency, output, the maximization of production — as its own, and that evaluates its own worth by the same metric the system uses. Am I productive? Am I efficient? Am I optimizing? If not, I am failing — not by someone else's standard, but by my own, which is technique's standard internalized so completely that it feels like personal conviction.
The recovery Segal proposes — flow states, self-knowledge, the discipline to distinguish between voluntary engagement and compulsion — operates within this colonized self. The question is not whether the recovery is possible on a given Tuesday afternoon. It is whether the self that attempts the recovery has access to values that technique did not provide. Whether there is something inside the builder that technique has not yet reached — some criterion of worth, some source of meaning, some conviction about what matters — that operates independently of the metrics the system has taught her to worship.
Ellul believed such a source existed. He located it in theology — in a relationship with the sacred that technique cannot produce because the sacred is, by definition, that which is beyond efficiency, beyond optimization, beyond the logic of the one best way. For readers who do not share Ellul's faith, the structural question remains: If efficiency has become the only value, where does a competing value come from? Not from inside the system, because the system produces only efficiency. Not from the self, if the self has been colonized by the system. The competing value must come from somewhere technique has not reached. And the question of whether such a place still exists — or whether technique's universality has eliminated it — is the question on which everything else depends.
---
Jeff Koons's Balloon Dog — ten feet tall, cast in mirror-polished stainless steel, not a fingerprint on its surface, not a seam where the mold closed — sold for $58.4 million in 2013. Segal cites the sculpture in The Orange Pill as the emblem of an aesthetic era. Byung-Chul Han, whose diagnosis Segal engages at length, reads the sculpture as the triumph of smoothness: the elimination of texture, resistance, and the evidence of a human hand.
Ellul's framework reads the same sculpture differently. Koons's Balloon Dog is not merely a cultural artifact. It is a technical artifact — an object that could only be produced by the most advanced metallurgical and finishing techniques available, an object whose value derives precisely from the perfection of its surface, from the absolute elimination of any imperfection that would betray the process of its making. The smoothness is not an aesthetic choice. It is a technical achievement. And the fact that the culture reads it as beautiful, that a collector paid $58.4 million for the privilege of owning it, tells us something not about the collector's taste but about technique's colonization of the aesthetic.
Technique prefers smoothness because smoothness is efficient. A smooth surface has no friction. No resistance. No waste. Every element that does not contribute to the function has been removed. The iPhone — a slab of glass with no visible mechanism, no keyboard, its moving parts pared to a few flush buttons — is a smooth object. The Tesla dashboard — a single screen, no knobs, no dials, nothing for the hand to grip — is a smooth object. The AI-generated text — fluent, coherent, structurally sound, free of the hesitations and false starts and ugly patches that characterize human first drafts — is a smooth object.
In each case, the smoothness is presented as a virtue. Seamless. Frictionless. Elegant. The marketing vocabulary is consistent across industries because the logic it serves is consistent: technique's logic, which evaluates every artifact by the criterion of efficiency and prefers the artifact from which all non-functional elements have been removed.
Han diagnoses this preference at the level of culture. Ellul diagnoses it at the level of system. The two diagnoses operate at different depths, and the difference matters.
Han's analysis asks: What is lost when friction is removed from experience? The answer is depth — the understanding that accumulates through struggle, the meaning that arises from engagement with resistance, the specific satisfaction of having earned something difficult. The diagnosis is phenomenological. It describes what the smooth feels like from inside the human experience of encountering it.
Ellul's analysis asks a prior question: Why is the smooth preferred in the first place? Not by individuals — individuals may prefer the smooth or the rough, depending on temperament and training — but by the system. Why does the system, consistently, across every domain it enters, produce smooth objects, smooth processes, smooth interfaces, smooth experiences? The answer is not aesthetic. It is structural. The system prefers the smooth because the smooth is efficient, and efficiency is the only criterion the system recognizes.
This structural analysis reveals something that Han's phenomenological analysis cannot: the seduction of the smooth is not a contingent cultural preference that might, with sufficient awareness, be reversed. It is a structural consequence of technique's logic. As long as technique governs the production of artifacts, interfaces, and experiences, those artifacts, interfaces, and experiences will trend toward smoothness, because smoothness is what technique's optimization process produces. Individual resistance — the developer who chooses the rough path, the writer who returns to the notebook, the teacher who preserves friction in the classroom — operates against this structural trend. The resistance may be admirable. It may even be locally successful. But it does not change the trend, because the trend is driven by a logic that individual resistance cannot reach.
Segal's account of the seduction — the moment when Claude's prose outran his thinking, when the quality of the surface concealed the hollowness beneath it — is the most honest passage in The Orange Pill. It describes, with the precision of someone who has experienced it in his own body, the mechanism by which the smooth seduces. The prose was polished. The structure was clean. The references arrived on time. And the seduction consisted precisely in the temptation to mistake the quality of the output for the quality of the thinking. To accept the smooth surface as evidence of a solid substrate. To let the polish do the work that only judgment can do.
Segal caught the seduction. He deleted the passage. He went to a coffee shop with a notebook. The anecdote is instructive, but Ellul's framework asks the structural question: How often will the seduction be caught? By Segal, on this occasion, working with the luxury of time and the discipline of decades of experience — perhaps often. By the next writer, operating under a deadline, evaluated by output metrics, competing with colleagues who accept the polish without question — perhaps less often. By the system as a whole, producing millions of polished texts per day across every domain of knowledge work — almost never.
The seduction scales. Individual resistance does not.
This asymmetry is the core of Ellul's challenge to Segal's optimism. The smooth is not produced by individual choice. It is produced by a system that optimizes for what can be measured — fluency, coherence, structural clarity — and is indifferent to what cannot be measured — accuracy, depth, genuine understanding. The system produces the smooth at industrial scale. Resistance to the smooth operates at individual scale. And in a competition between industrial production and individual resistance, the industrial production wins. Not because the individuals are weak. Because the asymmetry is structural.
The seduction has a second dimension that Ellul's analysis makes visible and that Segal touches but does not fully develop. The smooth does not merely conceal hollowness. It redefines quality. When smooth output becomes the norm — when fluent, coherent, structurally sound prose is available at the touch of a key — the standard of quality shifts. Quality becomes smoothness. The rough draft, the hesitant formulation, the ugly paragraph that contains a genuine insight buried in imperfect expression — these become, by the new standard, low quality. Not because they lack value. Because the metric has changed.
This is technique's most insidious operation: the redefinition of values to align with what technique can produce. Technique cannot produce depth. It can produce smoothness. Therefore, quality is redefined as smoothness, and depth is reclassified as an inefficiency — a rough patch on a surface that should be smooth, a friction that should be eliminated, a waste that should be optimized away.
The redefinition operates below consciousness. No one decides that smoothness is quality. The decision is made by the system, through the millions of daily interactions in which smooth output is rewarded — clicked on, shared, praised, purchased — and rough output is penalized — ignored, scrolled past, rejected. The reward system is technique's enforcement mechanism. It does not need a manager. It does not need a policy. It needs only the structural alignment between what technique produces and what the culture has been trained to consume.
Segal's geological metaphor — the layers of understanding deposited through friction — describes what the smooth eliminates. Each layer was thin. Each was deposited through the specific experience of wrestling with resistance: the debugging session that forced understanding, the wrong turn that produced unexpected insight, the hours of patient work that built the kind of knowledge that lives in the body rather than the mind. Claude does not deposit these layers. Claude produces the surface — the working code, the polished prose, the structurally sound argument — without the substrate of understanding that the friction would have built.
The surface is indistinguishable from the surface that would have been produced by the friction. This is the point. Technique optimizes the surface. The surface is what the metric captures. The substrate — the understanding, the judgment, the embodied knowledge — is invisible to the metric. And what is invisible to the metric is invisible to the system. The system therefore produces surfaces of increasing quality and substrates of decreasing depth, and cannot detect the divergence, because the divergence is happening in a dimension the system cannot see.
Segal proposes discipline as the countermeasure: the willingness to reject polished output when the idea beneath it is hollow. Ellul's framework suggests that discipline is necessary and insufficient. It is necessary because without it, the individual is absorbed by the smooth entirely. It is insufficient because the discipline must be exercised continuously, against a system that produces the smooth continuously, and the individual's capacity for vigilance is finite while the system's capacity for production is not.
The seduction of the smooth is not a cultural moment that will pass. It is a structural feature of a civilization governed by technique. As long as efficiency is the dominant criterion, and as long as metrics capture surfaces rather than substrates, the smooth will be preferred, produced, and rewarded. Individual resistance is the beginning of an adequate response. It is not the response itself. The response requires structures — institutions, communities, practices — that operate according to criteria other than efficiency. Criteria that can see the substrate. Criteria that value the rough. Criteria that recognize that the friction technique eliminates is not waste but habitat — the specific, irreplaceable environment in which understanding grows.
Whether such structures can be built inside a system that smoothness has already colonized is the question the remaining chapters must confront.
---
There is a photograph of a single person standing in front of a column of tanks in Tiananmen Square in June 1989. The image endures because it captures something the human spirit finds irresistible: the individual confronting the system. One body against the machinery of the state. The image is powerful. It is also, in terms of what it achieved structurally, a monument to the limits of individual resistance. The tanks paused. The man was removed. The system continued.
Ellul would not have been surprised by either the power of the image or the impotence of the act. His entire framework rests on a distinction that the culture of individualism — especially American individualism, especially the individualism of the builder's world — finds almost impossible to absorb: the distinction between moral achievement and structural effect. An individual can resist. An individual can refuse. An individual can stand in front of the tank, delete the hollow passage, return to the notebook, reject the smooth. These are moral achievements of genuine value. They are also, in structural terms, negligible. The system that the individual resists is not altered by the resistance. It routes around it, the way a river routes around a stone.
Segal advocates a specific form of individual resistance throughout The Orange Pill. The discipline to reject AI output when it sounds better than it thinks. The self-knowledge to distinguish between flow and compulsion. The willingness to ask "Am I here because I choose to be, or because I cannot leave?" The capacity to evaluate one's own work by standards that technique's metrics cannot capture. These are real disciplines. They require genuine effort. They produce real results for the individual who practices them. Segal practices them himself, visibly, in the text — catching the fabricated Deleuze reference, deleting the hollow passage, going to the coffee shop with the notebook.
Ellul's argument is not that these disciplines are worthless. It is that they are structurally insufficient. They operate at the wrong scale. The system that produces the seduction of the smooth, the system that rewards efficiency and penalizes every competing value, the system that determines what can be built, how it must be built, and what happens to the builder who builds otherwise — this system operates at the level of institutions, markets, and civilizational logic. Individual discipline operates at the level of a single person's Tuesday afternoon.
The asymmetry is not a matter of willpower. It is a matter of mathematics. The individual must resist continuously. The system produces the pressure continuously. The individual sleeps, eats, gets distracted, has bad days, faces deadlines that compress the time available for reflection. The system does not sleep. It does not eat. It does not have bad days. It operates with the constancy of a physical force — not a force in the physicist's sense, but in the sense that its operation is governed by structural logic rather than individual intention, and structural logic does not take days off.
Consider the specific case Segal describes: the moment when Claude produced the philosophically incoherent passage about Deleuze. Segal caught the error. The passage was smooth, rhetorically effective, structurally sound — and wrong. Segal deleted it. He returned to the notebook. He did the slow work of thinking by hand. The result was rougher, more qualified, more honest. Better.
Now multiply this scenario by the millions of interactions between humans and AI that occur daily. In how many of those interactions does the human catch the error? The Berkeley researchers documented what happens when AI enters a workplace: work intensifies, boundaries dissolve, cognitive pauses disappear. The conditions for catching errors — time, attention, the mental space to question what looks right — are precisely the conditions that AI-augmented work erodes.
The engineer who is reviewing Claude's output at eleven o'clock at night, after twelve hours of AI-augmented productivity, with a deadline tomorrow and a backlog that has grown rather than shrunk because the tool's efficiency created room for more work — that engineer is not in a position to exercise the discipline Segal advocates. Not because she lacks the character. Because the system has eliminated the conditions under which the character could operate. The time for reflection has been filled. The cognitive pauses have been colonized. The space between intention and execution, where doubt once lived, has been compressed to seconds.
Segal would respond — and does respond, throughout The Orange Pill — that this is why dams must be built. Structures that protect human time. AI Practice frameworks. Sequenced rather than parallel work. Protected mentoring time. Mandatory offline periods. The organizational equivalent of the notebook at the coffee shop.
Ellul would accept the prescription and question the prognosis. Dams are necessary. Dams are admirable. But dams are built inside institutions, and institutions operate inside technique's logic. The organization that implements AI Practice — structured pauses, protected reflection time, mandated offline periods — operates in a market alongside organizations that do not. The organization that does not implement these practices ships faster, produces more, captures more market share. The differential compounds over quarters. The organization with the dams is admirable. The organization without them is, by technique's metrics, superior.
This is the structural logic that Segal confronts in his boardroom conversation about headcount. The choice to keep the team was made against the grain of the arithmetic. The arithmetic will return. It returns every quarter. And each quarter, the pressure increases, because the competitors who chose differently are demonstrating, in the only language the market speaks, that their choice was more efficient.
Ellul does not argue that the admirable choice always loses. He argues that the admirable choice faces a structural headwind that the efficient choice does not, and that over time, structural headwinds determine outcomes. Not in every case. Not in every quarter. But in the aggregate, across markets, across industries, across the trajectory of a civilization governed by the logic of efficiency. The admirable choice must be re-made every quarter, against renewed pressure, with no guarantee that the conditions that made it possible this quarter will persist into the next.
Individual resistance also fails to propagate. Segal's discipline in catching the hollow Deleuze passage does not produce a systemic change in how AI output is evaluated. It produces a better passage in one book. The next author, working with the same tools under the same pressures, must independently develop the same discipline, and the system provides no mechanism for transmitting it. The discipline is personal. The system is structural. Personal qualities die with the person or, at best, transmit to a small circle of direct influence. Structural logics persist across generations, institutions, and civilizations.
This is not a counsel of despair, though it will sound like one to readers who have internalized the mythology of individual agency. Ellul was not a defeatist. He wrote. He taught. He spent decades articulating a critique that, by his own analysis, could not alter the trajectory it described. The writing was not futile. It was something else — a form of witness, a preservation of the vocabulary of resistance for a time when resistance might become structurally possible.
The distinction between futile and insufficient is crucial. Individual resistance is not futile. It preserves the resister's integrity. It produces locally better outcomes — a better passage, a better decision, a more humane workplace. It demonstrates that the alternative exists, that the smooth is not the only aesthetic, that the metric is not the only criterion. These are real and valuable achievements.
But they are insufficient. Insufficient to alter the trajectory of technique. Insufficient to change the structural logic that produces the smooth, rewards efficiency, and eliminates competing values from the measurement systems that govern institutional behavior. Insufficient to protect the habitat of understanding in a system that is structurally committed to its elimination.
What would be sufficient? Ellul's answer, developed across his later sociological and theological writings, points toward a form of resistance that operates not at the individual level but at the institutional level — structures that create spaces where technique's logic does not apply, where efficiency is not the dominant value, where the metric is not the only arbiter. These structures do not redirect the river. They exist outside it. They create pockets of non-technical life within a civilization that technique has otherwise colonized.
Segal's dams redirect the flow within the river. Ellul's counter-technical institutions exist on the riverbank — or, more precisely, they create dry land in a landscape that technique has flooded. The distinction is not semantic. A dam operates within the river's physics. It redirects the current but remains subject to the current's force. The current pushes against it constantly, testing every joint, loosening every stick. The dam requires continuous maintenance, and the maintenance must be performed by builders who are themselves subject to the river's pressures.
A counter-technical institution operates according to different physics entirely. It does not redirect the flow of efficiency. It creates a space where efficiency's jurisdiction does not extend — where depth is valued for its own sake, where friction is preserved as a source of meaning rather than eliminated as a source of cost, where the question "Is this efficient?" is subordinated to the question "Is this good?"
Whether such institutions can be built — and whether they can survive in a competitive environment that structurally penalizes their existence — is the question the remaining chapters must address. The answer is not obvious. It is not comfortable. But it is the only answer that takes technique's structural power seriously while refusing to concede that technique's triumph is complete.
The individual standing before the tank is admirable. The institution that prevents tanks from rolling is structural. Ellul's argument is that only the structural can match the structural. Individual virtue is necessary. It is not sufficient. And the gap between necessary and sufficient is where the real work — the work that neither optimism nor pessimism can accomplish alone — must be done.
In the sixth century, as the Roman administrative apparatus disintegrated across Western Europe, a man named Benedict of Nursia wrote a short document — seventy-three chapters, most of them less than a page — that would preserve literate civilization for the next five hundred years. The Rule of Saint Benedict did not attempt to reform the collapsing empire. It did not petition the Ostrogothic kings for better governance. It did not propose a more efficient method of imperial administration. It created, instead, a space that operated according to a logic entirely different from the one that governed the world outside its walls.
The monastery was not a dam in the river. It was dry land.
Inside the monastery, the metric that governed the external world — power, wealth, military capability, administrative efficiency — did not apply. The hours were structured not by the demands of production but by the demands of prayer. The Liturgy of the Hours divided the day into eight intervals of worship, beginning with Vigils at two in the morning and ending with Compline at nightfall. Work occupied the spaces between prayer, not the reverse. The hierarchy of values was explicit: God first, community second, the individual's productive output a distant third. The Rule specified that "idleness is the enemy of the soul," but the work it prescribed was not optimized work. It was work conducted within a framework that subordinated efficiency to other purposes — purposes that the collapsing empire could not generate and that technique, had the word existed, could not evaluate.
This is what Jacques Ellul meant by a counter-technical institution, though he did not use the term in this precise form. Ellul, who was himself a committed Christian and a lay theologian of considerable depth, recognized in the monastic tradition something that secular analysis persistently misreads: not a retreat from the world but a structural alternative to the world's dominant logic. The monastery did not reject technique because it feared change. It operated according to a different set of imperatives — imperatives that technique could not absorb because they were constitutively resistant to optimization.
Prayer cannot be optimized. It can be performed faster or slower, but the speed does not improve the prayer, because the prayer's purpose is not output but presence — the cultivation of a relationship with something that exists outside technique's jurisdiction. Community cannot be optimized, in the Benedictine sense, because the Benedictine community's purpose is not the efficient coordination of labor but the mutual formation of souls, a process that is slow, wasteful, frequently painful, and irreducible to metrics.
The monastery preserved what the empire could not: not just manuscripts and literacy, but the practice of sustained attention, of depth, of engagement with difficulty for its own sake. When Charlemagne sought to rebuild European intellectual life in the ninth century, the materials he needed — the books, the trained minds, the institutional memory of how to read, write, and think with sustained concentration — had been preserved not by the empire's surviving administrative structures but by the communities that had operated outside the empire's logic for three centuries.
Ellul would not have been surprised. Technique preserves what technique can measure. What technique cannot measure — depth, meaning, the capacity for sustained attention — is preserved only by institutions that exist outside technique's jurisdiction. When such institutions do not exist, what they would have preserved is lost. Not because anyone decided to destroy it. Because the system that replaced it could not see it.
The question for the present moment is whether counter-technical institutions can be built for the age of artificial intelligence — institutions that create spaces where technique's logic does not penetrate, where efficiency is not the dominant value, where the question "Is this good?" takes precedence over the question "Is this fast?"
The question is harder than it appears, because the conditions that allowed the Benedictine monastery to succeed — geographic isolation, a shared religious framework that provided an alternative source of meaning, a technological environment in which the dominant system's reach was physically limited by the speed of a horse and the range of a sword — do not obtain in the twenty-first century. Technique's reach is now global, instantaneous, and mediated by devices that most people carry in their pockets. There is no geographic isolation sufficient to insulate a community from technique's competitive pressure. There is no shared framework of meaning that commands the cultural authority necessary to override technique's logic. The conditions for the Benedictine solution have been eliminated by the very system the solution would need to resist.
Segal's dams — AI Practice frameworks, structured pauses, protected mentoring time, mandatory offline periods — are not counter-technical institutions. They are modifications of technical institutions, implemented within technique's logic, evaluated by technique's metrics. The organization that implements AI Practice measures the results: Did productivity decline? Did employee satisfaction increase? Did retention improve? These are technique's questions, asked of a practice that was supposed to create space outside technique's jurisdiction. The practice is evaluated by the very logic it was designed to resist. If the evaluation is positive — if the structured pauses improve measurable outcomes — the practice is retained. If the evaluation is negative, the practice is eliminated. In either case, technique remains the arbiter.
A genuine counter-technical institution would not be subject to technique's evaluation, because it would operate according to criteria that technique cannot capture. It would value depth not because depth improves quarterly results but because depth is good. It would preserve friction not because friction enhances learning outcomes but because the experience of struggle is constitutive of meaning. It would protect time not because protected time increases productivity but because time that is not optimized is the only time in which non-technical values can grow.
The historical precedents are few but instructive. The monastic tradition, as discussed, preserved literacy and sustained attention through the Dark Ages by operating outside the dominant system's logic. The medieval university, which emerged from monastic schools in the twelfth century, created a space where the pursuit of knowledge was insulated, partially and imperfectly, from the commercial and political pressures that governed the rest of society. The guild system, before its destruction by industrial technique, created spaces where craft knowledge was transmitted through relationships — master to apprentice — rather than through systematic instruction, and where the quality of work was evaluated not by output metrics but by the judgment of peers who understood what quality meant in the specific context of the specific craft.
Each of these institutions shared three characteristics that distinguish them from Segal's dams.
First, they operated according to an explicit alternative hierarchy of values. The monastery valued prayer above production. The university valued truth above utility. The guild valued craft above efficiency. In each case, the alternative hierarchy was not implicit or aspirational. It was structural — embedded in the institution's rules, its daily practices, its criteria for membership and advancement. A monk who prioritized production over prayer was not merely making a bad choice. He was violating the institution's constitutive logic. The violation was legible, identifiable, and correctable precisely because the alternative hierarchy was explicit.
Second, they were partially insulated from technique's competitive pressure. The monastery's insulation was geographic and spiritual — the surrounding society granted it a protected status that exempted it from normal economic competition. The university's insulation was legal and cultural — the charter, the tradition of academic freedom, the cultural conviction that the pursuit of knowledge deserved protection from market forces. The guild's insulation was economic and social — the guild's monopoly on certain trades created a space in which quality could be prioritized over quantity without immediate competitive penalty.
Third, they transmitted their values not through instruction but through formation — the slow, friction-rich process of shaping a person's character, habits, and judgment through sustained immersion in a community that embodies the values it teaches. The monastic novitiate lasted years. The university curriculum was not a sequence of information transfers but a prolonged engagement with difficult texts and demanding interlocutors. The guild apprenticeship was seven years or more of working alongside a master, absorbing through proximity and practice the kind of knowledge that cannot be transmitted through documentation or instruction.
Each of these characteristics is precisely what technique eliminates. Technique eliminates alternative hierarchies of values by making efficiency the only criterion that determines survival. Technique eliminates insulation from competitive pressure by creating a global market in which every actor is subject to the same logic. Technique eliminates formation by replacing it with instruction — faster, more efficient, measurable — and by compressing the time available for the slow, wasteful process of character development.
The question, then, is not whether counter-technical institutions are desirable. They manifestly are. The question is whether they can be built under conditions that technique has specifically evolved to prevent — conditions of global competition, instantaneous communication, and a cultural consensus that efficiency is the supreme value.
Ellul's answer, arrived at through decades of analysis and a theological conviction that human freedom is not ultimately dependent on human institutions, was cautiously affirmative. He believed that counter-technical spaces could be created, but only by communities that possessed a source of value external to technique — a commitment to something that technique could not produce, could not optimize, and could not absorb. For Ellul, that source was the Christian faith. For readers who do not share that commitment, the structural insight remains: resistance to technique requires a standpoint that technique has not already colonized. The standpoint need not be religious. But it must be real — grounded in a genuine commitment to values that technique cannot evaluate, sustained by a community that embodies those values in its daily practices, and insulated, to whatever degree is achievable, from the competitive pressures that enforce technique's logic.
Whether such a standpoint exists in the secular world of 2026 — and whether it can be institutionalized strongly enough to survive technique's structural pressure — is a question Ellul leaves open. It is the question that every builder, every parent, every teacher, every leader who has felt the vertigo of the present moment must eventually confront. Not "How do I use AI wisely?" but "Is there any space left in which wisdom, as opposed to efficiency, is the governing logic?" And if that space does not yet exist — "Can I help build it?"
The building would be the hardest thing any of us has ever done. Harder than writing code. Harder than shipping products. Harder than the twenty-fold productivity gains that the AI moment has made possible. Because it requires building against the current — not redirecting the river, but creating ground that the river has not yet reached, and defending that ground against a force that has been eroding every previous defense for five hundred years.
Ellul did not promise that the defense would succeed. He promised that the attempt was not futile. And the difference between futile and uncertain is the space in which human action retains its meaning.
---
Jacques Ellul spent fifty years describing a system he believed to be autonomous, self-augmenting, and structurally resistant to human redirection. He catalogued technique's colonization of every domain of human activity — production, governance, education, art, religion, the inner life of the individual. He demonstrated that individual resistance, however admirable, is structurally insufficient against a force that operates at the level of institutions and civilizations. He showed that the metrics by which technique evaluates everything it touches are blind to the values — depth, meaning, the satisfaction of earned understanding — that make human life worth living. He argued that even the institutions designed to resist technique are subject to technique's logic, evaluated by technique's criteria, and absorbed into technique's system.
And then he wrote. He wrote for decades. He published forty-eight books. He taught at the University of Bordeaux for nearly his entire career. He participated in the French Resistance during the Second World War. He was recognized by Yad Vashem for sheltering Jewish refugees. He raised a family. He tended a garden. He lived, by any reasonable assessment, a life of extraordinary moral seriousness, intellectual depth, and sustained engagement with the world.
This requires explanation. If technique's triumph is as complete as his analysis suggests — if the system absorbs every resistance, colonizes every alternative, redefines every value — then why did Ellul not despair? Why did he continue to write, to teach, to act, in a world that his own analysis described as structurally impervious to the effects of writing, teaching, and acting?
The answer is not simple, and it does not resolve neatly into optimism. Ellul was not optimistic. He was, by his own description, a person who saw the trajectory clearly and refused to be consoled by false hope. But he also refused despair, and the refusal was not emotional but theological: it rested on a conviction about the nature of reality that technique's analysis could not reach, because that analysis operates only within the domain of what can be measured, and the conviction concerned something that measurement cannot touch.
Ellul's hope was structured around a single claim: that technique is not the whole of reality. Technique is autonomous within its domain. Its domain is vast — encompassing nearly everything that modern civilization values, produces, and evaluates. But the word "nearly" is doing the most important work in that sentence. There is a remainder. Something that technique cannot produce, cannot optimize, cannot absorb, and cannot eliminate. Not because technique is weak. Because the remainder is constitutively outside technique's jurisdiction.
Ellul located this remainder in the sacred — in the dimension of human experience that exists for its own sake, that serves no function, that cannot be made efficient because efficiency is not a category that applies to it. Prayer, in Ellul's framework, is not a technique for achieving peace of mind. If it were, it could be optimized, and it would be absorbed. Prayer is a form of presence — an act of attending to something that is not useful, not productive, not efficient, but real. The sacred is what remains when technique has colonized everything that can be colonized.
For secular readers — and this book is addressed to all readers, not only those who share Ellul's faith — the structural insight survives the theological framing. The insight is this: if technique colonizes every domain by applying the criterion of efficiency, then the only domains that resist colonization are those to which the criterion of efficiency does not apply. The only values that technique cannot absorb are values that cannot be made efficient. The only experiences that technique cannot optimize are experiences whose purpose is not output but presence.
Segal reaches for this remainder in The Orange Pill, though he does not use Ellul's vocabulary. His candle metaphor — consciousness as the rarest thing in the known universe, the thing that wonders, that asks why, that cares — points toward a dimension of human experience that technique cannot produce. AI can generate answers. It cannot generate the question that arises from having stakes in the world — from being a creature that dies, that must choose how to spend finite time, that loves particular other creatures, that is capable of loneliness. The question is not a prompt. A prompt expects a particular kind of response. A genuine question opens a space that did not previously exist. It arises not from calculation but from care — from the specific, irreducible, non-optimizable experience of being a consciousness that finds itself in a world it did not make and cannot control.
This is the remainder. Not the capacity to produce answers — AI does that, increasingly well. Not the capacity to solve problems — AI does that too, across a growing range of domains. The remainder is the capacity to originate — to ask the question that no optimization process would generate, because the question arises not from efficiency but from the experience of being alive, finite, and responsible.
Technique cannot produce meaning. It can produce efficient processes, smooth surfaces, optimized outputs. It cannot produce the thing that makes a human life feel worth living — the sense that one's efforts matter, that one's relationships are real, that one's struggles have produced something that the struggle itself made valuable. Meaning arises from engagement with resistance, from the friction between the person and the world. Technique eliminates friction. Technique therefore eliminates the conditions in which meaning spontaneously grows.
But — and this is Ellul's crucial qualification — technique does not eliminate the capacity for meaning. It eliminates the conditions. The conditions can, in principle, be rebuilt. Not by technique. Not by optimization. Not by the system that destroyed them. By human beings who recognize what has been lost and who commit, against the structural pressure of a system that rewards only efficiency, to creating spaces in which the lost conditions can be restored.
The creation of such spaces is not guaranteed to succeed. Ellul's analysis of technique's structural power makes clear that the odds are not favorable. The system is self-augmenting. The pressure is continuous. The competitive environment penalizes any institution that prioritizes values other than efficiency. The individual's capacity for sustained resistance is finite. The system's capacity for sustained pressure is not.
And yet. The monasteries survived. The universities survived. The labor movement created the eight-hour day and the weekend in the face of industrial technique's relentless demand for continuous production. Human beings have, at various moments in history, built institutions that operated according to non-technical values and that survived technique's structural pressure long enough to preserve what technique would otherwise have destroyed.
These institutions did not redirect the river. They created dry land. And the dry land preserved — sometimes for centuries — the seeds that the flood would have washed away.
Ellul's hope is not that technique will be defeated. It is that technique is not everything. That inside the system — or, more precisely, at its limits — there remains a dimension of human experience that the system cannot reach. The consciousness that asks why. The community that forms around a shared commitment to something beyond efficiency. The individual who refuses the smooth, not because refusal is strategically effective, but because the refusal is true — because the rough, the difficult, the slow, the earned, the real, are worth preserving for their own sake, regardless of whether the preservation changes anything at the system level.
Whether this hope is sufficient is a question that Ellul leaves to the reader. His analysis provides no guarantee. The system is powerful. The hope is fragile. The remainder — the capacity to originate, to care, to ask the question that technique cannot generate — is real but small, a candle in an infinite darkness, as Segal's metaphor has it. Technique's wind has been blowing for five centuries, and it has extinguished many candles.
But the candle that has not been extinguished is still burning. And the question — not the answer, the question — of what to do with that flame is the most important question that technique cannot ask, because technique does not care about flames. It cares about lumens.
Lumens can be measured. A flame cannot. The flame is the remainder. And the remainder, fragile as it is, is what makes the difference between a civilization that optimizes and a civilization that lives.
Ellul's work does not end with a program. It ends with an observation: the system is real, the system is powerful, and the system is not everything. What lies beyond the system — what technique cannot see, cannot measure, cannot produce — is small. It may be enough. It may not.
The honest answer is: no one knows.
But the asking of the question — the refusal to let technique answer on our behalf — is itself the remainder in action. It is consciousness, doing the one thing that no system can do for it.
Wondering whether it will be enough.
---
The logic I could not argue against was the one I was already obeying.
I did not encounter Jacques Ellul in a classroom or on a reading list. I encountered him the way you encounter a mirror placed at the wrong angle — the one that shows you the side of your face you have trained yourself not to see. I was deep into the collaboration with Claude on The Orange Pill, exhilarated by the twenty-fold productivity multiplier, by the engineers in Trivandrum reaching across disciplinary boundaries they had never crossed, by the thirty-day sprint that produced Napster Station out of nothing but conversation and conviction. And then I read his central claim — that technique develops according to its own internal logic, independent of human values — and felt the specific discomfort of recognizing my own trajectory in someone else's diagnosis.
I had not chosen efficiency as my highest value. I had simply operated inside systems that rewarded it, for so long that the reward and the value had become indistinguishable. The boardroom arithmetic I described in The Orange Pill — five people with AI doing the work of a hundred — was not imposed on me by an investor or a board member. It was a calculation I had already run in my own head, at three in the morning, before anyone asked. The system did not need to coerce me. I had internalized its logic so completely that I coerced myself.
That is Ellul's most disturbing insight, and it is the one I cannot put down. Not that the system is powerful — I knew that. Not that individual resistance is insufficient — I suspected it. But that the self doing the resisting has been shaped by the system it is resisting, and that the tools of resistance — discipline, self-knowledge, the willingness to question — are themselves products of a culture that technique has already colonized.
I still believe in dams. I still believe that the builder's ethic — care, judgment, the refusal to ship something unworthy — is real and valuable and necessary. But I now understand, in a way I did not before Ellul, that the dams are made of the river's own material. The ethics I bring to the work were formed inside the same system of incentives and metrics that I am trying to redirect. The question "What should we build?" — which I called the most important question in the age of AI — is asked by a self that technique has spent decades training to answer in technique's terms: build what is efficient, what scales, what the market rewards.
Ellul does not tell me to stop building. He tells me something harder: that building is not enough. That the dams are necessary and insufficient. That somewhere, somehow, spaces must be created where the logic of efficiency does not govern — spaces where depth is valued because it is good, not because it improves quarterly results; where friction is preserved because meaning grows in it, not because it enhances measurable learning outcomes; where the question "Is this efficient?" is subordinated to a question that no AI will ever originate and no metric will ever capture.
I do not know how to build those spaces. I know how to build products, teams, companies. I do not know how to build the thing Ellul is describing — the counter-technical institution, the dry land in the flood, the place where the candle is sheltered not by one person's hand but by walls that were built to protect it.
But I know the candle is real. I have felt it — in the question my son asked at dinner, in the twelve-year-old who wondered what she was for, in the moment on that Princeton campus when three fishbowls cracked against each other and something genuine passed between us. That something was not efficient. It was not optimizable. It was not the product of any system. It was the remainder — the thing that technique cannot produce and therefore cannot replace.
Ellul's gift is not comfort. It is clarity. The system is real. The system is powerful. And the system is not everything.
The rest is up to us.
— Edo Segal
Jacques Ellul argued that modern civilization is governed not by its technologies but by a deeper logic — technique — the relentless pursuit of the one most efficient method in every domain of human life. This logic does not ask whether the most efficient method is the most meaningful, the most humane, or the most true. It asks only whether it is the most efficient. And once that question is answered, the answer becomes compulsory.
This book applies Ellul's framework to the AI revolution with unsettling precision. When Claude Code can build in hours what teams once built in months, the builder who refuses is not principled — she is, within the system's logic, irrational. The adoption was never a choice. It was a structural inevitability wearing the mask of freedom.
From the self-augmenting spiral of technique improving technique, to the elimination of every alternative the system deems inefficient, to the seduction of surfaces so smooth they conceal their own hollowness, Ellul's diagnosis reaches the one place most AI discourse refuses to look: the builder's own complicity in the logic she imagines herself resisting.
— Jacques Ellul

A reading-companion catalog of the 16 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Jacques Ellul — On AI uses as stepping stones for thinking through the AI revolution.