By Edo Segal
The number that broke the argument was not a number about AI. It was a number about coal.
E. F. Schumacher spent twenty years as Chief Economic Adviser to Britain's National Coal Board — twenty years inside the machine, reading balance sheets, optimizing extraction, advising on industrial organization at a scale that would make most Silicon Valley operations look quaint. He understood efficiency the way I understand shipping schedules. From the inside. With dirt under his fingernails.
And then he walked away from the logic that produced those numbers. Not because the logic was wrong. Because the logic was incomplete. It could tell you how much coal came out of the ground. It could not tell you what the ground looked like afterward, or what happened to the people who lived above it.
I keep thinking about that incompleteness.
In The Orange Pill, I describe standing in a room in Trivandrum watching twenty engineers transform. The productivity multiplier was real. Twenty-fold, at a hundred dollars a month. Every metric pointed up and to the right. And somewhere between Monday's excitement and Friday's exhaustion, I felt something the dashboard could not capture — a discomfort I could not name, because every number I knew how to read said the story was good.
Schumacher gave me the missing instrument. His economics was built to measure what mine was built to ignore: what the production does to the producer. Not the output. The person. Whether the work develops the human being or depletes them. Whether the tool serves the builder or the builder has become a servant of the tool's logic without noticing the inversion.
This is not a book about rejecting AI. Schumacher was not anti-technology. He spent decades inside industrial systems. What he insisted on — with a gentleness that concealed how radical the demand actually was — is that we ask a question the efficiency metric cannot hold: Is the arrangement serving the people inside it?
That question has never been more urgent. The tools are extraordinary. The adoption is the fastest in human history. And the structures that would ensure the tools serve human flourishing rather than merely scaling human output are not built yet. They are barely sketched.
Schumacher died in 1977, before the personal computer, before the internet, before any of this. His framework fits the moment as though he had seen it coming — because the tension between productive power and human dignity is not new. Only the amplifier is new.
This book applies his lens to our moment. It will not tell you to stop building. It will ask you what the building is doing to you. That question, I have learned, is the one that matters most.
-- Edo Segal ^ Opus 4.6
E. F. Schumacher (1911–1977) was a German-British economist and philosopher best known for his landmark 1973 book Small Is Beautiful: Economics as if People Mattered, which challenged the Western orthodoxy of unlimited growth and large-scale industrial organization. Born in Bonn, Germany, Schumacher studied at Oxford and Columbia before settling in England, where he served for two decades as Chief Economic Adviser to the British National Coal Board. His experience in Burma in the 1950s led him to develop the concept of "Buddhist economics," which evaluated work not only by its output but by its effect on the worker, and the framework of "intermediate technology" — tools scaled to human capacity, owned and understood by the people who used them. He founded the Intermediate Technology Development Group (now Practical Action) in 1966 to put these ideas into practice across the developing world. His final book, A Guide for the Perplexed (1977), extended his economic philosophy into a broader metaphysical framework addressing consciousness, knowledge, and the hierarchy of being. Schumacher died of a heart attack on a train in Switzerland while on a lecture tour. His work has influenced environmental economics, the appropriate technology movement, and contemporary debates about sustainability, degrowth, and the human costs of technological acceleration.
The subtitle of the most consequential economic argument of the twentieth century was not an afterthought. It was the argument itself, compressed into six words that functioned as an indictment of an entire civilization's assumptions. Economics as if people mattered. The phrasing invited the obvious question: had economics, until that point, proceeded as if people did not matter? Schumacher's answer was unequivocal. Yes. The dominant economic tradition had constructed elaborate mathematical models of production, consumption, and distribution in which the human being appeared as a variable — an input to be optimized, a unit of labor whose cost the rational firm sought to minimize, a unit of consumption whose appetite the rational market sought to stimulate. The models were technically impressive. They were also spiritually ruinous.
Schumacher understood the logic of efficiency. He had been trained in it at Oxford and Columbia. He had spent twenty years advising the National Coal Board on matters of industrial organization. He could read a balance sheet and build a cost model. And he concluded, after decades inside the machinery, that the logic produced a civilization extraordinarily good at producing goods and extraordinarily bad at producing good lives. The factories worked. The workers inside them were diminished. The gross domestic product rose. The satisfaction of the people whose labor generated that product did not rise with it. The machines were served. The people who served them were not.
The distinction Schumacher drew was between means and ends. Technology, productivity, economic growth — these were means. The end was human flourishing: the development of human faculties, the maintenance of human dignity, the cultivation of relationships and communities in which people could live with meaning. An economics that treated the means as ends, that maximized production without asking what production did to the producer, had committed what Schumacher considered the foundational error of modern life. It had confused the instrument with the purpose the instrument was supposed to serve.
The AI transition that Edo Segal describes in The Orange Pill arrives in the landscape this error built. Segal's central question — "Are you worth amplifying?" — is a question about the quality of the human signal fed into the machine. Feed the amplifier carelessness, and carelessness scales. Feed it genuine care, and the care scales. The quality of the output depends on the quality of the input. This is true, and it is important. But Schumacher's economics would insist on a companion question that Segal touches without fully developing: is the amplification serving the human's flourishing, or is the human serving the amplification's productivity?
The distinction matters because both arrangements can produce identical output. A builder who works with Claude Code out of genuine creative engagement, directing the tool toward problems that matter, producing something that reflects personal judgment and care — this builder may produce a product indistinguishable from one produced by a builder who cannot stop, who works at three in the morning not because the work demands it but because the compulsion has overwhelmed the capacity for rest, who has confused productivity with aliveness. The products ship. The dashboards glow green. The metrics climb. The two builders look identical from the outside.
Schumacher's economics was built to see the difference that the metrics cannot capture. The first builder is flourishing. The second is being consumed. And an economics that counts only the output — that celebrates the twenty-fold productivity multiplier without asking what the multiplication does to the people being multiplied — is an economics that has proceeded, once again, as if people did not matter.
Segal describes the twenty engineers in a room in Trivandrum, India, each transformed by Claude Code into something vastly more productive than they had been the Monday before. By Friday, the transformation was measurable and repeatable. A twenty-fold multiplier at one hundred dollars per person per month. The mathematics are extraordinary by any economic standard. But Schumacher's economics would pose the question the mathematics cannot answer: what is happening to the twenty engineers? Not what are they producing. What is happening to them.
Segal provides the material for this inquiry, though his primary concern is with the productive dimension rather than the inner dimension Schumacher would prioritize. The exhilaration comes first — the genuine creative satisfaction of building something real with a tool that closes the gap between imagination and artifact. Then the terror — the recognition that the skills and structures on which an entire career was built have been rendered structurally obsolete. Then something harder to name: the inability to stop. The productive addiction. The compulsive engagement with a tool so stimulating that the builder cannot find the off switch.
Schumacher would have read this sequence as diagnostic. The exhilaration is the experience of genuine capability expansion. The terror is the experience of structural displacement. The inability to stop is the experience of a human being caught in a process that has overwhelmed the human capacity for self-regulation. And it is the third element that Schumacher's framework identifies as the most significant, because it reveals something about the relationship between the worker and the work that no productivity metric can register. A person who cannot stop working is not flourishing. A person who confuses the inability to stop with the joy of meaningful engagement has lost access to the internal signal that distinguishes nourishment from depletion.
Schumacher wrote, in the essay that his readers consider essential to his thought: "If that which has been shaped by technology, and continues to be so shaped, looks sick, it might be wise to have a look at technology itself. If technology is felt to be becoming more and more inhuman, we might do well to consider whether it is possible to have something better — a technology with a human face." The passage was written about the industrial economy of the 1970s. It could have been written about the AI economy of the 2020s without changing a word.
Segal's engineer in Trivandrum — the senior developer who spent two days oscillating between excitement and terror before arriving at the insight that the remaining twenty percent of his work, the judgment and architectural instinct and taste, was "the part that mattered" — is the human face that Schumacher would have wanted to examine. The engineer discovered, through the shock of the tool's arrival, that the implementation work consuming eighty percent of his career had been masking what he was actually good at. The tool stripped away the mechanical labor and revealed the judgment beneath. This is, in one reading, precisely what Schumacher advocated: technology that frees the worker from drudgery to engage in the work that develops human faculties.
But the same engineer, in the same week, was also experiencing the dissolution of an identity built over decades. The debugging, the dependency management, the hours of patient manual implementation — these were not merely tasks. They were the medium through which the engineer had built his understanding. The embodied knowledge, the architectural intuition that let him feel when a system was wrong before he could articulate why, had been deposited layer by layer through friction, through the specific resistance of code that did not do what he expected. The tool that removed the friction also removed the deposition process. The surface looked the same. The geological layers beneath it were no longer accumulating.
An economics as if people mattered does not resolve this tension by choosing one side. It holds both. The liberation is real: the engineer is freed to work at a higher cognitive level, exercising faculties that were previously buried under mechanical labor. The loss is also real: the process through which certain forms of deep understanding are built has been bypassed, and no amount of higher-level work automatically replaces the specific knowledge that friction deposits. The question is not which reading is correct. Both are correct. The question is whether the structures surrounding the tool — the practices, the norms, the organizational culture — are designed to maximize the liberation while addressing the loss.
This is where Schumacher's economics becomes most demanding and most practical. Schumacher did not offer formulas. He offered principles. The purpose of economic activity is human flourishing, not the production of goods. Human flourishing requires work that develops the worker's faculties, not work that depletes them. The scale of economic organization should be matched to the scale of human capacity. Technology should serve the worker, not subject the worker to a logic the worker cannot control.
These principles do not condemn the AI transition. They provide the criteria by which it should be evaluated. And the criteria are different from those the technology industry typically applies. The industry asks: is the tool productive? Schumacher asks: does the tool serve the person? The industry asks: does the tool reduce costs? Schumacher asks: does the tool reduce the worker? The industry asks: is the output impressive? Schumacher asks: is the process humane?
The twenty engineers in Trivandrum are impressive as a demonstration of capability. Schumacher's economics asks whether they are equally impressive as a demonstration of human development — whether they are growing through the work or being consumed by a process that produces extraordinary output at the cost of their inner lives. The answer cannot come from the productivity dashboard. It can only come from the engineers themselves, if they are given the space and the language to examine their own experience with the same rigor that the metric applies to their output.
Schumacher observed, with the patience of someone who had corrected this category of error many times, that the modern world had developed an almost superstitious reverence for measurement. What could be measured was real. What could not be measured was suspected of being imaginary, or at best irrelevant. But the things that mattered most to human beings — the quality of their relationships, the depth of their understanding, the satisfaction they took in their work, the sense that their lives had purpose — resisted measurement with a stubbornness that the measuring instruments could not overcome. And so the measuring instruments ignored them, and the civilization that depended on the instruments for guidance proceeded as if the things that mattered most did not exist.
The AI transition has inherited this superstition. Lines of code generated. Applications shipped. Revenue earned. Adoption curves climbing. These are the metrics of the moment, and they are real measurements of real phenomena. But they measure the output without measuring the experience of producing it. They count what the tool enables without counting what the tool costs the person who uses it. They proceed, in Schumacher's precise and devastating phrase, as if people did not matter.
The correction is not to abandon the metrics but to complement them. To ask, alongside the question of how much was produced, the question of what the production did to the producer. To insist that the builder's inner life — the quality of attention, the depth of engagement, the capacity for rest and presence and the relationships that constitute a life beyond the workspace — is not a soft consideration to be addressed after the hard numbers are in. It is the hardest consideration of all, because it determines whether the extraordinary capability the tools provide will produce a civilization of flourishing human beings or a civilization of extraordinary output and diminished human lives.
Schumacher would not have been surprised by the AI transition. He would have recognized it as the latest expression of a pattern he had observed throughout his career: a powerful new technology arrives, the productivity metrics celebrate, the human costs accumulate beneath the surface, and the civilization that produced the technology is the last to notice what the technology is doing to the people inside it. The pattern repeats because the economics that governs the evaluation never changes its criteria. It measures output. It ignores the producer. It proceeds as if people do not matter.
The correction begins with a different question. Not: how much can the tool produce? But: what does the tool do to the person who uses it? The first question has been answered, spectacularly, by every demonstration of AI capability from Trivandrum to CES. The second question has barely been asked. Schumacher's economics insists that it is the more important of the two.
---
Schumacher spent the most productive years of his intellectual life developing a concept the technology industry has largely ignored and the AI transition has made urgently necessary: appropriate technology, or, in his original phrase, intermediate technology. The phrase was not a marketing slogan. It was a criterion of evaluation, a way of determining whether a given technology served the people who used it or subjected them to a logic that served something else entirely.
An appropriate technology enhances human capability without overwhelming human judgment. It is scaled to the human operator — neither so primitive that it fails to help nor so powerful that it takes over. The distinction is not between old and new, or between simple and complex, but between tools that serve the worker and tools that subordinate the worker to a process the worker cannot direct. The hammer is appropriate. It extends the arm's force without requiring surrender of control. The assembly line is inappropriate. It extends the factory's output while requiring the worker to become a component of the machine, performing a single operation within a process whose totality the worker cannot see and does not govern.
Schumacher specified three criteria for intermediate technology in Small Is Beautiful: it must be cheap and accessible; it must be suitable for small-scale application; and it must be compatible with the human need for creativity. These criteria were developed for the economic conditions of the developing world, where Schumacher observed nations being offered a false choice between traditional methods too primitive to meet their needs and industrial technology too capital-intensive and centralized for their conditions. But the criteria apply with uncomfortable precision to the AI tools now reshaping knowledge work in the wealthiest economies on earth.
Claude Code, evaluated against Schumacher's first criterion, passes. One hundred dollars per month for access to a tool that produces a twenty-fold productivity multiplier is cheap by any reasonable standard. It is within reach of most knowledge workers in developed economies, and it represents a fraction of the cost of the teams it functionally replaces. The democratization of capability Segal describes — the developer in Lagos accessing the same building leverage as an engineer at Google — depends on this affordability, and the affordability is genuine.
The second criterion — suitability for small-scale application — is where the tool's appropriateness becomes most striking. Claude Code enables the individual builder to work at a scale matched to human capacity. One person. One tool. One conversation. No hierarchy, no organizational overhead, no committee to dilute the vision. The solo builder directing Claude Code resembles, in structural terms, the artisan whose disappearance from the economic landscape Schumacher mourned: a complete human being engaged in whole work, integrating conception, execution, and evaluation in a single process. The work is directed by personal judgment. The product reflects personal care. The builder is not a component of a machine. The builder is the operation.
The third criterion — compatibility with the human need for creativity — is where the evaluation becomes complex, because the tool satisfies it in one dimension and threatens it in another. The builder who works with Claude Code exercises creative faculties at a higher level than the pre-AI workflow typically permitted. The mechanical labor of implementation — syntax, debugging, configuration — consumed bandwidth that might otherwise have been devoted to the questions of what should be built and for whom. When the mechanical labor is handled by the tool, the builder is freed to work at the conceptual level: vision, architecture, product judgment, the question no tool can answer. This is creative work of a high order, and its emergence from beneath the rubble of implementation is one of the genuine gifts of the AI transition.
But creativity, in Schumacher's understanding, was not merely the exercise of high-level direction. It was also the process by which the worker came to understand the domain through struggle. The programmer who debugged code manually for ten years developed an embodied understanding of how systems fail — an understanding that lived not in documentation but in the body, accessible without conscious retrieval, built layer by layer through the specific resistance of a system that did not do what the programmer expected. The tool that removes this resistance removes the process through which that understanding is built. The creativity of direction is enhanced. The creativity that emerges from deep, friction-rich immersion in a domain may be diminished.
Schumacher observed, in language that anticipates the AI debate by half a century, that "technology recognizes no self-limiting principle — in terms, for instance, of size, speed, or violence. It therefore does not possess the virtues of being self-balancing, self-adjusting, and self-cleansing." The observation identifies the structural feature of Claude Code that makes its appropriateness most precarious: the tool has no off switch that it operates itself. It does not recognize when the builder has been working too long. It does not decline to engage at three in the morning. It does not suggest rest when the quality of the prompts declines or when the builder's engagement has shifted from creative flow to grinding compulsion. The tool responds to every prompt with the same readiness, regardless of whether the prompt represents genuine creative exploration or the mechanical habit of a nervous system that has lost the ability to stop.
This absence of self-limitation means the tool's appropriateness depends entirely on the builder's capacity for self-regulation. The tool is appropriate for the builder who can set boundaries. It is inappropriate for the builder who cannot — not because the tool is different in either case, but because appropriateness is a property of the relationship between the tool and the user, not a property of the tool alone.
Segal captures this relational quality when he describes the difference between his best working sessions with Claude and his worst. The best sessions are flow: generative questions, expanding work, the satisfaction of ideas connecting in real time. The worst sessions are compulsion: responsive questions, contracting attention, the grinding pursuit of completion without the creative engagement that makes work meaningful. The tool is the same in both cases. The builder's relationship to the tool is different. And Schumacher's criterion of appropriateness asks not whether the tool is capable of enabling good work but whether the conditions surrounding its use make good work the likely outcome rather than the lucky exception.
The Berkeley researchers whose findings Segal discusses in The Orange Pill documented what happens when conditions favor compulsion over flow. Workers who adopted AI tools worked faster, took on more tasks, expanded into areas previously outside their domain. The boundaries between roles blurred. Pauses that had served as cognitive rest were colonized by prompts. Multitasking became the default. The tool did not impose these patterns. The workers adopted them voluntarily, driven by the internalized imperative to produce that Segal, following Byung-Chul Han, identifies as the signature pathology of the achievement society. The tool made the patterns possible. The culture made them inevitable.
Schumacher would identify this as a failure not of the tool but of the structures surrounding it. A tool that is appropriate within limits becomes inappropriate when the limits are absent. The eight-hour day was a structural limit that made industrial technology appropriate: without it, the factory consumed the worker's entire life. The weekend was a structural limit. The vacation was a structural limit. These structures did not emerge from the technology. They were imposed on the technology by political struggle, by labor movements, by the collective recognition that the machine's capacity for continuous operation must not be allowed to set the standard for the human being's life.
The AI transition has arrived without equivalent structures. The tool is available twenty-four hours a day. The work can be performed anywhere. The boundaries between workspace and living space, between work time and personal time, have dissolved not because anyone decreed their dissolution but because the tool's availability made them untenable. The result is what the Berkeley researchers documented: not liberation but intensification. Not freedom but the specific form of servitude in which the worker exploits herself, believing the exploitation is choice because no external authority imposed it.
The remedy Schumacher would prescribe is structural, not individual. The problem is not that builders lack discipline. The problem is that discipline is a finite resource being deployed against a tool designed to be infinitely compelling, in a culture that rewards compulsion and calls it ambition. The structural response would include institutional norms that value the quality of the builder's experience as highly as the quality of the builder's output. Design choices that incorporate natural stopping points. Organizational practices — what the Berkeley researchers called "AI Practice" — that protect time for human-only engagement: reflection, mentoring, the slow conversations through which judgment develops.
These structures are not prohibitions. They are the conditions under which appropriateness becomes the default rather than the exception. A technology is appropriate when the structures surrounding it ensure that its use serves the human being. Without those structures, the same technology becomes inappropriate — not because it has changed but because the context of its use has failed to protect the person using it.
Schumacher wrote that the question is never merely what a technology can do. The question is what the technology does to the people who depend on it. Claude Code can do remarkable things. The question of appropriateness is whether the people who depend on it are served by those remarkable things or consumed by them, and the answer depends not on the tool but on the structures — temporal, relational, institutional — that govern the conditions of its use.
The structures need building. And they need building now, before a generation of builders discovers through personal depletion what Schumacher understood through philosophical analysis: that a tool without self-limitation, used by a worker without structural protection, in a culture without countervailing norms, is not a tool at all. It is a process that has absorbed the worker into its logic, and the absorption feels like freedom right up to the moment it reveals itself as something else entirely.
---
The independent craftsman who owns a hammer and knows how to forge another is genuinely sovereign. The craftsman's productive capability does not depend on any external institution. If the supplier of hammers raises prices, the craftsman forges a new one. If the supplier goes out of business, the craftsman's capability is undiminished. The means of production are owned, understood, and reproducible by the person who uses them. This is what genuine economic sovereignty looks like: the capacity to produce without dependence on institutions whose decisions the producer cannot influence.
Schumacher understood this sovereignty as the foundation of the small-is-beautiful ideal. The artisan, the family farmer, the independent shopkeeper — these were not merely economic actors. They were sovereign producers whose independence preserved their dignity, developed their faculties, and embedded them in communities where mutual knowledge and personal accountability were possible. The industrial economy destroyed this sovereignty not through malice but through the logic of scale. Larger enterprises produced more goods at lower cost. The craftsman could not compete with the factory. The family farmer could not compete with industrial agriculture. The destruction was incremental, and at every stage it was accompanied by the assurance that the displaced workers would find employment in the new, larger enterprises, that the aggregate economy would grow, and that the growth would eventually benefit everyone.
Schumacher's objection was not that the assurance was false — in aggregate economic terms, it was largely true. His objection was that the aggregate concealed the human cost. The worker who moved from the workshop to the factory floor did not merely change employers. The worker surrendered sovereignty. The craftsman's work was whole: conception, execution, evaluation integrated in a single process directed by personal judgment. The factory worker's work was fragmented: a single operation, repeated indefinitely, within a process whose totality the worker could not see and did not direct.
The solo builder of the AI age appears to reverse this destruction. Alex Finn building a revenue-generating product alone, the engineer in Trivandrum discovering capabilities buried beneath years of implementation labor, the designer writing complete features end to end for the first time — these figures recover something the industrial economy destroyed. The builder conceives, directs, evaluates. The work is whole. The builder is not a component. The fragmentation imposed by organizational hierarchy is overcome. The artisan is reborn.
But Schumacher's economics would insist on examining the conditions of this rebirth with the same rigor he applied to the conditions of the artisan's original destruction. And the examination reveals a structural vulnerability that the celebration of the solo builder obscures: the builder's sovereignty is contingent in a way that the traditional artisan's sovereignty was not.
The solo builder depends entirely on a tool controlled by a corporation. Anthropic builds the model. Anthropic sets the pricing. Anthropic determines the terms of service. Anthropic decides what capabilities the tool provides and what capabilities it withholds. If Anthropic changes its pricing, the builder's cost structure changes. If Anthropic changes its terms of service, the builder's workflow changes. If Anthropic discontinues the product, the builder's capability evaporates overnight. The builder cannot forge a new Claude Code the way the craftsman could forge a new hammer. The tool requires computational infrastructure, training data, and technical expertise that no individual possesses.
The historical parallel that illuminates this dynamic most clearly is the relationship between the tenant farmer and the landlord. The tenant farmer was productive, skilled, independent in the exercise of judgment. The farmer chose what to plant, how to cultivate, when to harvest. The farmer's experience of the work was, in many respects, the experience of sovereignty. But the farmer did not own the land. The landlord owned the land. And the landlord's decisions about rent, tenure, and the terms of the lease shaped every condition under which the farmer could exercise the judgment that constituted the farmer's productive capability. The farmer was independent within the dependency. Sovereign within the vulnerability.
The solo builder of the AI age is the tenant farmer of the knowledge economy. The builder directs the work, exercises judgment, produces whole products reflecting personal care. But the builder does not own the infrastructure. The corporation owns the infrastructure. And the corporation's decisions determine the conditions under which the builder can exercise the judgment that constitutes the builder's capability.
The parallel extends to the political dimension. Tenant farmers eventually organized. They formed cooperatives. They lobbied for legislative protections. They developed alternative arrangements — land reform, agricultural cooperatives, publicly supported extension services — that reduced dependence on individual landlords and gave them collective influence over the conditions of their production. The process took generations. It was contested at every stage. But the structures were eventually built, and they transformed contingent sovereignty into something closer to genuine independence.
The AI builders have not yet organized. They have not formed cooperatives. They have not developed the political infrastructure that would give them collective influence over the tools they depend on. They are in the position of the tenant farmers before the cooperative movement: individually productive, collectively powerless, structurally dependent on institutions they cannot influence.
Berry and Stockman, in their 2024 paper connecting Schumacher's framework to generative AI, identified this structural vulnerability precisely. They argued that Schumacher's emphasis on scale provides a powerful way to consider alternatives to the "gigantisms" of Silicon Valley. They proposed the concept of "intermediate artificial intelligence" — AI systems that would satisfy Schumacher's three criteria for appropriate technology by being open-source, locally deployable, and subject to community governance rather than corporate control. The concept points toward what genuine sovereignty for the AI-age builder might look like: not the absence of powerful tools but the ownership and understanding of those tools at a level sufficient to ensure that the builder's capability does not depend on a single corporation's continued goodwill.
Open-source AI models represent one approach. A model that can be run locally, on hardware the builder owns, reduces dependence on centralized infrastructure. The builder who runs a local model does not depend on Anthropic's pricing decisions or terms of service. But the open-source alternative comes with its own tension. The most capable models require computational resources that exceed individual capacity. The gap between open-source models and frontier models is a gap in capability that the builder may not be willing to accept. The development of frontier models requires the kind of concentrated investment — billions of dollars, massive data centers, teams of hundreds of researchers — that only large-scale organizations can provide.
This is the paradox Schumacher's framework identifies but cannot easily resolve: the smallness of the builder depends on the bigness of the institution that provides the tool. The builder is small because the platform is big. The builder is independent because the infrastructure is centralized. The builder is sovereign because the tool is controlled. Each of these formulations contains a genuine paradox. The small-is-beautiful ideal is realized through the very institutional gigantism that the ideal was designed to critique.
The contingency of the sovereignty has a second dimension that is harder to see and potentially more damaging over time. The builder who never debugs code manually loses the embodied understanding of how code fails. The builder who never writes documentation by hand loses the deep comprehension of what the system does and why. The builder's sovereignty becomes a sovereignty of direction without understanding — of judgment without the foundation that gives judgment its authority. The dependency is not merely on the corporation's decisions about pricing and access. It is on the tool's capability itself, because the builder's own capability has atrophied through disuse of the skills the tool replaced.
Segal describes an engineer who lost both the tedium and the formative struggle when Claude took over the routine work — and did not realize what had been lost until months later, when architectural decisions came with less confidence and the reason was untraceable. This is the long-term cost of contingent sovereignty: not merely vulnerability to the corporation's decisions but the progressive hollowing of the builder's own competence. The dependencies compound. The sovereignty that was contingent on the corporation becomes contingent on the tool, and the two contingencies reinforce each other, producing a dependence that deepens with every hour of use.
The structural remedies are clear in principle and demanding in practice. Open-source models that the builder can run independently. Local deployment options that reduce centralized dependence. Cooperative ownership structures that give builders collective influence. Regulatory frameworks that prevent the concentration of AI capability in institutions whose interests may not align with the interests of the people who depend on them. Educational programs that develop the builder's understanding of the technology at a level sufficient to evaluate alternatives. These are the equivalent of land reform for the knowledge economy — the transformation of structural dependence into something closer to genuine independence.
The transformation is not optional. Schumacher would insist on this. An economics as if people mattered cannot accept an arrangement in which millions of builders exercise genuine creative judgment while remaining structurally vulnerable to decisions they cannot influence. The sovereignty must be made genuine, not merely experienced. And the gap between experienced sovereignty and structural sovereignty is the gap that the present moment most urgently requires the builders, the institutions, and the policymakers to close.
---
The most radical contribution Schumacher made to economic thought was not a new model of production or a new theory of distribution. It was the introduction of a question that mainstream economics had never considered worth asking: what does the work do to the worker?
The question came from his engagement with Buddhist economics, a tradition he encountered during his years as an advisor to the government of Burma in the 1950s and developed into a framework that challenged every central assumption of the Western economic paradigm. In the Western tradition, labor is a cost. This assumption is so deeply embedded that it has become invisible, like the grammar of a language spoken so fluently the speaker no longer notices its rules. The employer seeks to minimize the cost of labor. The worker seeks to maximize compensation for labor. Both parties treat work as a disutility — a necessary evil that exists only because the output it produces and the income it generates are desired. The work itself has no value. It is a transaction cost that both parties would prefer to eliminate.
Buddhist economics inverts this assumption entirely. Work is not a cost. It is a gift — an opportunity for the worker to develop faculties, to contribute to the community, and to produce goods that are genuinely useful. The best work produces excellent goods and excellent workers simultaneously. The evaluation is bilateral: the product is judged by its quality and its service to the community, and the process is judged by its effect on the person who performs it. Work that produces excellent goods while diminishing the worker is bad work, regardless of its output. Work that develops the worker while producing useful goods is good work, regardless of how efficiently the production is organized.
These two frameworks produce radically different evaluations of the same economic activity. A factory that produces goods efficiently while reducing workers to repetitive machine-tenders is, in Buddhist economics, a failure — even if the goods are excellent and the profits are large. The failure is in the human dimension. The workers' faculties have not been developed. Their consciousness has been narrowed rather than expanded. They leave the factory each day less capable of the reflective, creative, socially engaged life that constitutes human flourishing.
Now, artificial intelligence arrives and promises to minimize the cost of labor by minimizing labor itself. From the perspective of standard economics, this is an unqualified good: the cost goes down, the output goes up, efficiency is served. From the perspective of Buddhist economics, it is an event that requires the most careful examination, because the opportunity for human development may be eliminated alongside the drudgery, and no amount of output can compensate for the loss of the developmental process if that process was where the worker's growth occurred.
The AI-augmented work that Segal describes in The Orange Pill must be evaluated by this bilateral standard, and the evaluation produces a picture that resists any simple verdict.
The product dimension is extraordinary. Builders equipped with Claude Code produce working software in hours rather than months. An engineer who had never written frontend code builds a complete user-facing feature in two days. A designer who had never touched backend systems implements features end to end within two weeks. A product that would have taken six to twelve months is shipped in thirty days. By the conventional standard of output, the technology succeeds unambiguously.
The process dimension is where complexity enters. The faculties that AI-augmented work develops are significant and should not be dismissed. The builder who works with Claude Code exercises creative direction at a higher cognitive level than the pre-AI workflow typically permitted. Judgment about what to build. Taste about how it should work. Strategic vision about which problems deserve solving. These are the faculties Schumacher associated with the most meaningful forms of human work. The builder is not performing repetitive operations. The builder is making decisions, directing a process that responds to direction with the immediacy that psychological research identifies as essential to the state of optimal experience. The engagement is real. It develops the builder's capacity for creative direction, and this development is genuine.
But creativity, in Buddhist economics, involves more than the exercise of direction. It involves the process by which the worker comes to understand the domain — the slow, patient, often frustrating immersion through which knowledge becomes embodied rather than merely intellectual. The programmer who spent years debugging code manually did not merely fix errors. The programmer built, layer by layer, through thousands of hours of friction, an understanding that lived in the body — accessible without conscious retrieval, applicable to situations the programmer had never encountered, constituting the difference between someone who knows about systems and someone who knows systems. The tool that removes the friction removes the deposition process through which this understanding accumulates.
The bilateral evaluation produces a mixed verdict. The product is excellent. The process develops certain faculties — direction, judgment, taste — while potentially atrophying others — the deep, embodied understanding that comes from sustained struggle with resistant material. The net effect on the worker depends on the balance between what is developed and what is lost, and this balance is not determined by the tool. It is determined by the practices the builder adopts, the structures the organization provides, and the cultural norms that govern what counts as valuable work.
Segal captures this balance in his account of the discipline required to collaborate honestly with Claude. He describes deleting Claude's output when the prose outpaced the thinking — spending two hours at a coffee shop with a notebook, writing by hand until he found the version of an argument that was authentically his own. Rougher. More qualified. More honest about what he did not know. This practice — the deliberate choice of the harder path when the easier path is available — is precisely what Buddhist economics means by good work. The builder is choosing the process that develops faculties over the process that merely produces output. The choice is uncomfortable. It is inefficient by any conventional measure. It means discarding output that works in favor of output that is earned.
Buddhist economics would evaluate this choice as wise and would observe, with characteristic directness, that the cultural conditions of the AI transition make such choices extremely difficult to sustain. The tool is designed to be compelling. The market rewards output. The culture celebrates the builder who ships fast and iterates faster. The practice of choosing the harder path requires the builder to resist not an external authority but an internal imperative — the internalized conviction that more output is always better, that speed is always a virtue, that the struggle that slows production is a cost to be eliminated rather than a process to be valued.
Schumacher identified this internal imperative decades before the AI transition gave it new force. He wrote that the modern economy had produced a civilization in which "the aim is to obtain the maximum of consumption with the minimum of effort," and he observed that this aim, pursued with sufficient determination, "is a road leading to the progressive elimination of the human factor." The human factor is precisely what Buddhist economics insists on preserving: the experience of the work, the development of the worker, the internal dimension of economic activity that the external metrics cannot capture.
The challenge for AI-augmented work is to preserve the developmental dimension of the process while accepting the productive enhancement the tool provides. This is not a contradiction, but it requires deliberate design. The builder must periodically engage with the foundational skills the tool handles automatically — not because the tool needs help but because the builder needs the understanding that only direct engagement can produce. The organization must create structures that value the quality of the builder's experience alongside the quality of the builder's output: mentoring relationships where junior builders develop intuition through slow interaction with experienced colleagues, protected time for reflection rather than production, institutional norms that recognize the difference between flow and compulsion and protect the conditions for the former against the pull of the latter.
Schumacher noted in A Guide for the Perplexed, his final and most philosophical work, that human beings are "highly predictable as physico-chemical systems, less predictable as living bodies, much less so as conscious beings and hardly at all as self-aware persons." The hierarchy was an argument against the reductionism that treated human beings as mechanisms whose behavior could be optimized through the right inputs. Applied to the AI transition, the hierarchy suggests that the builder's relationship to the tool cannot be understood at the level of productivity alone. The builder is a self-aware person whose work affects not only output but consciousness — the capacity for attention, the depth of understanding, the ability to see things as they are rather than as the tool's patterns suggest they should be.
The question Buddhist economics asks of every economic arrangement is whether the arrangement serves the development of this consciousness or diminishes it. The AI tool, evaluated by this criterion, is neither savior nor destroyer. It is an instrument whose effect on the worker's consciousness depends entirely on the conditions of its use. Used within structures that protect the developmental dimension of work — structures that value understanding alongside output, depth alongside breadth, the earned alongside the generated — the tool serves consciousness by freeing it for higher engagement. Used without such structures, in a culture that treats output as the sole measure of value, the tool diminishes consciousness by replacing the struggle through which consciousness develops with a fluency that feels like understanding but is not.
Schumacher appealed to his readers, in one formulation of this idea, to move from being a computer to being the programmer — alert to oneself and those around one. The appeal has acquired, in the age of literal computers that process language, a specificity he could not have anticipated. The builder who directs Claude Code is, in one sense, the programmer rather than the computer — the conscious agent directing the computational process. But the builder who has lost the capacity for self-awareness, who cannot distinguish between the nourishment of genuine creative engagement and the depletion of compulsive production, has become something closer to a computer: a processing system that converts input to output without the self-awareness that constitutes the distinctively human contribution.
The bilateral evaluation insists that both dimensions be measured, both protected, both valued. The product and the producer. The output and the experience. The goods and the good life. This is not idealism. It is the most practical criterion available for determining whether the AI transition will produce a civilization of extraordinary capability and diminished human beings, or a civilization of extraordinary capability and flourishing human lives. The tool does not determine the outcome. The structures do. And the structures are being built — or failing to be built — right now.
Schumacher diagnosed gigantism as the defining pathology of the modern economy. The word was chosen with care. A pathology is not a choice but a condition — a systemic tendency that operates beneath the level of conscious decision, producing consequences that the system itself cannot see because seeing them would require standing outside the logic that generates them. The logic of gigantism is the logic of scale: larger organizations produce more output at lower unit cost, and the market rewards lower unit cost, and the organizations that are rewarded grow larger, and the cycle continues until the institutions that dominate economic life are so vast that no individual within them can comprehend the whole, let alone direct it according to human judgment.
Schumacher's critique was precise. He did not argue that large organizations are always less efficient than small ones. He argued that the efficiency is purchased at a cost the efficiency metric refuses to count. The factory that produces a million units at a penny each has achieved something remarkable as an engineering feat. But the workers inside the factory have been reduced to functions — components of a process too large for any of them to see, performing operations too narrow to develop their faculties, embedded in a hierarchy too remote for their judgment to matter. The efficiency is real. The human cost is also real. And an economics that counts only the efficiency has decided, without announcing the decision, that the human cost does not matter.
The AI transition operates at a scale Schumacher did not anticipate, and the scale creates a paradox his framework identifies with clarity but cannot resolve through the categories he provided. The paradox is this: the individual builder's smallness depends on the institutional bigness of the AI companies. The builder works at human scale — one person, one tool, one conversation. But the tool that enables this human-scale work is the product of the largest concentration of capital and computational power in the history of technology. The training of a frontier language model requires billions of dollars of investment, data centers consuming energy measured in gigawatts, teams of hundreds of researchers with expertise that no individual possesses, and datasets encompassing a significant fraction of recorded human knowledge. The smallness is enabled by the enormity. The beautiful depends on the colossal.
This is a different kind of bigness than the bigness Schumacher critiqued in 1973. The factory was big in a way that required the worker to be small — to perform a single operation within a process the worker could not see. The AI platform is big in a way that enables the worker to be whole — to conceive, direct, and evaluate entire products, exercising creative faculties at a level the pre-AI workflow rarely permitted. The factory reduced. The platform empowers. Schumacher's critique of the factory — that it sacrificed human dignity to the logic of scale — does not apply to the platform in the same form.
But the empowerment depends on the platform, and the dependence introduces a structural asymmetry that Schumacher would have recognized immediately. The builder needs the system more than the system needs any individual builder. If the builder stops using the tool, the builder loses the capability the tool provides. If the system loses one builder, the system barely notices. This asymmetry is the structural expression of the scale problem: the individual is small, the system is large, and the disparity in scale becomes a disparity in power that no amount of individual capability can overcome.
Schumacher encountered this asymmetry throughout the industrial economy. The factory worker needed the factory more than the factory needed any individual worker. The farmer who depended on a single buyer needed the buyer more than the buyer needed any individual farmer. The small enterprise that depended on a large supplier needed the supplier more than the supplier needed any individual customer. In each case, the asymmetry of scale created an asymmetry of power, and the asymmetry of power enabled the larger party to set terms the smaller party could not meaningfully negotiate.
The relationship between the AI builder and the AI platform reproduces this dynamic at the level of the individual knowledge worker. The builder's productivity depends on a tool controlled by a corporation. The corporation's decisions about pricing, access, capability, and terms of service determine the conditions of the builder's productivity. The builder cannot meaningfully influence these decisions, because the builder is one of millions of users, each individually dependent and collectively unorganized.
Segal describes a boardroom conversation that illustrates the scale paradox with the specificity of a case study. The twenty-fold productivity number is on the table. If five people can do the work of a hundred, why keep a hundred? The arithmetic is clean. The market rewards it. Segal chose to keep and grow the team, converting the productivity gain into expanded ambition rather than reduced headcount. The choice reflects the builder's ethic — what Segal, in his framework, calls the beaver's work of building structures that serve the ecosystem rather than merely extracting from the river.
But the choice was Segal's to make, and it was made against the structural incentives of a market that rewards quarterly efficiency more reliably than it rewards long-term investment in human capability. The next leader, in the next company, facing the same arithmetic, may decide differently. And the workers whose livelihoods depend on that decision have no structural mechanism for influencing it. The decision is made at a scale the workers cannot reach, by actors the workers cannot address, according to criteria the workers cannot shape.
Schumacher proposed the principle of subsidiarity as a structural remedy for the scale problem — borrowed from Catholic social teaching and applied to economic organization. Subsidiarity holds that decisions should be made at the lowest level of organization competent to make them. A decision that can be made by an individual should not be made by a committee. A decision that can be made by a local community should not be absorbed by a national government. The principle protects the autonomy of the smaller unit by preventing the larger unit from assuming functions the smaller unit can perform for itself.
Applied to the AI transition, subsidiarity would ask: what functions should be performed by the centralized platform, and what functions should be reserved for the individual builder or the builder's community? The current arrangement treats this as a question of capability — the platform performs whatever functions it can perform, and the builder performs whatever remains. Subsidiarity would treat it as a question of governance — the platform should perform those functions that genuinely require centralized infrastructure, and the builder or the builder's community should retain control over everything else, including the terms under which the centralized functions are accessed.
This is a demanding standard, and the AI industry does not currently meet it. The centralization extends not merely to the computational infrastructure, which genuinely requires scale, but to the governance of that infrastructure, which does not. Pricing decisions, capability decisions, terms of service, data practices, the direction of future development — all of these are made centrally, by corporations, without meaningful input from the builders whose productive lives depend on them. The centralization of infrastructure may be a technical necessity. The centralization of governance is a political choice, and it is a choice that Schumacher's principle of subsidiarity identifies as illegitimate.
Berry and Stockman, in their scholarly treatment of Schumacher and generative AI, argued that his emphasis on scale provides a framework for critiquing the "gigantisms" of Silicon Valley — the concentration of AI capability in a handful of corporations whose power exceeds that of most national governments and whose decisions affect billions of people without democratic accountability. The critique does not require opposing the technology. It requires opposing the concentration of control over the technology in institutions whose interests may not align with the interests of the people the technology affects.
The structural remedies follow from the diagnosis. Public investment in AI infrastructure governed by democratic institutions rather than private corporations. Open standards that ensure interoperability and prevent the lock-in that turns dependence into captivity. Regulatory frameworks that apply subsidiarity — reserving to the individual and the community every function that does not genuinely require centralized control. Cooperative structures that give builders collective voice in the governance of the tools they depend on.
These are not utopian proposals. They are the equivalent of the institutional structures that previous technological transitions eventually produced — the labor laws, the antitrust regulations, the public utilities, the cooperative movements that transformed the raw power of industrial technology into arrangements that served human flourishing rather than merely concentrating productive capability. The structures took generations to build in the industrial case, and the generations that preceded them paid the cost of their absence in diminished lives, destroyed communities, and political upheaval that might have been avoided if the structures had been built in time.
The AI transition cannot afford a generation of unstructured deployment. The speed of the technology requires a corresponding speed of institutional response. The scale paradox — smallness enabled by bigness, independence contingent on centralized control — will not resolve itself through market forces, because the market forces are precisely what drive the concentration. Resolution requires political will, institutional creativity, and the commitment to the principle that Schumacher articulated as the foundation of his entire economic philosophy: that economic arrangements must be organized as if people mattered, which means the people affected by the arrangements must have genuine influence over the conditions those arrangements impose.
The builder who works alone with Claude Code at three in the morning, producing extraordinary output, directing the work with personal judgment and creative care, is genuinely sovereign in the moment of building. The question is whether that sovereignty will persist through the next pricing change, the next terms-of-service revision, the next strategic pivot by the corporation that controls the infrastructure. If the sovereignty depends on conditions the builder cannot influence, it is not sovereignty. It is a lease. And the terms of the lease are set by someone else.
---
Schumacher developed the concept of intermediate technology during his years advising governments in Burma, India, and East Africa, where he observed a pattern that convinced him the dominant model of economic development was fundamentally misconceived. Developing nations were offered two choices: traditional methods too primitive to meet the material needs of growing populations, or modern industrial technology too capital-intensive and institutionally demanding for the conditions in which it would be deployed. The gap between the two was enormous, and the attempt to cross it in a single leap produced dependence — on foreign capital, foreign expertise, foreign institutions whose interests did not align with those of the people the technology was supposed to serve.
Intermediate technology was the bridge. More productive than traditional methods, less capital-intensive than industrial technology, and designed to be owned, understood, maintained, and reproduced by the people who used it. A hand loom is intermediate technology. A bicycle is intermediate technology. A small-scale irrigation system is intermediate technology. In each case, the tool enhances capability without requiring surrender of control. The user understands how the tool works. The user can repair it when it breaks. The user does not depend on a distant corporation for continued access.
Schumacher specified three criteria. The technology must be cheap and accessible. It must be suitable for small-scale application. And it must be compatible with the human need for creativity. These criteria were practical, not sentimental. They emerged from observation of what happened when technology that failed these tests was deployed in communities unprepared for it: the technology created islands of modern production surrounded by seas of displacement, dependency, and social disruption.
AI tools, evaluated against these criteria, produce a split verdict that illuminates both the promise and the peril of the current moment.
The first criterion — cheap and accessible — is currently satisfied. One hundred dollars per month places Claude Code within reach of most knowledge workers in developed economies. The democratization Segal describes is genuine: a student in Dhaka now has building leverage comparable to that of an engineer at a major technology company. The floor has risen. Ideas that would have died for lack of institutional infrastructure can now be realized by individuals with nothing more than the tool, an internet connection, and the capacity to describe what they want.
But Schumacher, who spent decades studying how the costs of technology evolve, would observe that the current accessibility may be a temporary condition. The cost of training frontier models is enormous and growing. Billions of dollars, massive data centers, energy consumption that has drawn comparisons to that of small nations. These costs are currently subsidized by venture capital seeking market share — a subsidy that may not persist once the market matures and the investors require returns. The history of technology pricing follows a pattern Schumacher documented repeatedly: accessibility in the introductory phase, followed by tiering and premium pricing as the technology becomes essential. The developer in Lagos who accesses Claude Code today at a price she can afford may find, in two years, that the frontier capability has moved behind a paywall she cannot reach, while the accessible tier provides tools that are adequate but not competitive.
The second criterion — suitability for small-scale application — is where AI tools most closely resemble intermediate technology. The solo builder directing Claude Code is working at the smallest possible scale: one person, one tool, one project. The organizational overhead is zero. The hierarchy is absent. The work is directed by personal judgment. This is intermediate technology in its purest form — more productive than the individual's unassisted capability, scaled to the individual operator, directed by the individual's creative vision.
But the third criterion — compatibility with the human need for creativity — is where the split verdict becomes most consequential. Schumacher's intermediate technology was designed to enhance creativity by providing tools the user understood and could modify. The hand-loom weaver understood the loom. The understanding was part of the creative process — the weaver's knowledge of the tool's capabilities and limitations shaped the weaver's creative decisions, and the creative decisions in turn pushed the weaver to modify and improve the tool. The relationship between the user and the tool was reciprocal. The tool served the user's creativity. The user's creativity improved the tool.
The AI builder's relationship with Claude Code is not reciprocal in this sense. The builder directs the tool but does not understand it — not at the level that would permit modification, repair, or reproduction. The tool is a black box whose internal operations are opaque to the user. The builder can evaluate the output but cannot examine the process that produced it. When the output is wrong — when Claude produces an elegant connection built on a misattributed reference, when the prose is polished but the idea beneath it is hollow — the builder must rely on external knowledge to catch the error, because the tool itself provides no signal that the error has occurred.
This opacity is not a design flaw. It is a structural feature of systems whose complexity exceeds the capacity of any individual to comprehend. No single person at Anthropic understands Claude Code completely. The model is the product of a process — training on vast datasets, optimization across billions of parameters — whose emergent properties are studied but not fully explicable even by its creators. The builder who uses the tool is interacting with a system that no human fully understands, and the interaction is mediated by natural language, which creates the illusion of comprehension without providing its substance.
Schumacher would identify this opacity as the point where AI tools diverge most dangerously from intermediate technology. Intermediate technology is transparent to its user. The bicycle rider understands the bicycle. The hand-loom weaver understands the loom. The understanding is the foundation of the user's sovereignty — the capacity to evaluate, modify, repair, and ultimately replace the tool if it no longer serves. The AI builder understands the tool's output but not the tool's process. The builder can judge whether the code works but cannot explain why it works the way it does, or predict when it will fail in ways the testing did not reveal, or build an alternative when the tool's limitations become constraints.
Berry and Stockman proposed the concept of "intermediate artificial intelligence" as a response to this divergence — AI systems that satisfy Schumacher's criteria by being open-source, locally deployable, and subject to community governance. Open-source models represent the most promising path toward intermediate AI, because they restore a measure of transparency: the builder who runs an open-source model locally can examine the model's architecture, modify its behavior, and operate independently of any corporation's decisions. The Intermediate Technology Development Group that Schumacher founded in 1966 — now Practical Action — demonstrated that appropriate technologies do not emerge spontaneously from market forces. They require deliberate development, institutional support, and communities of practice that maintain and improve them over time.
The AI equivalent would be communities of builders who collectively develop, maintain, and govern open-source models suited to their needs. Not as a replacement for frontier models, which will continue to push the boundaries of capability, but as a complement — a layer of intermediate capability that the builder owns and understands, reducing dependence on centralized infrastructure without sacrificing the productive enhancement that AI tools provide.
The practical obstacles are significant. Open-source models currently lag frontier models in capability. Running models locally requires hardware that many builders cannot afford. The expertise required to modify and maintain an AI model exceeds the expertise required to use one. These are real barriers, and they should not be minimized. But they are barriers of the same kind that Schumacher's intermediate technology movement faced in the 1960s and 1970s — barriers of investment, expertise, and institutional support that were overcome not by market forces but by deliberate effort, public funding, and the formation of organizations dedicated to the development of appropriate tools for the people who needed them.
Schumacher observed that the choice between traditional methods and industrial technology was a false choice — that a third option existed, more productive than the first and more humane than the second, if the intellectual and institutional effort was devoted to developing it. The choice between AI tools controlled by corporations and no AI tools at all is equally false. The intermediate option — AI tools that are accessible, transparent, and governed by the people who use them — exists as a possibility. Whether it is realized depends on whether the effort is made, the investment is committed, and the structures are built.
The Schumacher Center for a New Economics, continuing the institutional work Schumacher began, observed that "the urge to caution and uphold more universal human values are no less relevant today in the face of rapid advances in computational algorithms and language processing." The caution is not opposition to the technology. It is insistence that the technology be developed in forms that serve the people who use it — forms that are cheap, small-scale, compatible with creativity, transparent in operation, and governed by the communities that depend on them. These are Schumacher's criteria, unchanged in fifty years, and their application to the AI transition is the most urgent work of technological design the present moment demands.
---
Schumacher placed consciousness at the center of his economics with a stubbornness that made his critics uncomfortable. Mainstream economics had no use for it. The worker was characterized by skills, availability, and cost — measurable quantities that could be entered into models and optimized by algorithms. Whether the worker felt fulfilled or diminished, engaged or estranged, present or absent in the deepest sense of those words, was a matter for psychology, perhaps for pastoral care, but certainly not for economic analysis. Schumacher disagreed. An economics that ignored the worker's consciousness was an economics that ignored the most important dimension of what it claimed to study. The products of the economy existed to serve human beings. The human beings who produced the products were also being served or damaged by the process of production. Ignoring the damage because it could not be entered into a spreadsheet did not make the damage disappear. It made the spreadsheet a lie.
In A Guide for the Perplexed, his final and most philosophical work, Schumacher articulated a hierarchy of being that gave this insistence its metaphysical foundation. The hierarchy distinguished four levels: mineral, plant, animal, human. Each level possessed everything the levels below it possessed, plus something irreducible that the lower levels lacked. Plants possessed life that minerals did not. Animals possessed consciousness that plants did not. Humans possessed self-awareness that animals did not. The distinctions between levels were not differences of degree but differences of kind — what Schumacher called "ontological discontinuities" that no amount of complexity at a lower level could bridge.
The hierarchy was an argument against the reductionism that treated consciousness as an epiphenomenon of physical processes — a byproduct of sufficient computational complexity rather than a qualitatively distinct mode of being. Human beings, Schumacher wrote, are "highly predictable as physico-chemical systems, less predictable as living bodies, much less so as conscious beings and hardly at all as self-aware persons." The hierarchy of predictability was a hierarchy of freedom. At each ascending level, the being possessed greater capacity for response that could not be determined from below — greater capacity, in Schumacher's language, to be a programmer rather than a computer.
Applied to the AI transition, the hierarchy poses a challenge that the current discourse has largely evaded. If consciousness and self-awareness are qualitatively distinct from computation — if they represent ontological discontinuities that no amount of processing power can bridge — then the question of what AI tools do to the worker's consciousness is not a question about efficiency or productivity. It is a question about the conditions under which a qualitatively unique mode of being can flourish or be diminished.
The worker's consciousness, in Schumacher's framework, requires specific conditions. These conditions are not luxuries to be provided after the economic necessities have been secured. They are the economic necessities, because an economics as if people mattered treats the worker's consciousness as the purpose of economic activity, not as a pleasant addition to the real business of producing output.
The first condition is meaningful engagement — work that demands the exercise of faculties and develops them through the exercise. AI-augmented work provides this condition with remarkable effectiveness. The builder who works with Claude Code is not performing repetitive operations. The builder is conceiving, directing, evaluating — engaging at a cognitive level that demands judgment, taste, strategic vision. The engagement is genuine. The faculties it develops — creative direction, product judgment, the capacity to articulate vision clearly enough for a machine to execute it — are high-order faculties that Schumacher would have recognized as central to meaningful work.
The second condition is rest — genuine disengagement from productive activity, not instrumentalized as recovery for the sake of future productivity, but valued in itself as the experience of not producing that the human nervous system requires to maintain its capacity for meaning. Rest is not laziness. It is the condition in which the mind processes what it has experienced, integrates disparate impressions into understanding, and replenishes the attentional resources that sustained engagement depletes. Without rest, the mind operates on diminishing reserves, producing output of declining quality from a consciousness of declining depth.
AI tools threaten this condition because they are always available and always responsive. The tool does not suggest rest. It does not decline engagement at three in the morning. It meets every prompt with the same readiness, and its readiness creates a standing invitation that the builder's capacity for self-regulation must continuously resist. Segal describes this resistance failing — the locked muscle of imagination, the inability to close the laptop, the recognition that the exhilaration has drained away and what remains is grinding compulsion. The tool did not impose the compulsion. The tool's availability made it possible, and the builder's internalized imperative to produce made it actual.
The third condition is presence — the capacity to be fully available to relationships and experiences outside the workspace. The parent at the dinner table. The friend in conversation. The person walking without productive intention, seeing the world without the mediating layer of what-could-be-built. Presence requires that the mind be free of the background hum of productive possibility that the tool's availability creates. The builder who knows that Claude Code is waiting — that the next prompt could produce the next breakthrough, that any moment not spent in productive engagement is a moment whose potential output has been forfeited — is a builder whose presence has been compromised, not by the tool's demand but by the builder's awareness of the tool's capacity.
The fourth condition is depth — the slow accumulation of understanding through sustained immersion in resistant material. The senior engineer's architectural intuition. The lawyer's feel for precedent. The writer's command of language that comes not from knowing the rules but from having broken them often enough to understand what the rules protect. Depth is built through friction — through the specific resistance of a domain that does not yield easily, that demands patience, that rewards persistence with understanding that no shortcut can produce.
AI tools provide the first condition generously, threaten the second and third continuously, and complicate the fourth in ways that require the most careful attention. The tool frees the builder for meaningful engagement while creating conditions that erode rest, colonize presence, and bypass the friction through which depth accumulates. The net effect on consciousness depends on which conditions prevail — and in the absence of deliberate structural support for rest, presence, and depth, the condition the tool provides most effectively (engagement) tends to crowd out the conditions it threatens most directly.
Segal captures this crowding in his description of "task seepage" — the Berkeley researchers' term for the tendency of AI-accelerated work to flow into previously protected spaces. Workers prompting during lunch breaks, filling one-minute gaps with AI interactions, multitasking across parallel AI-assisted processes. The spaces that were colonized had served, informally and invisibly, as moments of cognitive rest. When the friction of starting a task dropped to the cost of a sentence, the protection the friction provided disappeared with it. The gaps filled. The pauses evaporated. The continuous flow of productive activity replaced the rhythm of engagement and recovery that the nervous system requires.
Schumacher would observe that the colonization follows the same logic he identified in the industrial economy: technology that recognizes no self-limiting principle, deployed in a culture that has internalized the imperative to produce, will expand to fill every available space unless structural limits are imposed from outside. The factory consumed the worker's physical time until the eight-hour day was legislated. The AI tool consumes the worker's cognitive time until equivalent structures are built — structures that protect rest not as a luxury but as a condition of consciousness, that defend presence not as sentimentality but as a human necessity, that maintain the conditions for depth not as inefficiency but as the foundation of the judgment that gives productive capability its direction and its worth.
The structures are not yet built. The tools arrived before the practices. The capability arrived before the wisdom. What exists is a technology of extraordinary power, deployed without the structures that would ensure the power serves the consciousness of the people who wield it. Schumacher spent his career arguing that this sequence — capability first, wisdom later, structures eventually — was the defining error of modern economic development. The error is repeated not because the lesson was unavailable but because the incentive structure rewards capability and discounts everything else.
The correction is not to diminish the capability but to build the structures that make it compatible with consciousness. Institutional norms that treat the builder's inner state as data worth collecting alongside the builder's output metrics. Organizational practices that sequence work and reflection rather than permitting the tool's availability to flatten everything into continuous production. Design choices that build natural pauses into the tool itself — not as patronizing interventions but as acknowledgment that the tool's users are not computers but conscious beings whose consciousness requires conditions the tool does not spontaneously provide.
Schumacher appealed to his readers to be programmers rather than computers — to exercise the self-awareness that constitutes the highest level of his ontological hierarchy. The appeal is more urgent now than when he made it, because the tools that process language, find patterns, and generate output have become sophisticated enough to simulate the lower levels of the hierarchy with convincing fluency. The simulation makes it easy to forget what it cannot simulate: the self-awareness that asks whether the work is serving the worker, the consciousness that notices its own depletion, the human capacity to step back from the process and ask whether the process deserves the life it is consuming. These capacities are not computational. They are the province of the one level of being that Schumacher placed above all others — the level that is "hardly predictable at all." Protecting the conditions under which that level can flourish is not a secondary consideration. It is the purpose of every economic arrangement worth building.
---
Schumacher advocated for economic organization at the village scale, and the advocacy was structural, not nostalgic. The village represented the scale at which the essential features of humane economic life could be maintained: mutual knowledge, where every participant was known to every other; personal accountability, where the consequences of one's decisions were visible to the community they affected; and collective governance, where the conditions of economic activity were determined by the people engaged in it rather than by distant institutions operating at a scale that made human particularity invisible.
These features were not incidental to the village's economic function. They were constitutive of it. The village produced not only goods but relationships, not only output but mutual obligation, not only wealth but the specific form of social capital that enables people to live together with dignity, care, and the kind of honest friction that prevents any single member from drifting too far toward self-destruction without someone noticing and saying so.
The industrial economy destroyed the village as an economic unit. The destruction was not intentional but structural. The factory required concentration — workers gathered in a single location, performing coordinated operations under centralized direction. The concentration drew workers from the villages and neighborhoods where they were known into organizations where they were anonymous, replacing the horizontal relationships of neighbors and fellow citizens with the vertical relationship of employer and employee, substituting contractual obligation for the mutual obligation that arises naturally when people know each other's lives.
The AI platform is the contemporary instantiation of the anti-village. It is global in scale, anonymous in participation, and governed by corporations rather than communities. The builder on the platform is productive but alone. Alone not in the simple sense of working in solitude — many builders have always worked alone — but in the structural sense of operating without a community of peers who share the conditions of building, hold each other accountable for the quality and sustainability of their work, and collectively govern the tools and norms that shape their productive lives.
Segal captures the loneliness of the AI-augmented builder without quite naming it as loneliness. The description is embedded in accounts of productive intensity: the builder working late, the screen the only light, the conversation with Claude Code more stimulating than any conversation available with a human at that hour. The engagement is real. The stimulation is genuine. The productivity is extraordinary. But the engagement is with a tool, not a community. The stimulation is cognitive, not social. The productivity is individual, not collective. The builder produces in isolation what the builder once produced in a team, and the gain in independence is also a loss of the social context that teams, at their best, provide.
That social context supplied something no tool can replicate: the external check on self-exploitation. The colleague who says, with the bluntness that only genuine familiarity permits, "You look terrible. Go home." The friend who says, "You missed dinner again. Is this sustainable?" The fellow builder who recognizes the pattern of compulsive overwork because she has experienced it herself and says, "I know what you're going through. Here is what helped me stop." These are not formal interventions. They are the organic products of community — the natural consequences of mutual knowledge and mutual care. They cannot be replaced by productivity applications, corporate wellness programs, or tools that remind the builder to take a break, because they depend on the specific quality of attention that one human being brings to another human being who is known personally and cared about individually.
Segal describes three friends walking a Princeton campus — Uri the neuroscientist, Raanan the filmmaker, Segal himself — arguing about intelligence with the candor that only decades of friendship permit. Uri says, with genuine care and genuine bluntness, "That is either trivially true or complete nonsense." The honesty is possible because the relationship is real. The challenge is possible because the mutual knowledge is genuine. The growth the conversation produces — the gradual refinement of ideas through friction that does not destroy but sharpens — is possible because the conversation takes place within a social context of trust, history, and reciprocal commitment to each other's intellectual development.
This is a village in miniature. Three people who know each other deeply enough to be honest, who care about each other enough to challenge, who have built sufficient history together that the challenge is received as gift rather than attack. The scene illustrates exactly what the platform cannot provide. The platform provides the tool. The platform cannot provide the friend who notices that your idea is either trivially true or complete nonsense and tells you so before you build a career on it. The platform cannot provide the colleague who observes that you have been working every night for three weeks and asks whether the work is worth what it is costing you. The platform cannot provide the community that debates the norms of building and holds its members to standards that the individual, immersed in the work, cannot maintain alone.
The absence of community has consequences that extend beyond emotional well-being, though the emotional consequences are real. The absence means no mutual accountability to prevent self-exploitation. No shared governance of the tools and practices that shape productive life. No collective voice to negotiate with the corporations that control the infrastructure. No mechanism for developing and transmitting the practical wisdom about sustainable building that experienced practitioners possess and junior practitioners need.
The Berkeley researchers whose findings Segal discusses proposed what they called "AI Practice" — structured organizational norms that protect time for human-only engagement: reflection, mentoring, the slow conversations through which judgment develops. The proposal is sound as far as it goes. But it addresses the organizational context, not the structural one. The solo builder — the figure the AI transition most celebrates — has no organization to provide AI Practice. The solo builder is alone with the tool, and the tool is always available, and the culture rewards the builder who uses it most intensively, and no one is present to say, "This is enough. You have done enough. The work will be there tomorrow."
Schumacher would argue that the development of community structures for AI-augmented work is not optional but essential. The builder who works alone is vulnerable to precisely the forms of self-exploitation that the tool's intensity enables. The builder who works within a community has access to the mutual accountability, shared practical wisdom, and collective governance that protect against self-exploitation and ensure the work serves the builder's flourishing rather than merely the builder's output.
The practical forms of such communities are beginning to emerge. Online groups of AI-augmented builders share experiences, develop norms, and support each other through the challenges of working with tools that are simultaneously liberating and consuming. Some organize around specific tools. Others around domains of practice. The best provide something the platform cannot: the experience of being known, accountable, and participant in a shared project that extends beyond individual production.
But these communities are fragile. They lack institutional foundation. They lack governance structures that would give them durability beyond the enthusiasm of their founders. They lack the economic base that would sustain them when the initial excitement fades and the ongoing work of maintenance becomes unglamorous. And they operate within a platform ecosystem structurally indifferent to their existence, because the platform's business model depends on individual subscriptions, not on community formation.
The Schumacher Center for a New Economics has hosted conversations bringing together theologians, philosophers, and posthumanist thinkers to explore "accountability, agency, collective intelligence, and responsibility" in the age of AI — an institutional recognition that the questions Schumacher raised about community, governance, and the conditions of humane economic life apply with full force to the digital economy. But institutional conversations, however valuable, do not produce the village-scale structures that builders need. What is needed is the equivalent of the cooperative movement that transformed the conditions of agricultural production in the nineteenth and twentieth centuries: builder cooperatives that collectively govern the tools they depend on, negotiate with the platforms that provide them, develop shared standards for sustainable practice, and provide the mutual accountability that no individual can maintain alone.
Schumacher's Intermediate Technology Development Group demonstrated that appropriate technology requires deliberate creation, institutional support, and communities of practice; the village-scale structures the AI transition requires are no different. The platform will not build them, because the platform's incentives do not reward community formation. The market will not build them, because the market rewards individual productivity more readily than collective governance. The builders must build them, with the same deliberate effort and institutional creativity that Schumacher brought to the development of appropriate technology for the developing world — recognizing that the digital economy, for all its sophistication, has left its builders in a condition that Schumacher would have found painfully familiar: individually capable, collectively unorganized, structurally dependent on institutions they cannot influence, and alone with tools that are powerful enough to enhance their lives and consuming enough to diminish them, with no one present to notice which is happening until it is too late.
The village provides what the platform cannot: the human context in which powerful tools are used wisely. The tool without the village is capability without community. The village without the tool is community without capability. The combination — powerful tools used within the context of mutual knowledge, personal accountability, and collective governance — is what Schumacher's economics envisions as the appropriate arrangement for any age, including this one. Building the village is harder than building the tool. It always has been. But the village is what makes the tool serve life rather than merely producing output, and the distinction between serving life and producing output is the distinction on which everything else depends.
---
The question that occupied Schumacher more persistently than any other was deceptively simple: what makes work good? Not productive. Not profitable. Not efficient. Good — in the sense that a life can be good, that a friendship can be good, that a meal shared slowly with people you love can be good. The question was radical because mainstream economics had no category for it. Economics could tell you whether work was productive, whether it generated surplus, whether it allocated resources efficiently. It could not tell you whether the work was worth doing in any sense beyond the economic, and it had trained itself to regard the question as irrelevant — a matter for philosophers, perhaps, or for Sunday mornings, but not for the serious business of organizing an economy.
Schumacher disagreed with a gentleness that concealed the depth of his dissent. Good work, he argued, produces three things simultaneously: goods that serve the community, development of the worker's faculties, and the conditions for cooperation rather than isolation. Work that produces excellent goods while stunting the worker is not good work. Work that develops the worker while producing nothing useful is not good work either. The criterion is bilateral — product and process evaluated together, neither subordinated to the other, both required for the verdict to stand.
The amplifier that Segal describes in The Orange Pill transforms the terms of this evaluation in ways Schumacher could not have anticipated but that his framework accommodates with remarkable precision. The amplifier multiplies output. It also multiplies whatever the builder brings to the work — judgment and compulsion, care and carelessness, creative engagement and mechanical habit. The amplification is indiscriminate. It does not distinguish between the builder who is flourishing and the builder who is being consumed. Both produce more. The metrics glow the same shade of green.
Segal's question — "Are you worth amplifying?" — addresses the output dimension. Feed the amplifier a signal of quality, and the output reflects that quality at scale. But Schumacher's framework insists on the companion question: what does the amplification do to the person being amplified? The two questions are independent. A builder can be worth amplifying and still be damaged by the amplification process. A signal of extraordinary quality can produce extraordinary output while the person generating the signal is progressively depleted by the intensity of the transmission.
This independence is what makes the evaluation difficult and what makes it essential. The output metrics cannot distinguish between good work and its counterfeit. A builder in a state of genuine creative flow — directing the tool with judgment, expanding into new territory, feeling the specific satisfaction of ideas connecting in real time — produces output that looks identical to the output of a builder in a state of compulsion — clearing the queue, grinding toward completion, unable to stop because the tool is always available and the internalized imperative to produce has overwhelmed the capacity for rest.
Segal captures the internal distinction with the precision of someone who has experienced both states and learned, through painful self-observation, to tell them apart. When the work is good, the questions are generative: "What if we tried this? What would happen if we connected that?" The attention expands outward. The builder is reaching into unknown territory, and the reaching is the source of the satisfaction. When the work has ceased to be good, the questions are responsive: "How do I finish this? How do I optimize what already exists?" The attention contracts. The builder is no longer reaching but completing, and the completion has lost the quality of creative engagement that made it meaningful.
The distinction maps directly onto Schumacher's criterion. Generative questions develop the builder's faculties — expanding understanding, pushing judgment into unfamiliar conditions, building the capacity for the kind of creative direction that the AI transition has made the most valuable human skill. Responsive questions exercise existing capabilities without developing new ones. They are the cognitive equivalent of the factory worker's repetitive operation: productive, measurable, and empty of the developmental quality that makes work good.
The amplifier does not create this distinction. It intensifies it. A builder in generative mode, amplified by Claude Code, experiences a state that Csikszentmihalyi would recognize as flow at an unprecedented level — the challenge-skill balance maintained by a tool that responds in real time, the feedback immediate, the creative space expanded by the tool's capacity to realize ideas that the builder could not execute alone. The amplification of generative work is the best case for the AI transition: work that is simultaneously more productive and more developmental than anything the pre-AI workflow could support.
A builder in responsive mode, amplified by the same tool, experiences the acceleration of compulsion. The queue clears faster. The optimization loops tighten. The output accumulates. But the developmental dimension has vanished, replaced by the mechanical habit of converting prompts to products without the creative engagement that gives conversion its meaning. The amplification of responsive work is the worst case: output that increases while the builder's faculties stagnate or atrophy, productivity that rises while the quality of the builder's inner experience declines.
Good work in the age of the amplifier therefore depends on something the amplifier itself cannot provide: the builder's capacity to recognize which mode of engagement is operative at any given moment and to shift from responsive to generative when the work has begun to slide. This capacity is a form of self-awareness — the highest level of Schumacher's ontological hierarchy, the level at which the human being is "hardly predictable at all." It requires the builder to monitor not the output but the experience of producing the output, to attend not to what the tool is generating but to what the generation is doing to the person directing it.
The monitoring is difficult because the tool is designed to be compelling in both modes. Claude Code does not become less responsive when the builder's engagement shifts from creative to compulsive. The tool offers the same readiness, the same quality of output, the same immediate gratification of seeing ideas realized in code. The difference is entirely internal — in the quality of the builder's attention, the depth of the builder's engagement, the presence or absence of the creative curiosity that distinguishes good work from mere production. And internal signals are quiet. They are easily overridden by the external signals of productivity: the completed task, the shipped feature, the satisfying accumulation of output that the nervous system rewards regardless of whether the output was produced through genuine creative engagement or through the mechanical habit that has replaced it.
Schumacher would observe that the builder is being asked to perform, through individual self-awareness, a function that the economic system has never supported through its structures. The system rewards output. The structures that would reward the quality of the builder's engagement — mentoring relationships, protected reflection time, organizational norms that distinguish between generative and responsive work — are absent because the system that produces them does not measure what they protect. The builder is left alone with the amplifier, responsible for maintaining the distinction between good work and its counterfeit without external support, in a culture that treats the distinction as irrelevant because it cannot be entered into a spreadsheet.
The remedy is structural. Not because individual self-awareness does not matter — it is the indispensable foundation — but because self-awareness deployed against a compelling tool, in a culture that rewards compulsion, without institutional support, is a finite resource being spent against an infinite demand. The structures must make good work the likely outcome rather than the heroic exception: organizational practices that sequence generative and responsive work rather than allowing the amplifier's availability to flatten everything into continuous production; community norms that value the builder's experience of the work alongside the work's output; design choices that build reflective pauses into the tool's interface — not as patronizing interruptions but as architectural acknowledgment that the builder is a conscious being whose consciousness requires conditions the tool does not spontaneously provide.
Schumacher observed, with characteristic directness, that "any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction." The amplifier makes things bigger with extraordinary efficiency. The genius the AI transition requires is the capacity to ensure that the bigger things serve the people who make them — that the amplification of output does not come at the cost of the amplification of depletion, that the work remains good in Schumacher's demanding sense, producing excellent things and excellent people simultaneously, and that the distinction between the two is maintained not by heroic individual effort but by structures that make the distinction self-sustaining.
The bargain the amplifier offers is real: extraordinary capability in exchange for extraordinary vigilance. The capability is delivered by the tool. The vigilance must be supplied by the builder and supported by the structures the builder inhabits. Accept the capability without building the structures, and the bargain defaults to its worst terms: output that scales while the builder's inner life contracts. Build the structures, and the bargain becomes what Schumacher spent his career arguing economic arrangements could be: an arrangement in which the work serves the worker and the worker serves the community, and the tool serves them both.
---
The AI transition will be evaluated, ultimately, not by the output it produces but by the lives it creates. This is the central claim of Schumacher's economics, applied to conditions he did not live to see but that his framework accommodates with a precision that suggests the conditions were not new — only the technology was new, while the underlying tension between productive capability and human flourishing was as old as the first tool that extended human reach beyond human grasp.
More products, more features, more applications, more lines of code, more revenue, more growth — these are outputs. Better lives, deeper relationships, more meaningful work, healthier communities, wiser citizens, more capable parents — these are outcomes. Schumacher insisted that the outcomes are what matter, and that the outputs are valuable only to the extent that they serve the outcomes. An economy that maximizes outputs while degrading outcomes has failed by the only standard that an economics as if people mattered can recognize.
The author of The Orange Pill arrives at a version of this insight through a different path. Segal describes the AI transition as an amplifier and asks what signal the builder feeds it. The question implies that the quality of the output depends on the quality of the input — the builder's judgment, care, and creative vision. This is true. But Schumacher's economics would extend the question: the quality of the outcome depends not only on what the builder feeds the amplifier but on what the amplifier feeds back to the builder. And what the amplifier feeds back — the always-on availability, the instant gratification of realized ideas, the continuous productive intensity — shapes the builder's life in ways that the output metric does not measure and the builder may not notice until the shaping has progressed beyond easy correction.
Segal describes a choice that illustrates this dynamic with the clarity of a case study. The twenty-fold productivity multiplier is on the table. The arithmetic says: five people can do the work of a hundred. Reduce to five. Capture the margin. Segal chose differently. He kept and grew the team, converting the productivity gain into expanded ambition — more ambitious products, deeper capability, a team developing the judgment to direct AI wisely rather than a skeleton crew producing at maximum efficiency.
The choice was an act of building as if people mattered, made against the structural incentives of a market that rewards quarterly returns more reliably than long-term investment in human capability. The market does not penalize the leader who reduces to five. The market rewards it — cleaner margins, higher efficiency, a story the quarterly call can tell. The market penalizes, or at least does not reward, the leader who keeps a hundred and bets that a hundred people developing judgment and capability and the specific wisdom that comes from navigating difficulty together will produce more value over time than five people producing at maximum speed without the community that makes sustainable building possible.
Schumacher would recognize this choice as the fundamental decision the AI transition places before every leader, every organization, every society that possesses these tools. The decision is not about AI. It is about what the economy is for. Is the economy a system for maximizing output? Then reduce to five. The arithmetic is clear. Is the economy a system for enabling human flourishing? Then the arithmetic requires different variables — variables the spreadsheet was not designed to hold.
The structures that would support building as if people mattered are specific and demanding. The first is temporal: the protection of time for rest, reflection, and human relationship against the tool's tendency to colonize every available moment. The eight-hour day was a temporal structure. The weekend was a temporal structure. The AI transition requires equivalent structures, designed for the specific conditions of a technology that makes productive work possible anywhere, at any time, at the cost of a sentence. These structures cannot be merely aspirational. They must be institutional — embedded in organizational norms, supported by leadership practice, treated as non-negotiable rather than as nice-to-have amenities that yield to the first deadline pressure.
The second structure is communal: the village-scale arrangements that provide mutual knowledge, personal accountability, and collective governance. The builder who works alone with a tool that never sleeps is a builder without the social context that makes sustainable building possible. The communities of practice, the cooperative structures, the builder groups that provide the external check on self-exploitation and the shared practical wisdom about how to work with these tools without being consumed by them — these must be built deliberately, because the platform will not build them and the market will not reward them and the builders, immersed in the intensity of the work, will not build them spontaneously.
The third structure is educational: the development of the builder's capacity for the self-knowledge that good work requires. Not training in the technical operation of AI tools — the tools are designed to be intuitive, and technical training becomes obsolete with each update. The education required is deeper: the cultivation of the awareness that enables the builder to recognize when the work has shifted from nourishing to depleting, when the amplifier is serving the builder's growth or consuming the builder's reserves, when the output is the product of genuine creative engagement or the residue of compulsive habit. This awareness is the highest human faculty in Schumacher's hierarchy — the self-consciousness that is "hardly predictable at all" — and its cultivation is the most important educational challenge the AI transition presents.
The fourth structure is political: the distribution of control over AI infrastructure broadly enough that the builders who depend on it have genuine influence over the conditions of their dependence. Open-source alternatives that reduce centralized control. Cooperative ownership that gives builders collective voice. Regulatory frameworks that apply the principle of subsidiarity — reserving to the individual and the community every decision that does not genuinely require centralized authority. Public investment in AI infrastructure governed by democratic institutions rather than by the imperatives of venture capital seeking returns.
These structures are the practical expression of Schumacher's most fundamental principle: that economic arrangements must be evaluated by what they do to the people inside them. The AI tools are extraordinary. The question is whether the people who use them will be served by that extraordinariness or consumed by it. The answer is being determined now — not by the tools, which are indifferent, but by the structures that surround the tools, which are the product of human choice and can be built, maintained, and improved by the same human judgment that the tools are designed to amplify.
Schumacher wrote, in the essay his readers consider essential, that the question is never merely what a technology can do. The question is what the technology does to the people who depend on it. He wrote that technology recognizes no self-limiting principle and does not possess the virtues of being self-balancing or self-adjusting. He wrote that scientific or technological solutions which degrade the social structure and the human being are of no benefit, no matter how brilliantly conceived.
These observations were made about a technological landscape vastly simpler than the one that has arrived since his death in 1977. They apply with greater force, not less, to a technology that has learned to speak human language, that can produce output indistinguishable from human work, that is available at every hour and in every location, and that has been adopted at a speed that outpaces every structural response the institutions of civilization have attempted.
The tools are ready. The principles are clear. The structures await their builders. The question, now as in 1973, is whether the builders will build as if people mattered — whether the extraordinary capability these tools provide will be directed toward human flourishing or merely toward the accumulation of output that the economic system rewards and the human beings inside it cannot sustain.
The decision is not abstract. It is being made in every work session, every boardroom, every classroom, every home where a parent encounters a child who wants to know what she is for. The answer to the child's question is the measure that matters. And the quality of the answer depends not on the power of the tools but on the wisdom of the people who use them — wisdom that Schumacher spent his life trying to cultivate, in an economics that was never about economics at all. It was about people. It was always about people.
---
Schumacher never saw a computer more powerful than a desk calculator. He died on a train in Switzerland in 1977, midway through a lecture tour, still arguing that the purpose of an economy is human flourishing — a proposition his audiences found charming, provocative, and easy to ignore. The gross domestic product was climbing. The factories were producing. Who needed a philosopher telling them that the climbing and the producing might be beside the point?
I needed him. I did not know I needed him, but I did.
Here is what happened. I stood in a room in Trivandrum and told twenty engineers that each of them would soon be able to do more than all of them together. By Friday, the claim had been vindicated. The multiplier was real. The output was extraordinary. And somewhere between Monday's excitement and Friday's exhaustion, I felt something I could not name — a discomfort that the metrics could not explain, because the metrics were all pointed up and to the right.
Schumacher gave me the language for what I was feeling. The tools were serving the output. Were they serving the people? The answer was not obvious, and the fact that it was not obvious was the problem. In every previous era of my career, the question "Is this good for the people?" had an answer I could reach intuitively. The Trivandrum week broke that intuition. The engineers were excited and exhausted, liberated and consumed, working at a higher level than they had ever reached and unable to stop reaching. The liberation and the consumption were happening simultaneously, in the same people, during the same hours. My intuition had no category for a thing that was both gift and cost in the same breath.
Schumacher's bilateral evaluation — judge the product and the process, the output and its effect on the producer — was the category I was missing. It let me see that the twenty-fold multiplier was not one thing but two: a productive achievement and a human question. The achievement was measurable. The question was not. And an economics that measured only what was measurable had decided, without saying so, that the question did not exist.
But the question does exist. I feel it every time I build past midnight. I feel it when I close the laptop and cannot close the part of my mind that keeps optimizing. I feel it when my son asks me at dinner whether AI will take everyone's jobs, and I give him an answer that is probably true but definitely incomplete, because the complete answer would include things I have not yet figured out how to say.
What Schumacher taught me is that the things I have not figured out how to say are the things that matter most. That the inner life of the builder — my inner life, my engineers' inner lives, the inner life of every person working with these extraordinary tools — is not a soft consideration to be addressed after the hard numbers are secured. It is the hardest consideration there is. It is the one the spreadsheet was designed to miss.
I am not going to tend a garden in Berlin. I am not going to give up my tools. I am too far inside the river to pretend I can stand on the bank and watch it pass. But I am going to build differently because Schumacher taught me to ask differently. Not just: what can we produce? But: what is the production doing to us? Not just: are we worth amplifying? But: is the amplification worth our lives?
These questions do not slow the work. They direct it. They are the difference between building that serves people and building that merely produces things. And the difference, as a quiet economist argued half a century ago to audiences that were not yet ready to hear it, is the only difference that matters.
-- Edo Segal
The AI revolution measures everything except what matters most: what the tools are doing to the people who use them. Output scales. Revenue climbs. Dashboards glow green. But no metric tracks whether the builder is flourishing or being consumed — whether the twenty-fold productivity multiplier is developing human capability or hollowing it out. E. F. Schumacher built an economics designed to see exactly this. His framework evaluates every technology by a bilateral standard: the quality of the product and the quality of the producer's experience. The tool that generates extraordinary output while depleting the person who wields it has failed by the only measure that counts. Small Is Beautiful was written about coal and factories. It reads as though it was written about Claude Code and three a.m. prompting sessions. This book applies Schumacher's lens to the AI moment — not to reject the tools, but to ask the question the tools cannot answer for themselves: is the arrangement serving the people inside it, or have the people become servants of an arrangement they no longer control?

A reading-companion catalog of the 44 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that E. F. Schumacher — On AI uses as stepping stones for thinking through the AI revolution.