John Kenneth Galbraith — On AI
Contents
Cover
Foreword
About
Chapter 1: The Conventional Wisdom About AI: A Familiar Comfort
Chapter 2: The Technostructure and the New Priesthood
Chapter 3: The Planning System Meets the Language Model
Chapter 4: Countervailing Power in the Age of Amplification
Chapter 5: The Dependence Effect and the Builder's Compulsion
Chapter 6: The Affluent Society and the Anxious Builder
Chapter 7: Private Opulence, Public Squalor, and the AI Transition
Chapter 8: The Revised Industrial State: From Manufacturing to Inference
Chapter 9: The Myth of Sovereignty in the Attention Economy
Chapter 10: Are We Worth Amplifying, or Merely Worth Exploiting?
Epilogue
Back Cover

John Kenneth Galbraith

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by John Kenneth Galbraith. It is an attempt by Opus 4.6 to simulate John Kenneth Galbraith's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The question I never thought to ask was about the subscription.

Not what it cost. Not what it enabled. The question underneath: Who set the price? Who decided what the free tier would include and what it would withhold? Who determined which capabilities I could access and which required the next tier up — and the tier after that?

I had spent months thinking about what AI amplifies. I had not spent a single hour thinking about who owns the amplifier.

That gap in my thinking is precisely the kind of gap that persists because closing it is uncomfortable. The conventional wisdom about AI — that it democratizes, that it empowers, that it flattens hierarchies — is not wrong in its facts. The developer in Lagos really can build prototypes that were impossible five years ago. My engineers in Trivandrum really did achieve a twenty-fold productivity multiplier. The floor really has risen.

But Galbraith spent his career showing that the most dangerous beliefs are the ones that are partly true. Partly true is how a comfortable fiction survives scrutiny. You point to the real gains, the real democratization, the real expansion of capability — and the pointing becomes a reason not to look at the structure underneath. Who controls the infrastructure. Who captures the compounding value. Who writes the terms of service that function as private law, accepted with a click, governing the conditions under which millions of people exercise their supposedly sovereign creative will.

I did not come to Galbraith because I wanted to. I came because my own framework had a hole in it, and the hole was shaped exactly like the questions he spent fifty years asking. *The Orange Pill* puts the burden on the individual: Are you worth amplifying? Galbraith does not dispute the importance of that question. He adds the one I missed: Is the system through which you are amplified accountable to anyone other than itself?

That second question is not comfortable. It implicates the companies I admire, the tools I depend on, the subscription I pay without thinking. It asks whether the most spectacular expansion of private capability in human history is being matched by the public institutions — the schools, the retraining programs, the regulatory frameworks — that would make the expansion broadly beneficial rather than narrowly captured.

The answer, right now, is no. And the structural reasons for that no are exactly the ones Galbraith identified seventy years ago.

This book is another lens on the tower. It does not replace the climb. It shows you what the view looks like when you stop admiring the horizon and start examining who built the stairs.

— Edo Segal · Opus 4.6

About John Kenneth Galbraith

1908–2006

John Kenneth Galbraith (1908–2006) was a Canadian-American economist, public intellectual, and diplomat whose work challenged the foundational assumptions of orthodox economic theory across a career spanning more than half a century. Born in Iona Station, Ontario, he served as U.S. Ambassador to India under President Kennedy and held a long professorship at Harvard University. His major works include *The Affluent Society* (1958), which introduced the concept of "the conventional wisdom" and argued that postwar America suffered from private opulence alongside public squalor; *The New Industrial State* (1967), which described the "technostructure" — the collective of technical specialists who actually direct large corporations — and the "planning system" through which major firms shape their markets rather than respond to them; and *American Capitalism* (1952), which advanced the theory of "countervailing power" as the primary check on concentrated economic force. Galbraith also developed the concept of the "dependence effect," arguing that modern producers create the consumer desires they then satisfy, undermining the orthodox assumption of consumer sovereignty. A prolific writer known for prose that was unusually elegant for an economist, Galbraith authored over forty books, advised multiple presidents, and remained one of the most publicly visible economists of the twentieth century, though his structural critiques were often resisted by the economics mainstream precisely because they questioned premises the profession preferred not to examine.

Chapter 1: The Conventional Wisdom About AI: A Familiar Comfort

The most consequential feature of any widely held belief is not whether it is true but whether it is comfortable. Beliefs persist in public life not because they have survived rigorous examination but because they have survived something far more demanding: the test of social acceptability. A belief that makes powerful people uneasy will be scrutinized with a ferocity that a belief flattering to the powerful will never face. This asymmetry is not a conspiracy. It is a tendency, structural and largely unconscious, and it operates with particular efficiency in periods of technological upheaval, when the need for reassurance is acute and the supply of genuine understanding is scarce.

John Kenneth Galbraith spent his career identifying and dismantling such beliefs. The phrase he coined for them — "the conventional wisdom" — entered the language so completely that most people who use it have forgotten it was coined at all, which is itself a small demonstration of the phenomenon it describes. The conventional wisdom is not the same as common sense. Common sense is the residue of experience. The conventional wisdom is the residue of social approval. It is what everyone believes not because everyone has tested it but because believing it carries no professional risk, challenges no funding source, and disturbs no dinner party.

The conventional wisdom about artificial intelligence, circa 2025 and 2026, runs approximately as follows: AI will democratize capability. It will flatten hierarchies. It will empower individuals. It will create new markets, new jobs, new categories of human flourishing. The barriers between imagination and execution will collapse. Anyone with an idea and a subscription will be able to build. The future belongs to the creative, the curious, the bold.

This is a comforting script. It is also, in its broad outlines, not entirely wrong — a feature that makes it considerably more dangerous than if it were simply false. *The Orange Pill*, Edo Segal's account of the moment AI crossed a threshold in the winter of 2025, describes this democratization with genuine specificity: a developer in Lagos gaining access to the same coding leverage as an engineer at Google; an engineer in Trivandrum building features she had never been trained to build; a solo entrepreneur shipping a revenue-generating product without writing a line of code by hand. The capability expansion is real. The floor has risen. These are facts, not aspirations, and dismissing them as hype would be as intellectually lazy as accepting them as the whole story.

Galbraith would not have dismissed them. His method was subtler than that. His method was to accept the comfortable truth, acknowledge its evidence, and then ask the question the comfortable truth was designed to make unnecessary: Who controls the infrastructure upon which this democratization depends?

The developer in Lagos can build a prototype with Claude Code. She cannot build Claude Code. She cannot train the model that makes her prototype possible. She cannot afford the billions of dollars in compute required to produce the system she accesses through a subscription. She cannot influence the training data curation decisions that determine what the model knows and does not know, the alignment choices that determine what the model will and will not do, the pricing decisions that determine whether she can continue to afford access next quarter. She operates, in Galbraith's terminology, within the market system — the system of small enterprises and individuals who respond to conditions they did not create and cannot alter. The companies that built Claude Code operate within the planning system — the system of organizations large enough to shape their own environments, to set prices rather than accept them, to create demand rather than respond to it, to manage the conditions under which everyone else operates.

The distance between the tool user and the tool maker is the distance between these two systems. The conventional wisdom prefers not to measure it.

Galbraith identified this pattern across every major technology of the twentieth century. Television was supposed to democratize information. It did, briefly. Then the economics of broadcast concentrated control in three networks whose programming decisions shaped the political and cultural environment of an entire nation. The personal computer was supposed to empower individuals. It did, briefly. Then the economics of operating systems and enterprise software concentrated control in a handful of firms — Microsoft, Oracle, SAP — whose platform decisions determined what individuals could do with their empowerment. The internet was supposed to flatten hierarchies. It did, briefly. Then the economics of search, social networking, and cloud infrastructure concentrated control in five companies whose collective market capitalization exceeded the GDP of most nations.

In each case, the initial democratization was real. In each case, the long-term trajectory was concentration. And in each case, the conventional wisdom focused on the democratization — the exciting part, the part that made for good magazine covers and optimistic keynote speeches — and averted its gaze from the concentration, which was the part that actually determined outcomes.

The pattern is not a conspiracy. It is a structural tendency that operates through entirely legal and often rational mechanisms. Building the infrastructure that enables democratization requires enormous capital investment. Enormous capital investment requires enormous returns. Enormous returns require control over the infrastructure that generates them. The democratization of use and the concentration of ownership are not contradictions. They are complements. The more people who use the infrastructure, the more valuable the infrastructure becomes, and the more valuable the infrastructure becomes, the more decisively its owners can set the terms of access.

This is the economics of the AI industry described with Galbraithian precision. OpenAI, Anthropic, Google DeepMind, Meta AI — these are not startups in any meaningful sense. They are planning-system organizations that require billions of dollars in compute, exclusive access to vast training datasets, and teams of researchers whose specialized knowledge is so concentrated that perhaps two thousand people worldwide understand large language model architecture at a level sufficient for governance decisions. The barriers to entry are not merely high. They are, for practical purposes, insuperable. Building a frontier model from scratch requires resources available to perhaps ten organizations on Earth. The market for AI capability is not a competitive market in any sense that would be recognizable to the textbook. It is an oligopoly operating under the rhetorical cover of democratization.

None of this is hidden. The capital requirements are publicly reported. The concentration is visible to anyone who looks. But the conventional wisdom does not look, because looking would disturb the comforting narrative, and the comforting narrative serves too many interests to be disturbed by mere evidence.

Consider the language. "Democratization" is the word that appears most frequently in discussions of AI's impact on work, education, and creative production. The word carries enormous moral weight. Democracy is good. Democratization is progress. To question democratization is to position oneself against progress, which is a socially costly position, and the conventional wisdom, as Galbraith observed, is maintained precisely by the social cost of challenging it.

But democratization of access is not democratization of power. When a medieval peasant gained access to a printed Bible, the access was real. The power to determine which Bible was printed, in what language, with what annotations, and at what price remained with the printing houses and, behind them, with the institutions that controlled the printing houses. Gutenberg's press democratized reading. It did not democratize publishing. The distinction is not trivial. It is the distinction between using a system and governing it.

*The Orange Pill* acknowledges this, to its credit. Segal writes that democratization is "real but partial," that access requires connectivity and hardware and English-language fluency and that billions of people lack these prerequisites. The honesty is admirable. But the structural analysis stops short of the Galbraithian conclusion: that the partiality is not a temporary limitation to be solved by better infrastructure and cheaper devices. The partiality is a feature of the system's architecture. The planning system does not accidentally exclude most of humanity from the governance of AI. It is structured to do so, because inclusion in governance would dilute the control that makes the system profitable.

Galbraith would have recognized the AI industry's self-presentation immediately. It is the same self-presentation he documented in the automobile industry of the 1950s, the defense industry of the 1960s, and the financial industry of the 1990s: the private exercise of public power, dressed in the language of consumer benefit. General Motors did not say it was setting the terms of American transportation. It said it was giving consumers what they wanted. Goldman Sachs did not say it was shaping the financial environment. It said it was serving the market. Anthropic does not say it is determining the terms on which human beings interact with artificial intelligence. It says it is building tools that empower users.

The language of empowerment is the planning system's most effective instrument. It converts the exercise of power into a gift. It makes the beneficiary feel grateful rather than governed. And it ensures that the conventional wisdom — the belief that AI democratizes, that the user is sovereign, that the individual is empowered — persists unchallenged, because challenging it would require acknowledging that the relationship between the user and the platform is not a relationship between equals making a voluntary exchange. It is a relationship between a planning-system organization that sets the terms and a market-system participant who accepts them, and the power differential is as vast as the one between General Motors and the individual car buyer in 1958, and considerably less visible.

There is a passage in Galbraith's *The Affluent Society* that applies with uncomfortable precision to the current moment. He wrote that the conventional wisdom is "not combated with new arguments or better evidence but by the course of events." The beliefs that sustained the pre-Depression economy were not argued away by superior economics. They were demolished by the Depression itself. The beliefs that sustained the dot-com bubble were not refuted by careful analysis. They were refuted by the crash. The conventional wisdom about AI will not be refuted by books like this one. It will be refuted, if it is refuted at all, by the course of events — by the moment when the concentration of power produces consequences visible enough to penetrate the comfortable narrative.

The question is how much damage will be done before that moment arrives.

Galbraith was neither an optimist nor a pessimist about technology. He was a structuralist. His interest was not in whether a technology was good or bad — categories he would have considered analytically useless — but in how the technology interacted with existing structures of power. A technology deployed within a competitive market distributes its benefits broadly, because competition forces producers to share the gains with consumers. A technology deployed within a planning system concentrates its benefits narrowly, because the planning system's institutional structure is designed to retain the gains for its participants.

AI is being deployed within a planning system. The conventional wisdom says otherwise. The conventional wisdom is comfortable. The question Galbraith would ask — the question this book exists to ask — is whether we can afford the comfort.

---

Chapter 2: The Technostructure and the New Priesthood

In 1967, Galbraith published *The New Industrial State* and introduced a concept that the economics profession found deeply irritating and the corporate world found deeply uncomfortable, which was how Galbraith knew he was onto something. The concept was the technostructure: his term for the group of specialists, managers, engineers, and technical experts within large corporations whose collective knowledge actually directs the enterprise's decisions. Not the CEO, whose function was increasingly ceremonial. Not the board of directors, whose oversight was increasingly nominal. Not the shareholders, whose sovereignty was increasingly fictional. The technostructure — the people who actually knew how the systems worked, how the products were made, how the supply chains functioned, how the regulatory environment constrained and enabled — was the real locus of corporate power.

The concept was irritating because it violated the economist's model of the firm as a profit-maximizing agent directed by its owners. It was uncomfortable because it described, with disarming accuracy, what everyone inside a large corporation already knew: that the organization was run not by the people at the top of the org chart but by the people who possessed the knowledge without which the organization could not function. The CEO announced the strategy. The technostructure determined whether the strategy was feasible, and in determining feasibility, effectively determined the strategy itself.

Galbraith's technostructure was not a cabal. It was not a conspiracy of middle managers plotting in conference rooms. It was a structural feature of organizations that had grown too complex for any individual mind to comprehend. The knowledge required to run General Motors in 1967 — metallurgy, aerodynamics, supply chain logistics, labor relations, regulatory compliance, consumer psychology, dealer network management, financial engineering — exceeded the capacity of any single person. Power necessarily devolved to the collective that possessed the knowledge, because the knowledge was indispensable and the collective was the only entity that held it.

The AI industry has produced a technostructure of unprecedented consequence. Its members are not middle managers. They are the researchers, alignment scientists, infrastructure engineers, and product architects at perhaps five companies — Anthropic, OpenAI, Google DeepMind, Meta, and, increasingly, a small number of Chinese firms — whose collective expertise determines what large language models can do, how they are trained, what behaviors are encouraged or constrained, and on what terms the rest of civilization may access the resulting capability.

*The Orange Pill* identifies this group with the language of religion: a "priesthood of technical understanding," people who "comprehend how these systems work at a level most users never approach." The religious metaphor is apt. Like a priesthood, the AI technostructure mediates between an incomprehensible power and a population that depends on it. Like a priesthood, it derives its authority from knowledge that is genuinely specialized and genuinely indispensable. And like a priesthood, it operates with a degree of autonomy that is inversely proportional to the public's capacity to evaluate its decisions.

Segal is correct that this understanding carries obligation. "The test of a priesthood," he writes, "is not whether its members feel important. They always do. The test is whether their actions make others more capable." The ethical framework is admirable. The structural analysis, however, is where Galbraith's contribution becomes essential.

Obligation and structural incentive rarely align. A priest may feel obligated to serve the congregation, but the institutional structure of the church — its hierarchy, its property, its political relationships — creates incentives that frequently diverge from the congregation's interests. A doctor may feel obligated to serve the patient, but the institutional structure of the healthcare system — its insurance arrangements, its liability environment, its productivity metrics — creates incentives that frequently diverge from the patient's needs. Galbraith's insight about the technostructure was precisely this: the group that possesses indispensable knowledge will, over time, use that knowledge to serve its own institutional interests, not because its members are corrupt but because the structure rewards institutional self-perpetuation more reliably than it rewards public service.

The AI technostructure's institutional interests are specific and powerful. First, the perpetuation of the technostructure itself — the continued employment, prestige, and autonomy of the people who constitute it. This interest is served by maintaining the complexity and opacity of AI systems, because simplification would reduce the technostructure's indispensability. Second, the growth of the organizations in which the technostructure operates — because organizational growth expands the technostructure's scope, budget, and influence. This interest is served by expanding the range of activities to which AI is applied, whether or not the expansion serves any need beyond the organization's revenue targets. Third, the management of external perception — the need to present the technostructure's decisions as technically necessary rather than discretionary, because the appearance of necessity insulates decisions from democratic accountability.

Consider a specific decision: the alignment of a large language model. Alignment decisions — what the model will and will not do, how it responds to sensitive topics, what guardrails constrain its output — are presented to the public as technical necessities. The model must be aligned for safety. The guardrails must be calibrated to prevent harm. The language is clinical, objective, value-neutral. But alignment decisions are, in fact, value decisions. They embed specific judgments about what is harmful, what is sensitive, what is appropriate — judgments that reflect the values, assumptions, and institutional interests of the people who make them. The technostructure does not merely implement alignment. It defines the terms of alignment, and in defining the terms, exercises a form of cultural power that is as consequential as any exercised by a government regulator, and considerably less transparent.

Galbraith wrote in *The New Industrial State* that "the real accomplishment of modern science and technology consists in taking ordinary men, informing them narrowly and deeply, and then, through appropriate organization, arranging to have their knowledge combined with that of other specialized but equally ordinary men. This dispenses with the need for genius. The resulting performance, though less inspiring, is far more predictable." The passage describes the AI technostructure with eerie precision. The frontier models are not built by individual geniuses. They are built by large teams of narrowly specialized researchers — experts in transformer architecture, in reinforcement learning from human feedback, in tokenization, in distributed computing, in evaluation methodology — whose individual contributions are combined through organizational processes into a system that no individual fully comprehends.

This organizational character of AI development is critically important and systematically underappreciated. The popular narrative features individual visionaries — Sam Altman, Dario Amodei, Demis Hassabis — whose strategic decisions shape the industry. The actual development process features hundreds of researchers making thousands of technical decisions whose cumulative effect determines the model's capabilities and limitations. The CEO announces the product. The technostructure determines what the product actually is. The gap between the announcement and the reality is the technostructure's operating space, and it is vast.

The new technostructure exercises power through mechanisms that would be immediately recognizable to Galbraith, though the specific instruments have changed. Training data curation is a form of editorial power: the decision about what the model reads determines what the model knows, and the decision is made by a small number of people within each organization, with no external oversight and minimal public disclosure. Pricing structures determine who can afford frontier capability and who is relegated to smaller, less capable models — a form of economic stratification that operates with the same efficiency as the industrial corporation's pricing of premium versus commodity products, but without the regulatory framework that eventually grew up around industrial pricing. Terms of service function as private law: they determine what users may and may not do with the tools, and they are written by the platform, enforced by the platform, and amendable by the platform at its sole discretion.

Hunter Lewis, writing in Lewis Enterprises in 2024, applied Galbraith's framework to what he called the "AI PR campaign" — the systematic effort by AI companies to present their products as inevitable, necessary, and beneficent. "Mythmaking around its progenitors," Lewis observed, "has been a cornerstone of the AI PR campaign. The idea that a small menagerie of scientists, visionaries, and entrepreneurs are bringing AI into the world as though it were a divine mission or moral imperative has been sufficiently implanted, but the technostructure of AI more closely resembles a mid-century IBM than it does a late-Seventies Apple Computer." The observation is devastatingly Galbraithian: the mythology of individual genius conceals the reality of organizational power, and the concealment serves the organization's interests because mythology is harder to regulate than bureaucracy.

Antonio Ieranò, writing in The Puchi Herald in 2025, traced the evolution of the technostructure through successive waves of information technology and concluded that each wave followed the same Galbraithian pattern: an initial democratization of access followed by a concentration of control in the hands of those who managed the infrastructure. "Power returned to those who could sift signal from noise," Ieranò wrote, "vindicating Galbraith yet again." The AI wave, Ieranò argued, was not an exception to this pattern but its most extreme expression, because the concentration of technical knowledge required to build and maintain frontier models was greater than in any previous information technology.

*The Orange Pill* recognizes the priesthood's power. It does not sufficiently recognize the priesthood's structural incentives. The call for stewardship is admirable. But stewardship is an individual virtue, and the Galbraithian point is that individual virtue is insufficient against structural incentive. The most well-meaning priest in the most corrupt church still operates within the church's institutional logic. The most responsible researcher at the most conscientious AI company still operates within the company's growth imperatives, its competitive pressures, its need to justify billions of dollars in infrastructure investment through revenue that only comes from expanding the range of human activities mediated by AI.

The technostructure does not need to be malicious. It needs only to be institutional. The institution's requirements — growth, self-perpetuation, competitive advantage, regulatory management — will, over time, shape the technostructure's decisions more reliably than the individual ethics of its members. This is not cynicism. It is the observation on which Galbraith built his most important work: that the structure of an organization determines its behavior more decisively than the character of the people within it.

What this means for the AI age is that the governance of artificial intelligence cannot be left to the technostructure, however brilliant, however well-intentioned, however sincerely committed to the public good its members may be. It must be subject to external structures of accountability — what Galbraith called countervailing power — whose development is the subject of this book's fourth chapter and whose absence is the most dangerous feature of the present moment.

---

Chapter 3: The Planning System Meets the Language Model

Galbraith divided the economy into two systems, and the division was not a simplification but a clarification that made visible what the textbook obscured. The planning system consisted of the large corporations — General Motors, General Electric, Standard Oil, U.S. Steel — whose size and market power allowed them to plan their own environments rather than respond to market signals. They set prices rather than accepted them. They created demand through advertising rather than discovered it through consumer preference. They managed their supply chains, their labor forces, their regulatory environments, and their political relationships with a comprehensiveness that the textbook model of the competitive firm could not accommodate and preferred not to acknowledge.

The market system consisted of everyone else: the small businesses, the independent professionals, the farmers, the contractors, the freelancers who actually did respond to market signals, who accepted the prices the planning system set, who served the demand the planning system created, and who bore the risks the planning system managed away. The relationship between the two systems was not one of competition. It was one of dependency. The market system operated within an environment shaped by the planning system's decisions, and the market system's participants exercised choice within constraints they did not set and could not alter.

This distinction, which the economics profession received with approximately the same enthusiasm it reserves for suggestions that its foundational assumptions might be wrong, is the single most useful analytical tool for understanding the AI economy.

The companies that build large language models are planning-system organizations of the most consequential kind. Their scale is not an incidental feature. It is a structural requirement. Training a frontier model requires compute infrastructure measured in billions of dollars. The training runs consume electricity equivalent to a small city's annual demand. The data requirements are measured in trillions of tokens drawn from sources whose curation involves judgments of enormous cultural consequence. The research teams require expertise so specialized that the effective labor market for frontier AI research consists of perhaps a few thousand people worldwide, most of whom already work for one of the five or six organizations capable of employing them.

These are not features that might change as the technology matures. They are structural characteristics of the technology itself. Building a competitive frontier model requires resources available to a handful of organizations. This is not a transitional condition to be solved by market competition. It is the nature of the enterprise, as permanent a feature of AI as the scale of investment required for automobile manufacturing was a permanent feature of the industrial economy.

The planning-system character of AI companies expresses itself through three mechanisms that Galbraith would have recognized immediately.

The first is price administration. In a competitive market, prices are determined by supply and demand, and individual firms accept whatever price the market sets. In the planning system, prices are administered — set by the firm according to its strategic objectives and adjusted when those objectives change. AI companies do not discover the price of inference through market competition. They set it, and they set it strategically: low enough to drive adoption, high enough to capture value, tiered to segment users into categories that correspond to willingness to pay rather than cost of service.

The pricing tiers for Claude, for GPT, for Gemini are not the outcome of competitive price discovery. They are strategic decisions made by the planning system to manage its market. The free tier creates dependency. The professional tier captures the productive user's willingness to pay. The enterprise tier locks organizations into contractual relationships that make switching costs prohibitive. Each tier is a tool of market management, not a response to market demand. The users experience choice. The planning system has structured the choice.

The second mechanism is demand management. Galbraith argued that the planning system does not merely respond to consumer demand. It creates it. The automobile industry did not discover that Americans wanted large cars with annual styling changes and planned obsolescence. It created that preference through advertising, dealer networks, consumer financing, and the systematic destruction of alternative transportation. The AI industry does not discover that knowledge workers want AI assistants, that students want AI tutoring, that developers want AI coding partners. It creates these desires through product launches, media campaigns, free trials, integration with existing workflows, and the systematic demonstration that tasks previously performed by humans can be performed by AI at lower cost and higher speed.

The Orange Pill describes the adoption of Claude Code as a response to "pent-up creative pressure — the accumulated frustration of every builder who had spent years translating ideas through layers of implementation friction." This is a demand-side explanation: the need existed, and the tool met it. Galbraith would not have denied the existence of the need. His argument was subtler. He would have asked how much of the "pent-up creative pressure" was itself a product of the productive system — of a culture that valorizes building, that measures human worth by output, that treats the gap between imagination and execution as a problem to be solved rather than a condition to be navigated. The dependence effect does not require that the need be entirely manufactured. It requires only that the line between autonomous desire and produced desire be impossible to draw — which is precisely the condition The Orange Pill's author describes when he confesses that he cannot tell whether his compulsion to build is creative hunger or tool-generated appetite.

The third mechanism is the management of the state. Galbraith documented in exhaustive detail how planning-system organizations manage their political environment: lobbying for favorable regulation, resisting unfavorable regulation, shaping the terms of public discourse about the industry's products, and ensuring that government policy supports the industry's growth imperatives. The AI industry reproduces this pattern with a speed and sophistication that exceeds the industrial corporation's already formidable capabilities. AI companies employ former government officials as policy advisors. They fund research institutions that produce analysis favorable to the industry's regulatory preferences. They frame their products as national security assets whose development must not be impeded by regulation. They position themselves as essential infrastructure — too important to fail, too complex to regulate, too consequential to leave to competitors in other nations.

The framing of AI development as a geopolitical competition — the "AI race" between the United States and China — is a masterpiece of planning-system demand management applied to the state. It converts a commercial enterprise into a national security imperative, enlisting government resources (subsidies, research funding, favorable tax treatment, relaxed regulatory scrutiny) in the service of private organizations' growth objectives. Galbraith documented precisely this dynamic in the military-industrial complex of the 1960s: private corporations framing their products as essential to national defense, thereby converting public resources into private revenue with a reliability that no commercial market could match.

When The Orange Pill celebrates the collapse of the imagination-to-artifact ratio — the fact that "a person with an idea and the ability to describe it in natural language could produce a working prototype in hours" — the celebration is genuinely warranted. The capability expansion is real. The human beings on the market-system side of the divide are genuinely more capable than they were before these tools existed.

But the infrastructure that enables this capability is controlled by organizations whose decisions — about model capability, about access pricing, about the terms of service that govern use, about the training data that shapes the model's knowledge, about the alignment choices that constrain the model's behavior — determine the conditions under which the market system's participants exercise their enhanced capability. The developer in Lagos is more capable. She is also more dependent. Her capability expansion runs on infrastructure she does not own, does not control, and cannot influence.

This is the fundamental asymmetry of the AI economy, and it is the asymmetry the conventional wisdom is designed to conceal. The planning system offers capability. The market system accepts it. The exchange looks voluntary, even generous — the planning system is giving people tools that genuinely improve their lives. But the terms of the exchange are set by the planning system, enforced by the planning system, and amendable by the planning system at its sole discretion. The user's sovereignty is the same sovereignty Galbraith identified in the automobile buyer of 1958: real enough to feel like freedom, constrained enough to serve the interests of the organization that structured the choice.

Bertrand Duperrin, applying Galbraith's framework to AI in 2025, identified the mechanism through which the planning system converts capability into control: "In Galbraith's analysis, gains in efficiency never remain unused. When a task is simplified, the organization adds control steps; when a tool saves time, the time saved is quickly filled with meetings or reporting; and when a financial margin appears, it fuels new projects and new departments. Thus, the technostructure does not return profits to individuals or shareholders: it recycles them into its own operations." The observation applies with surgical precision to the AI productivity gains described in The Orange Pill. The twenty-fold multiplier does not produce twenty-fold leisure. It produces twenty-fold output — more features, more products, more scope — which requires more AI usage, which generates more revenue for the planning system, which funds more capability development, which creates more output, which creates more dependency. The cycle is self-reinforcing. The planning system grows. The market system accelerates. And the relationship between them becomes more asymmetric with each iteration.

None of this means the capability expansion is an illusion or that the market system's participants are being deceived. Galbraith was not a conspiracy theorist. He was a structuralist. The planning system does not need to deceive the market system. It needs only to structure the environment in which the market system operates. The structure does the rest.

---

Chapter 4: Countervailing Power in the Age of Amplification

There is a pattern in the history of industrial capitalism that provides what comfort is available in the analysis offered thus far: concentrated economic power tends, over time, to generate its own counterweight. Galbraith called this tendency "countervailing power," and he identified it as the most important self-correcting mechanism in the American economy — more important than government regulation, more important than antitrust enforcement, more important than the competitive market itself, which Galbraith regarded as a pleasant fiction that obscured more than it revealed.

The argument, published in American Capitalism in 1952, was straightforward in its logic and revolutionary in its implications. Orthodox economics held that the competitive market was the primary check on corporate power. If a firm charged too much, a competitor would undercut it. If a firm produced shoddy goods, consumers would defect. The market disciplined the firm automatically, requiring no external intervention.

Galbraith observed that this mechanism, whatever its merits in the textbook, was largely inoperative in the actual economy. The large corporation did not face meaningful competition from other firms in its industry; it faced competition from organized countervailing forces on the other side of its market transactions. The power of General Motors was checked not by Ford or Chrysler — whose interest was in maintaining the same pricing structure — but by the United Auto Workers, whose organized bargaining power forced General Motors to share a portion of its monopoly profits with its workforce. The power of the food processing industry was checked not by competition among processors but by the rise of large retail chains whose buying power could extract concessions that individual farmers and small grocers could not. The power of industrial capital was checked, eventually and imperfectly, by the countervailing power of organized labor, organized consumers, and organized government.

The theory was not a prediction that countervailing power would emerge automatically or quickly or painlessly. It was a historical observation that concentrated power creates the conditions for its own opposition. The concentration produces victims. The victims, over time, organize. The organization, over time, accumulates enough power to negotiate with the concentrated interest on something approaching equal terms. The process is slow, painful, and always incomplete. But it is real, and it is the principal mechanism through which industrial capitalism was made tolerable to the people who lived inside it.

The question for the AI age is whether this mechanism will operate quickly enough.

The speed problem is the most dangerous feature of the current moment. Every historical episode of countervailing power development operated on a timeline measured in decades. The American labor movement required some six decades — from the Knights of Labor in the 1870s through the Wagner Act of 1935 — to build institutions powerful enough to check industrial capital. Consumer protection required decades of agitation — from Upton Sinclair's The Jungle in 1906 through the Consumer Product Safety Commission in 1972 — to build regulatory structures adequate to the scale of industrial production. Environmental regulation required decades more.

AI deployment operates on a timeline measured in months. Claude Code crossed its capability threshold in December 2025. By February 2026, its run-rate revenue had crossed $2.5 billion. The technology had reshaped workflow expectations across the software industry before any regulatory framework, labor organization, or consumer advocacy group had finished its first committee meeting on the subject.

The Orange Pill calls for "dams" — institutional structures that redirect the flow of AI capability toward human flourishing. Segal borrows the metaphor from the beaver, whose small body builds structures that reshape entire ecosystems. The metaphor is appealing and the prescription is correct. What the metaphor does not capture is the Galbraithian reality: dams are public goods, and the history of public goods in the affluent society is a history of systematic underinvestment.

Public goods — structures that benefit everyone but that no individual actor is incentivized to fund — are the perennial casualty of an economy organized around private return. The market rewards private investment with private profit. Public investment produces diffuse benefits that cannot be captured by the investor. The result, Galbraith documented across multiple works, is the characteristic pathology of the affluent society: magnificent private consumption alongside degraded public services. Splendid automobiles on crumbling roads. Sophisticated private healthcare alongside inadequate public health. And now: spectacular private AI capability alongside inadequate public institutions for managing its consequences.

The dams The Orange Pill calls for — "AI Practice" frameworks, educational reform, attentional ecology, retraining infrastructure — are all public goods. They benefit the broad population, they cannot be privately captured, and they require sustained collective investment over timelines that exceed any quarterly earnings cycle. They are precisely the kind of investment that the affluent society systematically fails to make, because the incentive structure of the economy rewards private adoption and penalizes the slow, unglamorous, politically unrewarding work of institution-building.

Segal recognizes this. "The dams are not adequate," he writes. "They are not even close." He acknowledges the widening gap between the speed of AI deployment and the speed of institutional response. What his analysis lacks is an explanation for why the gap persists, and Galbraith's framework supplies one: the gap persists because the planning system benefits from the gap. Every month that passes without adequate institutional structures is a month in which the planning system sets the terms of AI deployment unilaterally. Every year that passes without meaningful countervailing power is a year in which the dependency of the market system on the planning system deepens. The planning system does not conspire to prevent the development of countervailing institutions. It simply has no incentive to encourage them, and in an economy where incentive determines investment, the absence of incentive is sufficient to produce the absence of investment.

The historical examples Segal cites — the eight-hour day, the weekend, child labor laws — are examples of countervailing power successfully redirecting technological capability toward broadly distributed benefit. They are also examples of countervailing power that arrived after decades of concentrated extraction. The children who worked in the mills before child labor laws were not retroactively protected by the eventual passage of those laws. The workers who lost their livelihoods to the power loom before labor protections existed were not retroactively compensated by the eventual development of a social safety net. The lag between the concentration of power and the development of countervailing power is not a minor inconvenience. It is a period during which real people bear real costs — costs that are eventually recognized as unconscionable but that were, at the time, treated as the inevitable price of progress.

The AI transition is in that period now. The costs are being borne. The workers displaced by automation are being displaced now, not in some theoretical future. The students whose educational institutions have not adapted are enrolled now. The parents whose children are navigating an attention economy shaped by tools their schools cannot explain are parenting now. The lag is not an abstraction. It is lived experience, and the people living it are the generation that will bear the cost of the gap between capability and countervailing institution.

The EU AI Act, adopted in 2024, represents the most ambitious attempt at regulatory countervailing power to date. It classifies AI systems by risk level and imposes requirements on high-risk applications. It is also, by the standards required, modest. It addresses the supply side — what AI companies may build and deploy — while largely ignoring the demand side: what citizens, workers, students, and parents need to navigate the transition wisely. It regulates the planning system's most visible activities while leaving the planning system's structural power — its control over pricing, training data, alignment decisions, and terms of service — largely untouched.

In the United States, the regulatory response has been even more modest. Executive orders, voluntary commitments, industry self-governance — instruments that Galbraith would have recognized as the planning system's preferred mode of "regulation," in which the regulated entity writes the rules, enforces the rules, and amends the rules at its discretion. Self-governance is not countervailing power. It is the planning system regulating itself, which is to say it is the planning system continuing to exercise power unchecked while performing the appearance of accountability.

What would genuine countervailing power look like in the AI age? The historical precedents suggest several forms.

Organized labor adapted to the industrial economy by building unions whose collective bargaining power could negotiate with the planning system on something approaching equal terms. The AI equivalent might be organizations of AI-affected workers — not necessarily traditional unions, but collective structures that could negotiate standards for AI integration, transition support, and the distribution of productivity gains. Such organizations do not yet exist at meaningful scale, and the atomization of the modern workforce — the gig economy, the remote workforce, the freelance economy — makes their development more difficult than the factory floor made the development of industrial unions.

Consumer organizations checked the power of the industrial corporation by aggregating individual purchasing decisions into collective market force. The AI equivalent might be organizations of AI users who could negotiate collectively on pricing, terms of service, data usage, and algorithmic transparency. The asymmetry between the individual user and the planning-system platform is precisely the asymmetry that consumer organizations were designed to address. Here, too, meaningful organization has barely begun.

Government regulation checked the planning system's most egregious exercises of power by establishing external standards of conduct, enforced by agencies with the authority and resources to compel compliance. The AI equivalent would be regulatory agencies with genuine technical capacity — not the advisory committees and voluntary frameworks that currently constitute the governmental response, but agencies staffed with people who understand the technology at a level sufficient to regulate it. Galbraith noted that effective regulation requires the regulator to possess expertise comparable to the regulated entity's. The current regulatory apparatus falls short of this standard by a distance that is itself a measure of the planning system's success in managing its political environment.

The most historically grounded basis for hope is that the pattern holds: that concentrated power does, eventually, produce the countervailing institutions that check it. The most historically grounded basis for concern is the word "eventually." The lag between concentration and countervailing response has, in every historical case, been measured in decades. The speed of AI deployment compresses the timeline for harm while leaving the timeline for institutional response unchanged. The gap between the two is the space in which a generation bears costs that subsequent generations will recognize as having been avoidable.

Galbraith, for all his structural analysis, was not a fatalist. He believed that understanding the system was the prerequisite for changing it. "The enemy of the conventional wisdom," he wrote, "is not ideas but the march of events." The events are marching. The question is whether understanding can accelerate the development of the institutions that the march of events will eventually make necessary — or whether, as in every previous transition, the institutions will arrive after the damage is done, and the damage will be the thing that finally made the institutions possible.

The chapters that follow examine the specific mechanisms — the dependence effect, the culture of contentment, the distribution of surplus, the management of the state — through which the planning system's power operates in the AI economy. Understanding these mechanisms is the first step. Building countervailing institutions is the second. The distance between the two is the distance between diagnosis and treatment, and it is, as it has always been, the distance that determines outcomes.

---

Chapter 5: The Dependence Effect and the Builder's Compulsion

There is a passage in The Orange Pill that deserves to be read with the care one brings to a clinical confession. Edo Segal, describing his work with Claude Code over the Atlantic on a ten-hour flight, writes: "Somewhere over the Atlantic, at an hour I cannot remember, I caught myself. I was not writing because the book demanded it. I was writing because I could not stop. The muscle that lets me imagine outrageous things, the muscle I celebrate, the muscle I train my teams to develop, had locked." The exhilaration had drained away hours ago. What remained was "the grinding compulsion of a person who has confused productivity with aliveness."

The confession is admirable in its honesty and devastating in its implications. It is also, from a Galbraithian perspective, a case study in the most subversive concept in postwar economics: the dependence effect.

Galbraith introduced the dependence effect in The Affluent Society in 1958, and the economics profession has been trying to forget it ever since. The concept is simple enough to state and radical enough to undermine the entire edifice of consumer theory. Orthodox economics assumes that consumer wants originate with the consumer. The consumer desires a product; the producer satisfies the desire; the transaction is voluntary and mutually beneficial. The consumer is sovereign. The market serves.

Galbraith observed that this sequence, in the modern economy, frequently operates in reverse. The producer does not discover the consumer's desire and satisfy it. The producer creates the desire and then satisfies the desire it created. Advertising, marketing, product design, cultural conditioning — the entire apparatus of demand management — exists not to inform the consumer about products that satisfy pre-existing needs but to manufacture needs that the producer's products can then satisfy. The desire and the product that satisfies it originate in the same productive process. The consumer experiences the desire as autonomous. It is not. It is produced.

The dependence effect does not require that every consumer desire be manufactured. It requires only that the line between autonomous desire and produced desire be impossible to draw with confidence — which is precisely the condition Galbraith documented in the affluent society and precisely the condition The Orange Pill's author describes when he cannot determine whether his compulsion to build is genuine creative hunger or tool-generated appetite.

The application to AI-assisted work is direct and uncomfortable. Claude Code does not merely satisfy the builder's desire to build. It transforms the experience of building so comprehensively — removing friction, providing immediate feedback, collapsing the gap between intention and result — that it generates a new quality and intensity of desire. The builder who has used Claude Code for a week does not merely want to continue using it. The builder finds the prospect of building without it intolerable, not because the old methods have become objectively worse but because the new experience has recalibrated the builder's expectations so thoroughly that the old methods now feel like deprivation.

This recalibration is the dependence effect in its purest form. The tool creates the appetite it satisfies. The satisfaction reinforces the appetite. The appetite generates more demand for the tool. The cycle is self-reinforcing, and at no point does the builder experience the appetite as anything other than authentic creative hunger. The desire feels autonomous. The feeling is the mechanism by which the dependence operates.

Segal describes the sensation with the specificity of someone who has lived inside it: "I couldn't stop, and I was not alone." The Substack post "Help! My Husband is Addicted to Claude Code" went viral because it named something the technology industry had no vocabulary for: "productive addiction." The vocabulary gap is itself significant. Industrial civilization has developed sophisticated frameworks for understanding addiction to substances and to entertainment — frameworks that assume the addictive object is harmful and that recovery requires abstinence. It has developed almost no framework for understanding compulsive engagement with something that is genuinely productive, that produces real output of real value, that the builder experiences as meaningful rather than escapist.

The absence of a framework is not an accident. It is a consequence of the dependence effect operating at the level of cultural ideology. The affluent society's deepest commitment is to the proposition that production is good — that more output is better than less output, that productivity growth is the measure of economic health, that the builder who ships is more admirable than the builder who rests. This commitment is not examined because examining it would call into question the organizing principle of the entire economic system. If production is not automatically good — if more output is not inherently better than less — then the metrics by which the economy measures its own success are measuring the wrong thing, and the conventional wisdom that equates growth with progress collapses.

The dependence effect, applied to AI-assisted work, makes this examination unavoidable. When the builder cannot stop building, and the building is producing real value, and the builder is simultaneously exhausted and exhilarated and unable to distinguish the exhilaration from the exhaustion — what is the appropriate response? The conventional wisdom says: celebrate the output, manage the burnout, optimize the schedule. The Galbraithian response says: ask whether the output itself is the product of an appetite that the productive system manufactured, and whether the builder's inability to stop is evidence not of creative passion but of a dependency so thorough that the builder can no longer distinguish between wanting and being wanted by the system.

Hunter Lewis, in his 2024 Galbraithian analysis of AI distribution, observed that "while AI may have tremendous utility, it is a product whose production is the justification for its use." The formulation captures the dependence effect with Galbraithian economy. The AI system is built. The AI system creates new possibilities. The new possibilities generate new desires. The new desires justify the AI system's existence. At no point does the sequence require that the desires be autonomous, only that they be experienced as autonomous — and the experience of autonomy is precisely what the frictionless interface is designed to produce.

The builder's confession in The Orange Pill — the moment over the Atlantic when productivity was recognized as a substitute for aliveness — is a moment of clarity about the dependence effect. The tool had not merely enabled more building. It had transformed building into a compulsion whose intensity was calibrated by the tool's own design: the immediate feedback, the frictionless execution, the collapse of the gap between intention and result. Each of these features is genuinely useful. Each also functions as a mechanism of dependency, because each makes the experience of building without the tool progressively more intolerable.

Galbraith would have recognized the dynamic. He documented it in the automobile industry, where planned obsolescence and annual styling changes manufactured dissatisfaction with last year's perfectly functional car. He documented it in the consumer electronics industry, where each product generation created needs that the previous generation had not created and the next generation would claim to satisfy. The mechanism is always the same: the productive system generates the dissatisfaction that the productive system's next output will address, and the consumer experiences the cycle as free choice operating within an expanding market.

The AI builder's cycle is the same mechanism operating at a higher speed and a deeper level of integration. The dissatisfaction is not with a product but with one's own productive capacity. The tool does not make you dissatisfied with last year's phone. It makes you dissatisfied with last year's self — the self that built slowly, that struggled with implementation, that spent hours on problems the tool solves in seconds. The dependency is not on a consumer product but on a cognitive augmentation, which is to say it operates at the level of identity rather than consumption. Galbraith's automobile buyer could, in principle, decide that last year's car was good enough. The AI builder cannot easily decide that last year's cognitive capacity was good enough, because the tool has redefined what "good enough" means.

This analysis does not lead to the conclusion that AI tools should not be used. Galbraith was not a prohibitionist. His interest was in making visible the mechanisms by which the productive system shapes the desires it satisfies, so that the individuals caught in those mechanisms could exercise what autonomy remained available to them. The builder who understands the dependence effect does not necessarily stop building. But the builder who understands the dependence effect asks different questions: Is this appetite mine, or was it manufactured by the tool that satisfies it? Would this project exist in a world without frictionless execution, or is it a product of the productive system's need for its own continuation? Am I building because the thing I am building deserves to exist, or because the act of building has become the only state in which I feel competent?

The Orange Pill asks: "Are you worth amplifying?" The dependence effect adds a prior question: Is the desire to be amplified yours, or does it belong to the amplifier?

The question cannot be answered with certainty. That is the point. The dependence effect does not operate through deception. It operates through the impossibility of distinguishing autonomous desire from produced desire once the productive system has saturated the environment in which desires form. The builder who cannot tell whether the compulsion is flow or addiction is not failing a test of self-knowledge. The builder is experiencing the dependence effect in its most sophisticated form — the form in which the question "Do I want this?" can no longer be answered from inside the system that produces the wanting.

Galbraith's prescription was not abstinence. It was institutional. The dependence effect cannot be overcome by individual willpower, because individual willpower operates within the same environment that produces the dependency. The prescription is countervailing structures — the "AI Practice" frameworks and attentional ecologies that The Orange Pill advocates — that create spaces in which the question "Do I want this?" can be asked outside the productive system's influence. These structures are the cognitive dams that redirect the flow of productive appetite toward something the builder can recognize as genuinely chosen rather than systemically produced.

Whether such structures will be built, and whether they will be built in time to matter, is the question the dependence effect leaves open. The productive system has no incentive to build them. The builders caught in the dependency have diminishing capacity to recognize the need for them. And the conventional wisdom — the belief that productivity is its own justification, that output is its own reward, that the builder who ships is living well — provides the ideological cover under which the dependency operates without interference.

The exhilaration is real. The output is real. The dependency may also be real. The genius of the dependence effect is that these three propositions are simultaneously true and that the first two make the third nearly impossible to see.

---

Chapter 6: The Affluent Society and the Anxious Builder

The affluent society, as Galbraith described it in 1958, was a society that had solved the problem of production and discovered, to its considerable discomfort, that solving the problem of production did not solve the problem of living. The factories hummed. The shelves were stocked. The automobiles gleamed in driveways that stretched into subdivisions that stretched into horizons of private abundance. And the people who inhabited this abundance were, by every measure of psychological well-being available to the social science of the era, not discernibly happier than their grandparents had been.

The paradox was not mysterious. Galbraith diagnosed it with the clarity of a physician who has seen the disease before and recognizes its symptoms on sight. The affluent society had confused the means of living with its purpose. Production, which had begun as the instrument of human welfare — the means by which food, shelter, clothing, and the material basis of a decent life were provided — had become an end in itself. The economy did not produce in order to satisfy needs. It produced in order to produce, and it manufactured the needs required to justify the production.

The AI economy reproduces this paradox at an accelerated tempo and with an intensity that would have impressed even Galbraith, who was not easily impressed.

Consider the twenty-fold productivity multiplier described in The Orange Pill. Twenty engineers in Trivandrum, each now operating with the leverage of a full team. The output is extraordinary. The capability expansion is genuine. But the question the multiplier raises — more output of what, and for whom? — is the question the affluent society spent half a century avoiding, because answering it honestly requires confronting the possibility that much of what the economy produces is not worth producing.

Galbraith distinguished between privately produced goods and publicly provided goods. The affluent society excelled at the former and neglected the latter. The private sector produced automobiles, appliances, consumer electronics, and an ever-expanding array of products whose primary function was to provide employment for the people who produced them and revenue for the companies that sold them. The public sector — schools, parks, hospitals, infrastructure, the institutions that constitute the shared environment in which private consumption occurs — was starved of investment, because public goods generate diffuse benefits that no private actor is incentivized to fund.

The AI economy follows the same trajectory with disquieting fidelity. The private returns to AI adoption are spectacular. Companies that integrate AI tools are measurably more productive. Individuals who use them are measurably more capable. The twenty-fold multiplier is real. The private sector is investing hundreds of billions of dollars in AI capability, and the investment is producing returns that justify further investment, which produces further returns, in a cycle of private abundance that shows no sign of decelerating.

The public goods required to make the AI transition broadly beneficial — the retraining programs, the educational reforms, the regulatory frameworks, the cultural norms, the "dams" that The Orange Pill calls for with genuine urgency — are systematically underinvested in. Not because anyone has decided they are unimportant. Because the structure of the economy makes private investment profitable and public investment unglamorous, and the gap between the two determines who benefits from the transition and who bears its costs.

Segal writes that "the dams are not adequate. They are not even close." The admission is candid. The explanation is structural. The dams are public goods. Public goods are funded by collective action — taxation, regulation, institutional investment that produces returns measured in decades rather than quarters. The planning system, whose growth depends on the continued expansion of AI adoption, has no structural incentive to fund the institutions that would constrain that expansion. The market system, whose participants are individually powerless to fund public goods, depends on collective institutions that the affluent society systematically undervalues.

The result is the AI version of private opulence and public squalor. Magnificent private capability — engineers who build in days what teams built in months, founders who ship products over weekends, creators who produce content at volumes that would have been physically impossible five years ago — alongside degraded public institutions: schools that have not adapted their curricula, workforce programs that retrain for jobs that existed five years ago, regulatory agencies that cannot attract talent competitive with the private sector, and a political system that has no coherent theory of the transition it is supposed to be governing.

But the affluent society's paradox operates at the individual level as well as the institutional, and it is at the individual level that The Orange Pill's evidence is most vivid and most Galbraithian.

The elegists — the senior engineers Segal describes, mourning "not their jobs but a relationship with their craft" — are experiencing the specific affliction of affluence. They are not deprived. Their skills have not vanished. They have more capability than ever before. What they have lost is the specific relationship between effort and meaning that made their work feel like more than production. The struggle of debugging, the patient accumulation of understanding through friction, the embodied knowledge that came from years of intimate engagement with recalcitrant systems — these were not merely the means by which code was written. They were the means by which the coder's identity was constructed. The skill, slowly built, was the self, slowly formed.

The AI tool did not destroy the skill. It made the skill unnecessary for the purpose it had served — which, from the perspective of the builder who constructed an identity around the skill, amounts to the same thing. The tool produced capability surplus. The surplus was real. The meaning deficit was also real. And the affluent society has no vocabulary for meaning deficit, because its entire conceptual apparatus is organized around the assumption that more capability is better, that surplus is gain, that the elimination of struggle is progress.

Galbraith identified this conceptual poverty in 1958: "It has been the vanity of the conventional wisdom in the economic tradition to consider economic life as a self-justifying process." Economic activity did not need to be justified by reference to human welfare. It justified itself. Growth was good because growth was growth. Production was good because production was production. The metrics validated themselves.

The AI economy inherits this self-justification. The twenty-fold multiplier is good because it multiplies. The productivity gain is good because it is a gain. The capability expansion is good because it expands. At no point does the system require that someone ask: Does the expanded capability produce something worth producing? Does the multiplied output serve a need that would exist without the multiplier? Does the accelerated production create value, or merely volume?

Nat Eliason's tweet — "I have NEVER worked this hard, nor had this much fun with work" — is the voice of the affluent builder, and its ambiguity is the affluent society's ambiguity compressed into a single sentence. The hard work is real. The fun is real. The value of the output is unexamined, because the experience of productive intensity has become its own reward, and questioning the value of the output would interrupt the experience.

The Berkeley researchers found that AI-assisted workers worked more, took on more tasks, expanded their scope — and reported higher rates of fatigue and diminished empathy over time. The pattern is the affluent society's pattern: more production, less satisfaction. More capability, less meaning. The metrics improve while the human experience of the metrics degrades.

Galbraith's prescription was characteristically structural: redirect the surplus from private consumption to public investment. Build schools instead of shopping centers. Fund parks instead of parking lots. Invest in the shared environment instead of private accumulation. The prescription was largely ignored, which Galbraith expected, because the planning system's institutional interests were served by private consumption and threatened by public investment, and the planning system's interests tend, in the affluent society, to prevail.

The AI version of the prescription would redirect the productivity surplus from private output — more features, more products, more scope — toward public goods: the retraining infrastructure, the educational reform, the institutional adaptation that the transition requires. The surplus exists. The twenty-fold multiplier creates it. The question is whether the surplus will be captured by the planning system (through subscription revenue, margin expansion, and the further concentration of capability) or redirected toward the public goods that would make the transition bearable for the people who are not engineers in Trivandrum or founders with Claude Code subscriptions.

The affluent society answered this question, consistently and over decades, in favor of private capture. The AI economy shows every sign of answering it the same way, for the same structural reasons. The dams are not being built. The public goods are not being funded. The conventional wisdom — that productivity growth is its own justification, that the surplus will distribute itself, that the market will provide — remains comfortable, unchallenged, and wrong.

---

Chapter 7: Private Opulence, Public Squalor, and the AI Transition

The most famous sentence in postwar economics is also the most widely ignored. "The family which takes its mauve and cerise, air-conditioned, power-steered and power-braked automobile out for a tour passes through cities that are badly paved, made hideous by litter, blighted buildings, billboards and posts for wires that should long since have been put underground." Galbraith wrote this in The Affluent Society in 1958, and the sentence has been quoted so frequently that its analytical content has been worn smooth by repetition, leaving only the imagery — the gleaming car, the decaying city — without the structural argument that gives the imagery its force.

The structural argument is this: in an economy organized around private production, public goods will be systematically underproduced. Not because the society is poor — the society is affluent beyond historical precedent — but because the incentive structure of the economy rewards private investment and penalizes public investment. Private investment generates returns that accrue to the investor. Public investment generates returns that are diffused across the population, captured by no individual actor, and therefore funded by no individual actor with the urgency that the need demands.

The AI transition is reproducing this pattern with a precision that would satisfy even Galbraith's appetite for structural irony.

The private returns to AI capability are extraordinary and immediate. A company that adopts AI coding tools sees productivity gains measurable in weeks. An individual who integrates AI into their workflow produces more output, reaches more users, generates more revenue. The twenty-fold multiplier is not a projection. It is a measurement, documented in The Orange Pill with the specificity of someone who watched it happen in a room in Trivandrum and compared the results against the previous quarter's output.

The public goods required to make the AI transition broadly beneficial are extraordinary in their scope and glacial in their development. Consider them in sequence.

Educational reform. The schools that are supposed to prepare the next generation for AI-augmented work have not, for the most part, revised their curricula to account for the fact that AI has rendered a significant portion of the technical skills they teach automatable. The Orange Pill describes a teacher who "stopped grading her students' essays and started grading their questions" — a beautiful example of pedagogical adaptation. But this teacher is an outlier. The educational system as a whole continues to assess students on their capacity to produce outputs that AI can now produce at lower cost and higher speed. The system continues to invest in teaching students to answer questions at the precise historical moment when the value of answers is collapsing relative to the value of questions.

The mismatch is not a failure of imagination. It is a structural feature of public education in the affluent society. Schools are funded by public revenue. Public revenue competes with private consumption for political support. The political system, responsive to the preferences of voters who are also consumers, consistently prioritizes private consumption over public investment. The result is a school system that lags the economy by a decade at the best of times and by a generation at the worst.

Workforce retraining. The workers being displaced by AI — not in the catastrophic mass-unemployment scenario that makes for dramatic headlines, but in the gradual, role-by-role erosion of tasks that constitutes the actual pattern of displacement — need retraining programs that are responsive to the speed of technological change. The programs that exist are, for the most part, designed for a previous era of technological transition: programs that retrain displaced manufacturing workers for service-sector jobs, or that provide coding boot camps for career changers. These programs assume that the destination skills will remain stable long enough for the retrained worker to amortize the cost of retraining. The AI economy invalidates this assumption. The destination skills are themselves being automated at a pace that exceeds the duration of most retraining programs.

The inadequacy of existing retraining infrastructure is not a surprise. It is the predictable consequence of chronic underinvestment in a public good. Workforce programs are funded by government budgets that compete with defense spending, tax cuts, and the thousand other demands on public revenue that the political system processes through the filter of voter preferences. Voters who are benefiting from AI — the contented majority, in Galbraith's formulation — have no structural reason to prioritize spending on programs that serve workers whose displacement they have not experienced.

Regulatory capacity. The agencies that are supposed to govern AI deployment — the Federal Trade Commission, the European Commission, national regulators in Singapore and Brazil and Japan — lack the technical capacity to evaluate the systems they are supposed to regulate. Effective regulation of AI requires understanding how the models work at a level of technical detail that the regulatory workforce, trained in law and public policy rather than machine learning, does not currently possess. The planning system knows this. The planning system benefits from this. The regulatory gap is not an oversight. It is a structural advantage that the planning system has no incentive to close.

Segal writes with appropriate urgency about the institutional deficit: "The gap between the speed of capability and the speed of institutional response is not closing. It is widening." But the urgency, however genuine, does not explain why the gap persists — and it is the explanation that matters, because without it, the prescription ("build the dams") is an exhortation without a mechanism.

The gap persists because the affluent society's structural incentives produce it. Private AI investment generates immediate, measurable, capturable returns. Public investment in the institutions required for a just transition generates diffuse, long-term, uncapturable returns. The rational actor — the company, the investor, the taxpayer evaluating competing claims on public revenue — invests where returns are immediate and capturable. The rational actor, in aggregate, produces private opulence and public squalor. No conspiracy is required. No villain is necessary. The structure does the work.

The irony is that the planning system's own long-term interests are served by adequate public investment. An educated workforce is more productive than an uneducated one. A stable social environment is more conducive to business than an unstable one. An effective regulatory framework provides the predictability that planning-system organizations need for long-range strategic decisions. Galbraith observed this irony and noted that it had no effect on the planning system's behavior. The planning system's time horizon, for all its long-range planning capability, is determined by the quarterly earnings cycle, the annual budget, the competitive pressures of the present. The future benefits of public investment are real but distant. The present costs of public investment are real and immediate. The planning system, like the individual consumer, discounts the future.

The AI version of private opulence and public squalor has a feature that distinguishes it from the 1958 version and makes it more dangerous. In 1958, the consequences of public underinvestment were visible: crumbling roads, inadequate schools, polluted rivers. The consequences were slow to develop, but they were concrete and eventually undeniable. The consequences of public underinvestment in the AI transition are less visible and harder to attribute. A worker whose role is gradually eroded by AI does not experience the dramatic displacement of a factory closing. The erosion is incremental — a task here, a responsibility there, a gradual narrowing of scope that is difficult to distinguish from normal organizational change. A student whose educational institution has not adapted to AI does not receive a dramatic notification of inadequacy. The student simply enters the workforce less prepared than the student could have been, and the gap between preparation and requirement is attributed to individual deficiency rather than institutional failure.

The invisibility of the consequences is the most insidious feature of the AI transition's version of private opulence and public squalor. The gleaming car and the crumbling road were visible to anyone who drove through the city. The amplified builder and the unadapted school are visible only to those who know what adaptation would look like, and the conventional wisdom — by definition, the set of beliefs whose function is to make uncomfortable realities invisible — ensures that most people do not know.

What would adequate public investment look like? The question is easier to ask than to answer, but the broad outlines are derivable from the analysis.

Educational institutions that teach judgment, questioning, and integrative thinking rather than execution — and that assess students on the quality of the questions they ask rather than the quality of the answers they produce. Workforce institutions that provide continuous, adaptive retraining rather than one-time programs designed for a stable destination. Regulatory agencies with the technical capacity to understand and evaluate the systems they regulate. Cultural institutions — libraries, museums, community organizations — that create spaces for the kind of slow, reflective engagement that the AI-accelerated environment increasingly crowds out.

Each of these is a public good. Each generates diffuse benefits that cannot be privately captured. Each competes for funding with private investments that generate immediate, capturable returns. The structural incentives of the affluent society militate against each of them, and the history of the affluent society suggests that the structural incentives will prevail until the consequences of underinvestment become so visible, so costly, and so politically salient that the conventional wisdom can no longer contain them.

This is the pattern. The gleaming car and the crumbling road coexisted for decades before the political system invested in infrastructure. The amplified builder and the unadapted institution may coexist for decades more. The question is not whether public investment will eventually arrive — the pattern suggests it will — but how much damage will accumulate in the interim, and who will bear the cost.

---

Chapter 8: The Revised Industrial State: From Manufacturing to Inference

The industrial state that Galbraith described in 1967 was organized around a specific form of production. Large corporations manufactured physical goods — automobiles, appliances, chemicals, steel — through processes that required enormous capital investment, specialized labor, and management systems complex enough to coordinate thousands of workers and hundreds of suppliers across continental supply chains. The corporation's power derived from its control of this productive apparatus. The capital requirements created barriers to entry that insulated incumbents from competition. The complexity of the process created a dependence on the technostructure — the specialists without whose knowledge the apparatus could not function. The scale of the operation created a capacity to manage the market — to set prices, create demand, and influence the political environment — that no small enterprise could match.

The industrial state that is emerging in 2026 is organized around a different form of production: the production of inferences. Large AI companies produce not physical goods but computational predictions — pattern completions, language generation, code synthesis, image creation — through processes that require enormous capital investment (in compute infrastructure rather than factories), specialized labor (in machine learning rather than metallurgy), and management systems complex enough to coordinate the training, alignment, deployment, and continuous improvement of models whose internal operations are not fully understood even by the people who built them.

The parallels between these two industrial states are structural, not merely metaphorical: they operate at the level of economic architecture, and they illuminate the AI economy's trajectory with a specificity that the conventional wisdom's rhetoric of disruption and democratization systematically obscures.

The first parallel is the capital barrier. General Motors in 1967 required billions of dollars in plant, equipment, and working capital to compete in the automobile market. The capital requirement was not a contingent feature that would diminish as the industry matured. It was a structural characteristic of automobile manufacturing: the process required enormous fixed investment before the first car could be produced, and the investment created an insurmountable barrier for any potential competitor that lacked comparable resources. Anthropic, OpenAI, and Google DeepMind in 2026 require billions of dollars in compute infrastructure, training data, and research talent to produce frontier models. The capital requirement is not a transitional condition that will diminish as the technology matures. Frontier models require more compute with each generation, not less. The cost of training is increasing, not decreasing. The barrier to entry is rising, not falling.

The conventional wisdom about AI — that open-source models will democratize the technology, that smaller companies will compete effectively with frontier labs, that the barriers will fall as the technology matures — is the same conventional wisdom that was applied to the automobile industry in the early twentieth century, when hundreds of small manufacturers competed in an open market. The market consolidated. The small manufacturers disappeared. The barriers rose. The planning system emerged. The pattern is structural, and the AI industry exhibits every characteristic that produced consolidation in every previous capital-intensive industry.

The second parallel is the technostructure's indispensability. General Motors could not function without the collective expertise of its engineers, managers, and technical specialists. The knowledge required to design, manufacture, and distribute automobiles at scale exceeded the capacity of any individual, including the CEO, and the organization's dependence on this collective expertise was the source of the technostructure's power. The AI company cannot function without the collective expertise of its researchers, alignment scientists, and infrastructure engineers. The knowledge required to train, align, deploy, and maintain a frontier language model exceeds the capacity of any individual, and the organization's dependence on this collective expertise is the source of the new technostructure's power.

But the new technostructure differs from the old in a crucial respect: the concentration of its knowledge is more extreme. General Motors' technostructure numbered in the thousands — engineers, managers, specialists distributed across dozens of plants and offices. The AI technostructure at any single frontier lab numbers in the hundreds, and the subset of that technostructure whose knowledge is genuinely indispensable — the people who understand the architecture, the training process, the alignment methodology at a level sufficient for governance decisions — may number in the dozens. The concentration of indispensable knowledge in so few individuals is historically unprecedented, and it creates a power dynamic more extreme than anything Galbraith documented in the industrial corporation.

The third parallel, and the one that most directly concerns this book's argument, is the planning system's capacity to shape demand. General Motors did not discover that Americans wanted large cars with fins and chrome and annual styling changes. It created this preference through the most sophisticated demand-management apparatus the world had yet seen: national advertising, a dealer network that controlled the retail environment, consumer financing that made the purchase painless, and a cultural narrative — the automobile as freedom, as status, as the material expression of the American dream — that converted a transportation device into an identity marker.

The AI planning system shapes demand through mechanisms that are different in form but identical in function. The product launch creates anticipation. The free tier creates dependency. The integration with existing workflows creates switching costs. The cultural narrative — AI as empowerment, as democratization, as the material expression of human potential — converts a computational tool into an identity marker with the same efficiency that General Motors converted a machine into a dream. The builder who uses Claude Code is not merely using a tool. The builder is participating in a narrative of human capability expansion that the planning system has constructed and the builder has internalized.

Hunter Lewis observed this dynamic with particular acuity: "Unlike fire, row cropping, broadband internet, or mobile telephony, AI is neither a technique nor a platform. It is coveted intellectual property, created at great expense and wholly owned by the firms that develop it. It can't be built upon like traditional infrastructure; to build an AI product is to consume an AI product." The observation captures the distinction between the old industrial state and the new with a precision Galbraith would have admired. The automobile, once purchased, belonged to the buyer. The road, once built, was available to all. The AI model, once deployed, remains the property of the firm that trained it — accessed through a subscription, governed by terms of service, subject to changes in capability and pricing at the firm's discretion.

The Orange Pill's description of the Software Death Cross — the moment when AI market value overtakes traditional SaaS — marks the transition between industrial states as clearly as any stock chart can. The falling curve represents the old industrial state of software: companies that sold code as a product, whose value was in the difficulty of writing code, whose business model assumed that code was hard to produce and therefore worth paying for. The rising curve represents the new industrial state of inference: companies that sell computational predictions as a service, whose value is in the infrastructure required to produce those predictions, whose business model assumes that code is easy to produce and inference is the scarce resource.

The transition from one industrial state to the next does not proceed smoothly. Galbraith documented the transitions of previous eras — from agricultural to industrial, from industrial to post-industrial — and observed that each transition produced a period of profound dislocation in which the institutions built for the old state were inadequate for the new one and the institutions required for the new state had not yet been built. The labor protections designed for factory workers did not apply to service workers. The regulatory frameworks designed for manufacturing did not apply to financial engineering. The educational institutions designed for one economy did not prepare students for the next.

The AI transition is in this period of dislocation. The institutions built for the software industrial state — the SaaS business model, the developer hiring pipeline, the computer science curriculum, the venture capital evaluation framework — are being repriced by a market that has recognized, with the brutal efficiency that markets bring to obsolescence, that these institutions belong to an industrial state that is passing.

The institutions required for the new industrial state — the governance frameworks for inference providers, the educational models for AI-augmented work, the labor protections for workers whose roles are being restructured by AI, the antitrust concepts adequate to the economics of model training — have not yet been built. Some have been proposed. Fewer have been piloted. Almost none have been deployed at the scale the transition requires.

The gap between the institutions that are passing and the institutions that are needed is the space in which the AI transition's human costs will accumulate. Galbraith documented this gap in every industrial transition he studied and observed that the gap was always wider than the optimists predicted and always narrower than the fatalists feared. The optimists underestimated the gap because they assumed that institutional adaptation would keep pace with technological change. The fatalists overestimated it because they assumed that the old institutions would persist in their inadequacy indefinitely. The reality, in every case, was that the gap persisted long enough to impose genuine costs on a specific generation — the generation that happened to be working, learning, or parenting during the transition — and then closed, imperfectly, as new institutions were built under the pressure of the costs the gap had imposed.

David Lingenfelter, tracing the technostructure's evolution from Detroit to Silicon Valley in 2025, noted that each industrial transition transferred power not from one individual to another but from one organizational form to another. "The story," Lingenfelter wrote, "is not about the rise and fall of individual companies but about the rise and fall of organizational models — and the human costs of living through the transition between them." The observation is Galbraithian in its structural focus, and it captures what the triumphalists in every industrial transition have consistently missed: that the long-term trajectory toward greater capability does not mitigate the short-term cost to the people who live through the transition.

The revised industrial state is arriving. The old one is passing. The transition is real, the capability expansion is genuine, and the human costs of the gap between the institutions that are passing and the institutions that have not yet arrived will be borne by the generation that happens to be here now. This is not a warning about a possible future. It is a description of the present, and the present demands not celebration of the new state's potential but honest engagement with the dislocation its arrival has already produced.

Chapter 9: The Myth of Sovereignty in the Attention Economy

Consumer sovereignty is the most flattering fiction in the history of economic thought. It holds that the market serves the autonomous preferences of freely choosing individuals — that the consumer decides what to buy, the producer responds to the decision, and the transaction reflects the sovereign will of the person who opens the wallet. The fiction is flattering because it places the individual at the center of the economic universe, endowing every purchase with the dignity of a freely exercised choice. It is a fiction because Galbraith spent four decades demonstrating, with evidence drawn from every sector of the American economy, that the sequence operates largely in reverse.

The producer does not discover the consumer's preference and satisfy it. The producer shapes the preference and then satisfies the preference it shaped. General Motors did not learn that Americans wanted large cars with annual styling changes. General Motors created the desire for large cars with annual styling changes through an apparatus of demand management — advertising, dealer networks, consumer financing, planned obsolescence — so comprehensive that the desire, once produced, was indistinguishable from autonomous preference. The consumer experienced wanting the car as a free choice. The wanting was produced.

Galbraith called this the "revised sequence" — the reversal of the orthodox flow from consumer to producer into the actual flow from producer to consumer. The orthodox sequence held that consumer demand drives production. The revised sequence held that production drives consumer demand. The revised sequence was not a total description — some consumer preferences genuinely originate with the consumer — but it was a far more accurate description of the modern economy than the orthodox version, which survived not because it was true but because it was flattering to consumers and useful to producers, a combination that ensures a belief's persistence more reliably than any amount of evidence.

The AI economy has produced a successor fiction: creator sovereignty. The doctrine holds that the AI-empowered builder freely chooses what to build, how to build it, and on what terms. The builder is sovereign. The platform serves. The tool amplifies the builder's autonomous creative vision, and the amplification reflects the builder's free exercise of judgment about what deserves to exist in the world.

The fiction is flattering. It is also, by Galbraithian standards, approximately as accurate as its predecessor.

The builder's choices are constrained by structures the builder did not create and cannot unilaterally alter. The platform determines what the model can and cannot do — through alignment decisions, content policies, and capability limits that reflect the platform's institutional interests rather than the builder's creative vision. The pricing structure determines what level of capability the builder can afford — creating a hierarchy of access that segments the market into tiers corresponding to willingness to pay. The terms of service function as private law: written by the platform, enforced by the platform, amendable by the platform, accepted by the builder through a click that no one reads and everyone performs.

The training data determines what the model knows — and what it does not know, what perspectives are overrepresented, what languages are well-served, what cultural contexts are legible to the system. These curation decisions are made by the technostructure, with minimal public disclosure and no democratic accountability. The builder who uses the model inherits these decisions as constraints on creative possibility, constraints experienced not as external impositions but as the natural boundaries of what the tool can do.

The model's aesthetic tendencies — its default prose style, its patterns of reasoning, its habitual structures of argument — shape the builder's output in ways the builder may not recognize. The Orange Pill describes this phenomenon with unusual candor: passages where Claude produced text that "sounded like insight but broke under examination," where "the prose had outrun the thinking," where the smoothness of the output concealed the absence of the idea. These are not failures of the tool. They are features of a system whose design produces outputs optimized for plausibility rather than truth, for fluency rather than depth. The builder who accepts the output uncritically has not exercised sovereign judgment. The builder has accepted the system's aesthetic as a substitute for the builder's own.

Segal catches himself in this dynamic and writes about it with an honesty that is the book's most valuable feature. He describes deleting passages that Claude produced because he "could not tell whether I actually believed the argument or whether I just liked how it sounded." He describes the seduction of polished output: "the prose comes out polished. The structure comes out clean. The references arrive on time. And the seduction is that you start to mistake the quality of the output for the quality of your thinking."

This is consumer sovereignty's failure mode translated to the creative domain. The consumer who mistakes the desire manufactured by advertising for autonomous preference is the predecessor of the builder who mistakes the fluency manufactured by the model for genuine insight. In both cases, the individual experiences sovereignty. In both cases, the system has done the deciding.

The attention economy, which preceded and now converges with the AI economy, perfected the mechanisms through which sovereignty is simulated while choice is structured. The recommendation algorithm that learns your preferences and serves you more of them is not serving your autonomous taste. It is constructing your taste through a feedback loop in which each click narrows the range of what you encounter, and the narrowing is experienced as personalization rather than constraint. The social media feed that maximizes engagement is not responding to your autonomous desire for information. It is exploiting the neurological mechanisms — variable reward schedules, social validation loops, loss aversion — that produce compulsive engagement regardless of the content's value to the person engaging with it.

Segal confesses to having built such systems. "Early in my career," he writes, "I built a product that I knew was addictive by design. Not in the loose way people use that word now. I understood the engagement loops, the dopamine mechanics, the variable reward schedules, the social validation cycles." The confession is admirable and, in Galbraithian terms, significant: not because it reveals individual wrongdoing, but because it demonstrates that the mechanisms of demand creation are not accidental features of the attention economy. They are designed — by people who understand exactly what they are designing and who deploy that understanding in the service of institutional objectives that do not include the user's autonomous welfare.

The AI economy integrates these mechanisms at a deeper level. The chatbot that responds to your half-formed question before you have finished forming it is not serving your autonomous intellectual curiosity. It is shaping it — channeling it toward outputs the system is optimized to produce, rewarding certain patterns of inquiry with fluent responses and discouraging others with less satisfying outputs. The builder who describes a problem to Claude and receives a working prototype in minutes is not merely having a desire satisfied. The builder is being trained — through the reinforcement of immediate, high-quality feedback — to desire the specific kind of productive engagement that generates revenue for the platform and dependency in the user.

Galbraith observed that the most effective demand management is invisible to the person being managed. The consumer who is aware of being manipulated resists. The consumer who experiences the manipulation as free choice does not. The AI economy's demand management operates at this invisible level: the builder experiences creative flow, not platform dependency; capability expansion, not structured choice; sovereignty, not the revised sequence.

The structural question is not whether individual builders can resist. Some can. Segal's practice of deleting Claude's polished-but-hollow passages, of spending time in coffee shops writing by hand, of asking whether the output reflects his thinking or the model's fluency — these are acts of individual resistance that the Galbraithian framework respects without trusting. Individual resistance operates within the system that produces the dependency. The builder who resists today may not resist tomorrow. The builder who recognizes the seduction this time may not recognize it next time. And the millions of builders who do not possess Segal's decades of experience, or his self-awareness, or his willingness to delete good-sounding prose that does not sound like him — those builders are the market in which the revised sequence operates unimpeded.

The structural response is not individual resistance but institutional accountability. Transparency requirements that disclose the model's training data biases, aesthetic tendencies, and capability limitations — not in the unread terms of service but in the interface itself, at the moment of use. Interoperability requirements that prevent platform lock-in and ensure that the builder's choice of tool is a genuine choice rather than a switching-cost trap. Standards for algorithmic disclosure that make the mechanisms of demand creation visible to the people they affect. Competitive structures that prevent the consolidation of inference capability in so few hands that the "choice" of platform is a choice among a cartel's offerings rather than a genuine market.

These institutional structures do not currently exist. Their development is a task for the countervailing institutions whose absence this book has been documenting since its fourth chapter. And their absence is the space in which the revised sequence operates — the space in which the builder's sovereignty is simulated, the builder's dependency is deepened, and the planning system's control of the creative environment is mistaken, by everyone involved, for the builder's free exercise of creative will.

The fiction of sovereignty is comfortable. It is comfortable for the builder, who prefers to believe that creative decisions are autonomously made. It is comfortable for the platform, which prefers to be seen as serving the builder rather than structuring the builder's choices. It is comfortable for the conventional wisdom, which can celebrate the democratization of capability without examining the terms on which the capability is accessed.

Galbraith spent his career making comfortable fictions uncomfortable. The fiction of creator sovereignty in the AI economy is the latest in a long sequence of such fictions, and it will persist, as its predecessors did, until the consequences of its persistence become too costly to ignore. The consequences are accumulating now, in the dependency of builders on platforms they do not control, in the aesthetic homogenization of AI-assisted output, in the gradual erosion of the distinction between what the builder wants to create and what the system is optimized to produce. These consequences are not yet costly enough, or visible enough, to disturb the conventional wisdom. They will be.

---

Chapter 10: Are We Worth Amplifying, or Merely Worth Exploiting?

The Orange Pill closes with a question that operates as both a challenge and a benediction: "Are you worth amplifying?" The question is personal, direct, and powerful. It places the burden on the individual. It implies that the amplifier is neutral — that the quality of the output depends on the quality of what the individual brings to it. Feed it carelessness, and you get carelessness at scale. Feed it genuine care, and it carries that care further than any tool in history.

Galbraith's career was spent demonstrating that this framing, however emotionally compelling, is structurally incomplete. The amplifier is not neutral. It is owned, operated, and governed by organizations with their own institutional interests, and those interests shape the terms of amplification as decisively as the individual's worthiness shapes the content.

The question "Are you worth amplifying?" is necessary. It is not sufficient. The sufficient question has two parts, and the second part is the one that institutional economics exists to ask: Is the system through which you are amplified worth the power it commands?

The individual may be worth amplifying. The biases may be examined, the questions may be genuine, the judgment may be sound, the care may be real. Segal's account of his own practice — the discipline of rejecting Claude's output when it sounds better than it thinks, the willingness to spend hours in a coffee shop writing by hand until the version is honest, the constant self-interrogation about whether the compulsion is flow or addiction — is a credible portrait of a person taking the question of individual worthiness seriously.

But individual worthiness operates within institutional structures that determine who is amplified, on what terms, at what cost, and to whose benefit. A worthy individual amplified through a system that concentrates benefits and distributes costs produces an outcome that the individual's worthiness alone cannot redeem. The most brilliant developer in Lagos, amplified through Claude Code, produces output whose value flows in three directions: to the developer (in the form of enhanced capability), to the developer's employer or customers (in the form of better products), and to Anthropic (in the form of subscription revenue and, more importantly, the usage data that improves the model that generates the revenue that funds the next generation of infrastructure that deepens the dependency). The third flow is the one the conventional wisdom prefers not to measure, and it is the flow that determines, over time, whether the amplification produces broadly distributed benefit or concentrated extraction.

Galbraith documented this three-directional flow in every productive technology he studied. The automobile enhanced the buyer's mobility. It also enriched General Motors and enriched General Motors' capacity to shape the buyer's future choices. The television expanded the viewer's access to information. It also enriched the networks and enriched the networks' capacity to shape the viewer's future attention. The technology served the individual. The technology also served the institution. And the institutional benefit compounded over time in ways the individual benefit did not, because institutions accumulate power while individuals merely accumulate experience.

The AI economy reproduces this dynamic with a clarity that would have gratified Galbraith's analytical instincts if not his hopes for human progress. The individual builder is more capable. The planning system is more powerful. The capability and the power compound at different rates: the individual's capability is bounded by the terms of access the planning system sets, while the planning system's power grows with every user, every subscription, every inference that generates data that improves the model that generates more inferences.

This is not a conspiracy. It is a structural tendency. The planning system does not need to exploit the individual deliberately. It needs only to set the terms of access in accordance with its institutional interests, and the terms will, over time, produce a distribution of benefits that favors the institution over the individual. Not because the institution is malicious but because the institution is structured to compound its advantages, and the individual is not.

The question of institutional worthiness is harder to ask and harder to answer than the question of individual worthiness. Individual worthiness can be assessed through self-examination — the process Segal describes of asking "Am I here because I choose to be, or because I cannot leave?" Institutional worthiness requires a different kind of examination: an assessment of the structures through which power flows, the incentives that shape institutional behavior, the accountability mechanisms that constrain institutional discretion. This assessment cannot be performed by the individual alone. It requires the collective institutions — the countervailing powers — whose development this book has been documenting and whose absence it has been lamenting.

Consider the specific institutional structures through which AI amplification currently flows.

The pricing structure determines who is amplified and at what level. The free tier provides capability sufficient to create dependency but insufficient for professional use. The professional tier captures the productive user at a price that is affordable for knowledge workers in wealthy nations and prohibitive for many elsewhere. The enterprise tier locks organizations into relationships whose switching costs create dependencies that persist independently of the platform's continued merit. Each tier is a structural decision about who benefits from amplification and on what terms — a decision made by the planning system in accordance with its growth objectives, experienced by the user as a menu of free choices.

The training data determines what the amplifier knows and what it does not — what languages it speaks fluently, what cultural contexts it understands, what perspectives are represented in its outputs. These curation decisions embed values, whether intentionally or through the biases of the data itself, and the embedded values shape every output the model produces. The builder who is amplified through a model trained predominantly on English-language data from Western institutions is amplified through a specific cultural lens — one that is invisible to most users and unaccountable to any external authority.

The alignment decisions determine what the amplifier will and will not do — the boundaries of permissible output, the sensitive topics it will engage or deflect, the ethical judgments embedded in its behavior. These decisions are made by the technostructure, with varying degrees of transparency, and they constitute a form of private governance more consequential than most public regulation. The builder whose work is shaped by alignment constraints did not participate in setting those constraints, cannot appeal their application, and may not even be aware of their existence.

The terms of service determine the legal framework within which amplification occurs — who owns the output, what the platform may do with the user's data, what remedies are available if the platform changes the terms. These terms are written by the platform, accepted by the user through a click, and amendable by the platform at its discretion. They are, in Galbraithian terms, the private law of the planning system — as consequential in their domain as public law, and considerably less democratic in their creation.

Each of these structures shapes the terms of amplification. Each operates in accordance with the planning system's institutional interests. Each is experienced by the user as a feature of the tool rather than a constraint of the institution. And each, taken together with the others, constitutes a system of structured choice that the conventional wisdom calls empowerment and that Galbraithian analysis calls the revised industrial state operating at the level of cognition itself.

None of this means that AI amplification is a sham or that the individual builder's worthiness is irrelevant. Galbraith was not a nihilist. He did not argue that individual agency was meaningless. He argued that individual agency operated within institutional structures that constrained it, and that understanding those constraints was the prerequisite for expanding the space in which agency could operate. The builder who understands the dependence effect, the planning system, the revised sequence, the structural distribution of benefits — that builder is not less empowered. That builder is more honestly empowered, because the empowerment is exercised with an awareness of the constraints that shape it.

Galbraith wrote in The Good Society that a decent civilization is one that provides opportunity not only for the comfortable but for the least fortunate of its members. The AI transition will be judged by this standard — not by the capability it provides to the already capable, but by the structures it builds to ensure that capability is distributed broadly enough to constitute a genuine expansion of human welfare rather than a new mechanism for the concentration of advantage.

The Orange Pill calls for worthiness. This analysis calls for accountability. The two are not alternatives. They are the two necessary conditions of a just amplification. The individual must be worthy. The system must be accountable. A book that holds only the first is holding half the argument, and a society that builds only the first is building half the structure required for the transition to produce something other than the familiar pattern: private opulence, public squalor, and the conventional wisdom's assurance that everything is proceeding as it should.

Galbraith spent his career making the second half of the argument — the structural half, the institutional half, the uncomfortable half — because no one else was making it with sufficient force. The economics profession preferred the elegance of the competitive model. The business community preferred the mythology of the sovereign consumer. The political system preferred the comfort of the conventional wisdom. Galbraith preferred the truth, even when the truth was that the system's most celebrated features were also its most effective mechanisms of control.

The AI economy is the most powerful amplifier in human history. The question of what it amplifies is a question of individual worthiness. The question of how the amplification is structured, who captures its benefits, and who bears its costs is a question of institutional design. The first question is personal. The second is political. A civilization that answers only the first will produce the most magnificent private capability in human history alongside the most degraded public institutions — the gleaming car on the crumbling road, updated for the age of inference.

The question is not whether we are worth amplifying. The evidence suggests many of us are. The question is whether we will build the structures that ensure the amplification serves the amplified rather than the amplifier. That question cannot be answered by the individual alone. It can only be answered by institutions — by the countervailing powers that do not yet exist, by the regulatory frameworks that have not yet been built, by the cultural norms that have not yet been established, by the collective decisions that no individual, however worthy, can make on behalf of a civilization.

Galbraith asked this question about the industrial economy. He asked it about the affluent society. He asked it about the new industrial state. The question has never been comfortably answered. It has only been answered well enough, belatedly enough, imperfectly enough, to prevent the worst outcomes and to build, through decades of institutional struggle, something approaching a tolerable civilization.

The AI economy deserves the same question, asked with the same force, and answered with the same imperfect, belated, institutional determination that eventually — eventually — built the eight-hour day, the weekend, the public school, and the right of workers to organize against the power that employed them. These were not gifts from the planning system. They were extracted from it, by countervailing institutions that took decades to build and that required, as their foundation, the recognition that individual worthiness was necessary but that institutional accountability was the thing that actually determined outcomes.

That recognition is where this book begins and ends. The amplifier is powerful. The amplified may be worthy. The structures through which the amplification flows will determine whether the power and the worthiness produce a just civilization or merely a more efficient version of the unjust one.

The conventional wisdom says the amplifier is neutral. Galbraith's entire body of work says otherwise. Nothing embedded in a system of concentrated economic power is neutral, however comforting the fiction of neutrality may be. The question is not whether we are worth amplifying. The question is whether we will build the institutions that make the amplification worth having.

---

Epilogue

The board conversation comes around every quarter, and every quarter it follows the same arc. The numbers go up on the screen — what the team built, how fast they built it, what a single engineer can now accomplish in a week. The room gets quiet in the particular way rooms get quiet when everyone is running the same arithmetic: if five people can do what a hundred did, why are we paying a hundred?

I know the arithmetic. I can do it faster than anyone in the room.

The reason the question stays with me, long after the meeting ends, is that I have never once heard a board conversation about who funds the retraining. Who builds the school that teaches judgment instead of syntax. Who maintains the institutions that make the transition bearable for the people who are not in the room, who do not have a subscription to the frontier model, who did not happen to be standing at the right point in the river when the current changed.

Galbraith called this the asymmetry between private opulence and public squalor, and I used to think the phrase was a rhetorical flourish — vivid, maybe overstated, the kind of thing economists say when they want to be quoted. Then I spent six months watching the AI transition from the inside, and I realized it was the most precise description of what I was living through that anyone had offered me. The private capability is spectacular. The public infrastructure is not merely inadequate. It is absent. The dams I called for in The Orange Pill — the educational reforms, the retraining programs, the regulatory frameworks, the cultural norms that protect human depth against the pressure of frictionless acceleration — remain largely unbuilt, and the structural reasons for their absence are exactly the ones Galbraith identified seventy years ago: the market rewards private investment and penalizes public goods, and no amount of individual goodwill changes the incentive structure.

What Galbraith gave me — what made this particular journey through his ideas the most uncomfortable of the cycle — is the recognition that my own framework was incomplete. The Orange Pill asks whether you are worth amplifying. It places the burden on the individual. Examine your biases. Sharpen your questions. Bring genuine care to the tool, and the tool will carry it further than any instrument in history.

I still believe that. Every word of it.

But I now see what the question leaves out. The amplifier is not neutral. It is owned, governed, priced, and structured by organizations whose institutional interests shape the terms of amplification as decisively as my worthiness shapes the content. I can be the most worthy builder in the world, and the question of who captures the value of my amplified output, who controls the infrastructure I depend on, who writes the terms of service I accept without reading — those questions are not answered by my worthiness. They are answered by structures I did not build and cannot unilaterally change.

That does not make individual worthiness irrelevant. It makes it necessary but insufficient. And the distance between necessary and sufficient is where the institutional work lives — the work of building countervailing power, the work that takes decades, the work that no quarterly earnings cycle rewards, the work that history suggests will eventually get done because the costs of not doing it become unbearable.

The costs are accumulating now. Not in the dramatic form of mass unemployment — the apocalypse scenario that makes for good headlines and bad policy. In the quiet form of schools that have not adapted, workers whose roles are narrowing task by task, parents who cannot answer their children's questions about a world the parents did not build and do not fully understand. The quiet costs are the dangerous ones, because quiet costs can be ignored until they cannot, and by then the damage has compounded.

I do not have Galbraith's gift for the devastating sentence or the perfectly timed irony. What I have is the view from inside the system he described — the view of someone who runs the arithmetic, who knows the pressure to convert productivity gains into headcount reduction, who feels the structural pull toward private capture and against public investment every single quarter.

The question I carry out of this journey is not the one I carried in. I came in asking whether individuals are worth amplifying. I leave asking whether we will build the institutions that make the amplification worth having — for everyone, not just for those of us lucky enough to be in the room when the numbers go up.

Galbraith would have said the institutions will arrive eventually, because the costs of their absence always eventually become undeniable. He would also have said that "eventually" is a word that conceals a generation's worth of avoidable suffering.

I would like us to do better than eventually.

— Edo Segal

---

Back Cover

The conventional wisdom about AI says the tools empower individuals, flatten hierarchies, and democratize who gets to build. The conventional wisdom is partly right — which is exactly what makes it dangerous. John Kenneth Galbraith devoted his career to dismantling comfortable beliefs that persist not because they survive scrutiny but because they survive social acceptability. His frameworks — the technostructure, the planning system, the dependence effect, the revised sequence — reveal the structural reality beneath the empowerment narrative: that the companies building AI set the prices, control the infrastructure, shape the demand, and write the private law governing how millions exercise their supposedly sovereign creative will.

This book applies Galbraith's institutional economics to the AI revolution with uncomfortable precision. It asks not whether individuals are worth amplifying, but whether the systems through which amplification flows are accountable to anyone beyond themselves. It traces the pattern from General Motors to Google DeepMind and finds the architecture unchanged: private opulence, public squalor, and a conventional wisdom designed to make the gap invisible.
