By Edo Segal
Three dollars and forty-seven cents.
That is what it cost me, in API fees, to generate the first draft of a contract my lawyer would have billed four hours for. Twelve pages. Clean language. Standard clauses. My lawyer found two issues — a jurisdictional nuance and a liability provision that was technically correct but strategically unwise given the specific counterparty. Two catches out of twelve pages. Ninety-eight percent right, for less than the price of a coffee.
I did not fire my lawyer. But I understood, in the three minutes it took him to find those two problems, exactly what I was paying him for. Not the twelve pages. The two catches. The judgment. The thing the smooth output concealed.
That gap — between what anyone can now produce and what only expertise can identify — is the territory I have been trying to map since I took the orange pill. But mapping territory requires tools, and the tool I was missing was economic.
I am a builder. I think in products, in teams, in what ships and what breaks. When I watched a trillion dollars of market value vanish from software companies in early 2026, I understood what was dying — the old theory that code is the moat. What I could not articulate was the precise mechanism by which value migrates when production becomes cheap. I could feel it happening. I could not name the forces driving it.
Carl Shapiro could. He spent four decades studying exactly this: what happens in markets where information is the primary good, where network effects tip industries toward concentration, where switching costs accumulate through individually rational decisions that collectively produce captivity nobody chose. His framework, built for telephones and operating systems and search engines, maps onto the AI moment with a precision that made me uncomfortable — because it described my own lock-in before I had a word for it.
This book applies Shapiro's economics to the questions I carry into every boardroom. When my team achieves a twenty-fold productivity multiplier, who captures the surplus? When the surface quality of AI output conceals the presence or absence of real judgment, what happens to the market for expertise? When network effects compound at the speed of conversation, how narrow is the window for building the institutions that determine whether this moment enriches broadly or concentrates narrowly?
The river flows. The economics are real. And the forces Shapiro identified do not care about our intentions. They will produce their predicted outcomes with the indifference of gravity.
Understanding those forces is the first step toward building dams that redirect them.
— Edo Segal × Opus 4.6
Carl Shapiro (1955–) is an American economist specializing in industrial organization, antitrust policy, and the economics of information markets. A professor at the Haas School of Business at the University of California, Berkeley, Shapiro co-authored Information Rules: A Strategic Guide to the Network Economy (1999) with Hal Varian, a foundational text on how network effects, switching costs, lock-in, and versioning shape competition in technology markets. His earlier theoretical work with Michael Katz, including "Network Externalities, Competition, and Compatibility" (1985), formalized the economics of network effects that now underpin the analysis of platform markets worldwide. Shapiro served as Deputy Assistant Attorney General for Economics in the Antitrust Division of the U.S. Department of Justice under both the Clinton and Obama administrations, shaping competition policy during two pivotal eras of technology industry consolidation. His scholarship on merger analysis, intellectual property licensing, and the competitive dynamics of standards-setting has influenced antitrust enforcement across multiple technology generations. He continues to consult on competition matters and write on the economics of market power in digital industries.
In 1999, Carl Shapiro and Hal Varian published a single sentence that would prove more durable than the entire dot-com bubble inflating around them: "Technology changes. Economic laws do not."
The sentence appeared in Information Rules: A Strategic Guide to the Network Economy, a book written at the precise moment when the conventional wisdom held that the internet had repealed the laws of economics. Stock valuations had detached from revenue. Companies with no profits commanded market capitalizations larger than General Motors. A new vocabulary had emerged — "eyeballs," "stickiness," "first-mover advantage" — that sounded like economics but operated more like incantation. The premise was that the rules governing industrial economies simply did not apply to information economies, that something so fundamentally new had arrived that the old analytical tools were obsolete.
Shapiro and Varian's response was not to deny the newness. The internet was genuinely transformative. What they denied was the premise that transformation requires new economics. The forces shaping information markets — network effects, switching costs, lock-in, the peculiar cost structure of goods that are expensive to produce and nearly free to reproduce — were not inventions of the 1990s. They were features of any market in which information was the primary good. The telephone market of the 1890s exhibited them. The railroad network of the 1870s exhibited them. What the internet did was make these forces faster, larger, and more visible. It did not make them new.
Twenty-seven years later, the same error is being committed with greater enthusiasm and higher stakes. The arrival of large language models that produce working software through natural conversation, that collapse what Edo Segal calls the "imagination-to-artifact ratio" to the length of a conversation, has generated a discourse remarkably similar to the one Shapiro and Varian confronted in 1999. The vocabulary has changed — "prompting," "agentic workflows," "the orange pill" — but the underlying claim is identical: something so fundamentally new has arrived that the old analytical tools are obsolete.
The claim is wrong in the same way and for the same reasons.
The economics of artificial intelligence are the economics of information goods, applied to a new and extraordinarily powerful category of information good. The cost structure is the same: enormous fixed costs of development (billions of dollars in training compute, data acquisition, and research talent), near-zero marginal costs of distribution (serving an additional user costs pennies relative to the development investment). The market dynamics are the same: network effects that cause markets to tip toward dominant platforms, switching costs that accumulate with each interaction, lock-in that transfers bargaining power from users to platform providers. The strategic questions are the same: Who captures the value? How is the surplus distributed? What institutional structures determine whether the technology serves broad public welfare or concentrates economic power in the hands of a few?
These are the questions that Shapiro's framework was built to answer. That the framework was constructed for a previous generation of technology is not a limitation. It is the point. The forces are the same. Only the application has changed.
To understand why the application matters so profoundly, consider what has actually happened to the cost structure of software production. Segal describes it vividly in The Orange Pill: a Google principal engineer sat down with Claude Code, described a problem in three paragraphs, and received a working prototype of her team's system in one hour — a system her team had spent the past year trying to build. An engineer in Trivandrum who had never written frontend code built a complete user-facing feature in two days. A non-technical founder prototyped a revenue-generating product over a weekend.
Strip the narrative excitement from these accounts and what remains is an economic fact of extraordinary consequence: the first-copy cost of software has collapsed. In classical information economics, the first-copy cost is the cost of producing the initial version of an information good — the years of development, the accumulated expertise, the iterative refinement through testing and user feedback. Every subsequent copy is nearly free, because information can be reproduced at negligible cost. The ratio between first-copy cost and marginal cost is what defines the economics of information goods and distinguishes them from physical goods.
What AI has done is compress the first-copy cost toward the marginal cost. The engineer in Trivandrum did not eliminate the marginal cost of software distribution — that was already near zero. She eliminated the first-copy cost of software production. The expertise that previously took years to accumulate, the iterative development that previously consumed teams of engineers for months, the translation from human intention to working code that previously required deep technical skill — all of this has been compressed into a conversation.
The economic consequence follows with the regularity of gravity. When the first-copy cost collapses, the good becomes a commodity. When a good becomes a commodity, its price falls toward marginal cost. When the price falls toward marginal cost, the value migrates to adjacent layers — to whatever remains scarce after the commoditized layer becomes abundant.
This is precisely the pattern Segal documents in his chapter on the Software Death Cross. A trillion dollars of market capitalization vanished from software companies in the first weeks of 2026. Workday fell thirty-five percent. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. The market was not panicking irrationally. The market was repricing the software industry according to a new theory of value — one in which the code itself, the thing that SaaS companies had sold for two decades, was no longer the scarce resource that justified the price.
Shapiro and Varian's framework predicted this repricing with uncomfortable precision. In Information Rules, they identified the characteristic pattern of information-good commoditization: "The cost of producing the first copy of an information good may be substantial, but the cost of producing (or reproducing) additional copies is negligible. This cost structure leads to economies of scale: the more you produce, the lower your average cost of production." When a new technology drives the first-copy cost toward zero, the economies of scale that sustained the producer's pricing power collapse, because the cost advantage of large-scale production disappears when production itself is nearly free.
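The arithmetic behind that cost structure is simple enough to sketch. A toy calculation (all dollar figures invented for illustration, not drawn from any real product) shows why a collapsing first-copy cost destroys the scale advantage rather than merely lowering prices:

```python
def average_cost(first_copy_cost: float, marginal_cost: float, copies: int) -> float:
    """Average cost per copy of an information good:
    the fixed first-copy cost spread over volume, plus the marginal cost."""
    return first_copy_cost / copies + marginal_cost

# Classical software economics: a $10M first copy, a cent per additional copy.
classic_small = average_cost(10_000_000, 0.01, 1_000)       # ~ $10,000 per copy
classic_large = average_cost(10_000_000, 0.01, 10_000_000)  # ~ $1.01 per copy

# AI-era economics: the first-copy cost itself collapses (say, $500 of API fees).
ai_small = average_cost(500, 0.01, 1_000)                   # ~ $0.51 per copy

# The incumbent's advantage is the gap between small- and large-volume average
# cost; once the first-copy cost collapses, that gap nearly vanishes.
print(classic_small / classic_large)
print(ai_small)
```

The scale economies Shapiro and Varian describe live entirely in the first term; compress the first-copy cost and the curve flattens for incumbent and newcomer alike.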
The parallel to previous information revolutions is structural, not merely analogical. When the printing press commoditized text reproduction, value migrated from the act of copying to the act of composing and distributing. Scribes became obsolete; authors and publishers became powerful. When digital photography commoditized image capture, value migrated from the physical photograph to curation and distribution. Stock photography agencies that built businesses on the scarcity of images found themselves competing with platforms offering millions of images for free. When digital distribution commoditized music reproduction — a history Segal knows intimately from his work at Napster — value migrated from the recorded artifact to live performance, licensing, and platform economics.
In each case, the pattern is identical: commoditization of production, migration of value to adjacent scarce layers, concentration of economic power in whoever controls those layers. The entities that captured value after each commoditization were not the producers of the commoditized good. They were the controllers of the complementary assets that the commoditized good required: distribution networks, curatorial authority, platform infrastructure, institutional trust.
The software industry is following this trajectory at a speed that compresses what previous information revolutions accomplished over decades into a timeline measured in months. Segal recognizes the destination — his analysis of the Death Cross correctly identifies the ecosystem, not the code, as the durable source of value. But the speed of the compression creates economic consequences that the original analysis does not fully develop.
When commoditization occurs over decades, markets have time to adjust. Workers retrain. Business models evolve. Institutional structures develop to manage the transition. When commoditization occurs in months, the adjustment period is brutally compressed. The senior engineer whom Segal describes — the one who felt like a master calligrapher watching the printing press arrive — does not have the luxury of a generational transition. The market is repricing his expertise in real time, and the institutional structures that might support his transition (retraining programs, transitional employment support, professional development pathways from old skills to new) have not yet been built.
Shapiro's framework identifies a further consequence of rapid commoditization that Segal's narrative of individual transformation does not capture: the distribution of the surplus. When a product is expensive to produce, the surplus — the difference between what consumers are willing to pay and what the product costs to produce — is divided between producers and consumers in a ratio determined by market structure. Producers capture surplus through pricing power. Consumers capture surplus through competition among producers. When production becomes cheap, the total surplus expands dramatically, because the gap between willingness to pay and cost of production widens. But the distribution of the expanded surplus depends entirely on the market structure of the adjacent layers to which value has migrated.
If those adjacent layers are competitive — if multiple providers offer the scarce complementary assets — consumers capture most of the expanded surplus. If those layers are concentrated — if a small number of platform companies control the data, integrations, and institutional relationships that make software valuable — the platform companies capture the surplus, and consumers, despite gaining access to cheaper software, may find themselves paying more for the ecosystem services that the software requires.
Segal's twenty-fold productivity multiplier is a measure of surplus expansion. Twenty engineers, each producing the output of twenty, represent an enormous widening of the gap between the value created and the cost of creating it. But who captures that surplus — the individual builder, the employer, the AI platform company, or the end consumer — is determined not by the productivity gain itself but by the market structure of the layers that remain scarce. The economics of information goods predict that the surplus will flow to whoever controls the scarce complementary assets. The question of who that is — and whether institutional structures can ensure that the surplus is distributed broadly — is the central economic question of the AI transition.
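The dependence of the outcome on market structure can be made concrete with a toy split of the surplus (a sketch with invented figures, not a claim about actual prices or shares):

```python
def surplus_split(value_created: float, production_cost: float,
                  platform_share: float) -> dict:
    """Split the surplus from a unit of software between the scarce platform
    layer and everyone else (builder, employer, end consumer). platform_share
    is the fraction of surplus the adjacent layer's market structure lets it
    capture: near 0 when that layer is competitive, near 1 when concentrated."""
    surplus = value_created - production_cost
    return {
        "total": surplus,
        "platform": surplus * platform_share,
        "broad": surplus * (1 - platform_share),
    }

# Illustrative figures only. Pre-AI: an engineer-month of output worth
# $20,000, produced at a cost of $15,000.
before = surplus_split(20_000, 15_000, platform_share=0.25)

# Post-AI: the same engineer-month yields twenty-fold output at similar cost.
# The total surplus explodes either way; who gets it depends on platform_share.
competitive = surplus_split(20_000 * 20, 15_000, platform_share=0.25)
concentrated = surplus_split(20_000 * 20, 15_000, platform_share=0.75)

print(before["total"])        # 5000: a modest surplus to divide
print(competitive["broad"])   # 288750.0: most of the expansion flows broadly
print(concentrated["broad"])  # 96250.0: same expansion, captured upstream
```

The productivity gain fixes only the first argument; the last argument, which nothing in the technology determines, decides where the money goes.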
Technology changes. Economic laws do not. The law that governs this moment is the law of commoditization and value migration: when production becomes cheap, value concentrates in whatever remains scarce. The AI revolution has made software production cheap. The scarce resources — data, judgment, institutional trust, ecosystem integration, and the platform infrastructure that connects them — are being contested right now, by a small number of firms whose strategic choices will determine the economic structure of the intelligence age.
Shapiro spent a career studying how information markets allocate value, concentrate power, and respond to institutional intervention. His framework does not need to be updated for AI. It needs to be applied. The forces are the same. The stakes are immeasurably higher.
---
In 1985, Michael Katz and Carl Shapiro published "Network Externalities, Competition, and Compatibility" in the American Economic Review, a paper that formalized what telephone engineers and railroad executives had understood intuitively for a century: that some goods become more valuable as more people use them. The paper established the theoretical foundation for understanding how network effects shape technology adoption, market structure, and competitive dynamics. It showed, with mathematical precision, that network externalities can create winner-takes-all dynamics, tipping markets toward a dominant platform from which displacement becomes progressively more difficult.
The theory identified two canonical forms. Direct network effects arise when the value of a product to each user increases directly with the number of other users on the same network. The telephone is the textbook case: one telephone is useless; a million telephones are a communication infrastructure. The value scales with adoption, creating a positive feedback loop — more users make the network more valuable, which attracts more users, which makes it more valuable still — that drives the market toward consolidation.
Indirect network effects arise when a larger installed base attracts more producers of complementary goods, which in turn makes the platform more valuable to users. The Windows operating system exemplified this dynamic for two decades: more users attracted more software developers, more software attracted more users, and the self-reinforcing cycle drove Microsoft to a dominance that antitrust litigation could constrain but not reverse. The platform's value to any individual user depended not on how many other users existed (users did not communicate with one another through the operating system) but on how many software developers had been attracted by the installed base. The network effect was indirect — mediated through the complementary goods market — but no less powerful for being mediated.
These two forms of network effects have organized the study of technology markets for four decades. They explain the consolidation of telecommunications, the dominance of Windows, the rise of Facebook, the tipping of the search market toward Google. They are among the most validated theoretical constructs in applied economics.
AI platforms exhibit both forms. The direct network effect operates through professional communities: the value of being on the same AI platform as colleagues, collaborators, and industry peers creates adoption pressure analogous to the telephone network effect. The indirect network effect operates through the developer ecosystem: as the author of The Orange Pill documents, engineers are building tools, plugins, integrations, and workflows that depend on specific AI platforms. Each complementary tool increases the platform's value to users, which attracts more users, which attracts more developers. The MCP integration framework that Segal references is precisely the kind of complementary goods infrastructure that Katz and Shapiro's theory predicts will form around a platform exhibiting increasing returns.
But AI platforms also exhibit a third form of network effect that has no precise precedent in the history of information goods and that the Katz-Shapiro framework, for all its power, did not anticipate. This third effect arises from the relationship between usage and model improvement.
Every interaction with a large language model generates signal. Not merely the explicit feedback of user ratings, but the implicit information encoded in which responses are accepted, which are modified, which are rejected, which prompts produce useful output and which do not. This behavioral data feeds into the refinement process — through reinforcement learning from human feedback, through the identification of capability gaps, through the accumulation of domain-specific interaction patterns that inform future model development. The more people use the platform, the more signal it accumulates, the more capable the model becomes, which attracts more users, which generates more signal.
This data network effect is structurally distinct from both direct and indirect network effects. In a direct network effect, each user adds value by being reachable or present on the network. In an indirect network effect, each user adds value by attracting complementary goods producers. In the data network effect, each user adds value by teaching the model. The product itself improves as a function of usage, creating a feedback loop in which the act of consumption simultaneously improves the good being consumed.
The distinction matters because the data network effect creates a competitive advantage that is cumulative, compounding, and extraordinarily difficult to replicate. A platform with a billion user interactions has a model refined by a billion interactions' worth of behavioral signal. A new entrant with zero interactions begins with whatever capability its initial training provides. The quality gap between the incumbent and the entrant widens with every interaction on the incumbent's platform, creating a barrier to entry that grows rather than diminishes over time. This is the opposite of what occurs in most markets, where incumbent advantages erode as competitors learn and improve. In the data network effect, the incumbent's advantage compounds, because the learning itself is the advantage, and the incumbent is learning faster by virtue of having more users from whom to learn.
Hal Varian — Shapiro's co-author on Information Rules and later Google's chief economist — identified this dynamic in his 2018 NBER working paper "Artificial Intelligence, Economics, and Industrial Organization," a chapter originally conceived as a joint project with Shapiro before Shapiro withdrew due to other commitments. Varian's chapter considers "how machine learning availability might affect the industrial organization of both firms that provide AI services and industries that adopt AI technology," and identifies data access and returns to scale as central forces shaping AI market structure. The paper that Shapiro helped conceive but did not complete thus became one of the earliest formal economic analyses of exactly the dynamics now playing out in the AI platform market.
The interaction of all three network effects — direct, indirect, and data — creates competitive dynamics more powerful than any single effect alone. Consider the positive feedback loops operating simultaneously in the Claude ecosystem that Segal describes. The direct network effect: as more engineers adopt Claude Code, the professional value of fluency in Claude's specific capabilities increases, creating adoption pressure on engineers who have not yet adopted. The indirect network effect: as the installed base grows, more developers build complementary tools — IDE integrations, workflow automations, domain-specific plugins — that increase the platform's value to existing users and attract new ones. The data network effect: every interaction across every user generates signal that refines the model, improving the quality of output for all users, which attracts more users, which generates more signal.
The three loops reinforce each other. A better model (from the data network effect) attracts more users (strengthening the direct network effect), which attracts more complementary goods developers (strengthening the indirect network effect), which makes the platform more valuable to users (attracting more users), which generates more training signal (further strengthening the data network effect). The compound feedback loop is self-accelerating: each circuit through the three effects makes the next circuit faster and stronger.
Shapiro's theoretical work on market tipping predicted exactly this kind of self-reinforcing dynamic. In his 1994 paper with Katz, "Systems Competition and Network Effects," he showed that positive feedback in network markets creates tipping points — thresholds beyond which the leading platform's advantage becomes self-sustaining and the market converges on a single dominant standard. The question for AI platform markets is not whether tipping will occur — the economic logic makes tipping the default outcome in any market exhibiting strong network effects — but when, and with what consequences for competition, innovation, and the distribution of economic surplus.
The adoption data that Segal provides in The Orange Pill offers evidence about the timing. ChatGPT reached one hundred million users in two months — an adoption rate that compressed into weeks what previous network goods achieved over years. Claude Code's run-rate revenue crossed $2.5 billion by February 2026, a growth curve steeper than any developer tool in history. These figures are not merely measures of product quality. They are network effects in action: the speed of adoption reflects the self-reinforcing dynamics of a network good in which each user's adoption simultaneously improves the product for every subsequent user.
But the speed creates a specific economic problem. Shapiro warned in congressional testimony that "determining when and whether to intervene in dynamic industries can be especially difficult in the presence of switching costs, network effects, and other factors that can cause a market to 'tip' towards one supplier or one technology in a lasting manner." The window for effective intervention in a tipping market is narrow: once the market has tipped, the dominant platform's position is self-reinforcing, and the costs of reversing the outcome through regulatory intervention increase dramatically. "A snapshot of market shares may suggest effective competition between two or more firms," Shapiro observed, "yet if one firm has a sizeable market share that is rapidly growing, that firm may come to dominate the market in a manner that will be difficult to reverse."
The AI platform market is in exactly this pre-tipping state. Multiple firms — Anthropic, OpenAI, Google, Meta — are competing for the platform position, and the market has not yet consolidated around a single dominant standard. But the three-way network effect is compounding. With each passing month, the leading platforms accumulate more data, more complementary goods, more professional community adoption. The window during which the market structure remains contestable is closing at a rate determined by the speed of the compound feedback loop.
The interaction between global and local network effects adds complexity. The data network effect is global: every user's interaction, regardless of location or profession, contributes to model improvement that benefits all users. But the professional network effect is local: the value of being on the same platform as industry peers depends on adoption patterns within specific professional communities, not the total user base. An architect benefits from being on the platform that other architects use, because the model's capabilities in architectural domain knowledge improve as more architects interact with it.
This creates the possibility of market fragmentation along professional and geographic lines. The AI platform market may not produce a single global monopoly but rather a set of regional and professional oligopolies, each exhibiting local network effects within its domain while a small number of foundational model providers capture the global data network effect. Chinese AI platforms, governed by different regulatory frameworks and trained on different data, may dominate the Chinese market while Western platforms dominate their respective geographies. Medical AI may consolidate around one platform while legal AI consolidates around another.
This fragmentation would change the competitive dynamics but not the underlying economic logic. Within each segment, the three-way network effect would still drive toward consolidation. The question of market power, lock-in, and surplus distribution would persist at the segment level even if no single platform achieves global monopoly. Shapiro's framework applies whether the market is global or segmented; what changes is the unit of analysis, not the forces at work.
The practical consequence for every participant in the AI economy — from the individual builder to the enterprise adopter to the policymaker — is that the network effects are accumulating now, the competitive positions are being established now, and the window during which intervention can shape the market structure is open now. It will not remain open indefinitely. The economics of network effects are patient but directional: they drive toward concentration with the steady pressure of compound interest. The question is whether the institutional response will arrive before the tipping point, or after — when the cost of intervention has multiplied and the market structure has hardened into a form that serves the platform's interests regardless of whether it serves the public's.
---
Lock-in is the economic mechanism by which voluntary adoption becomes involuntary dependence. Shapiro's career-long investigation of this phenomenon — from the theoretical foundations laid with Katz in the 1980s through the strategic analysis of Information Rules in 1999 to the antitrust policy applications of the 2010s and 2020s — constitutes perhaps the most sustained and rigorous examination of how information markets transform user choice into user captivity.
The mechanism is deceptively simple. A user adopts a technology. The adoption generates investments — in learning, in data, in workflows, in complementary goods — that are specific to the chosen technology and cannot be transferred to alternatives without cost. These investments accumulate with each interaction. Each individual investment is small enough to feel inconsequential: learning one keyboard shortcut, building one workflow, accumulating one month of conversation history. But the investments compound, and the compound total eventually exceeds the benefit of switching to any alternative, regardless of how superior the alternative might be.
At that point, the user is locked in. Not by a contract. Not by coercion. By the accumulated weight of her own rational decisions.
Shapiro and Varian identified this dynamic as one of the defining features of information markets. "Switching to incompatible products is difficult," they wrote in Information Rules, "so customers can get 'locked in' once they have made an investment in information goods based on a given technology." The lock-in transfers bargaining power from the user to the platform provider. The locked-in user cannot credibly threaten to leave, because the cost of leaving exceeds the benefit. The platform provider, knowing this, can raise prices, reduce quality, change terms of service, or redirect the platform's development away from the user's interests — all without losing the installed base that generates the platform's network effects and revenue.
The AI ecosystem is generating lock-in at a speed and depth that Shapiro's earlier analyses of enterprise software, operating systems, and telecommunications did not anticipate. Not because the mechanism is different — it is structurally identical — but because the investments that generate lock-in are accumulating faster, across more dimensions, and with less visibility to the users making them.
Four distinct sources of lock-in are operating simultaneously in AI platform markets, and their interaction produces a compound effect stronger than any individual source.
The first is data lock-in. Every interaction with an AI platform generates a conversation history — a record of problems posed and solutions accepted, of intellectual dead ends explored and abandoned, of collaborative patterns that proved productive and those that did not. This history represents accumulated intellectual capital. Segal describes spending months working with Claude on The Orange Pill — "developing ideas, testing arguments, refining prose, building a collaborative relationship that produced insights neither human nor machine could have generated alone." That corpus of interaction is stored on Anthropic's infrastructure. It is not portable. A competing platform cannot import the collaborative context — not merely the text of conversations, but the implicit model of the user's thinking patterns, intellectual preferences, and productive working rhythms that the platform has developed through sustained interaction.
This is analogous to the data lock-in that has characterized enterprise software for decades, but it is stickier. Enterprise data is relational: tables, records, fields that can in principle be extracted and reformatted. AI interaction data is conversational: its value resides not in the raw text but in the accumulated model of collaborative dynamics that the platform has built through sustained engagement. The model exists implicitly in the platform's refined behavior toward this specific user. It cannot be exported as a file or migrated as a database. It is embedded in the relationship itself.
The second source is workflow lock-in. Segal documents engineers who rebuilt their entire professional methodologies around AI-augmented capabilities in the space of a single week. A backend engineer started building user interfaces. A designer started implementing features end to end. These transformations occurred within a specific platform ecosystem, using specific tools, developing specific interaction patterns that depend on the specific capabilities and limitations of a particular AI system.
The engineer who has learned to describe problems to Claude in a particular way — who has developed intuitive understanding of what the system handles well and where it requires more explicit guidance, who has built an entire working methodology around the specific rhythm of human-AI collaboration that Claude facilitates — has invested significant cognitive capital in a platform-specific skill set. The switching cost is measured not in dollars but in the disruption of established cognitive patterns: the cost of unlearning one set of productive habits and building another from scratch. Shapiro's research identified cognitive switching costs as the most durable and the most resistant to competitive pressure. Financial switching costs can be overcome by a competitor willing to subsidize the transition. Technical switching costs can be overcome by investment in migration tools. But cognitive switching costs are borne entirely by the user, cannot be subsidized by a competitor, and increase over time as the existing habits become more deeply embedded.
The third source is complementary goods lock-in. The developer ecosystem building tools, plugins, integrations, and workflow extensions around specific AI platforms creates a layer of platform-specific complementary goods that increase the platform's value while raising the cost of departure. An engineer whose workflows depend on Claude-specific integrations faces not merely the cost of switching the AI platform but the cost of replacing every complementary tool in the ecosystem. This is the indirect network effect translated into a lock-in mechanism: the same complementary goods that make the platform more valuable also make leaving it more expensive.
The fourth source is the most subtle and potentially the most consequential: identity lock-in. Segal describes engineers who "recalculated everything they thought they knew about their own capability" within days of adopting Claude Code. The senior engineer who discovered that his remaining twenty percent was "everything" had not merely adopted a tool. He had undergone a professional identity transformation — a reconception of what his expertise meant and where his value resided. This transformation was platform-specific: the particular form of his professional reinvention was shaped by the particular capabilities and limitations of the platform that catalyzed it. Switching platforms would require not merely relearning a tool but renegotiating the professional identity that formed around the tool.
What makes AI lock-in qualitatively different from previous forms is the speed at which these four sources accumulate and the invisibility of the accumulation to the user experiencing it. Segal's Trivandrum training illustrates the dynamic with inadvertent precision. On Monday, the engineers began learning Claude Code. By Wednesday, they had rebuilt their working methods. By Friday, the transformation was "measurable, repeatable reality." In five days, four sources of lock-in — data, workflow, complementary goods, and identity — had accumulated to a level that would have taken months or years to develop in previous platform markets.
The engineers did not choose lock-in. They chose productivity. They chose capability expansion. They chose the exhilaration of building things they could not previously build. Each individual decision — learn this prompt pattern, develop this workflow, integrate this tool, reconceive this professional identity — was rational, beneficial, and voluntary. The lock-in was not any single decision. It was the compound consequence of all of them.
This is Shapiro's central insight about lock-in, applied to a market that makes the mechanism operate at unprecedented speed: users accumulate commitments through a series of individually rational decisions, each too small to trigger strategic deliberation, that collectively produce a commitment too large to reverse. The path from first use to deep lock-in is paved with micro-decisions, each one rational, each one harmless in isolation, each one contributing to a cumulative outcome that no individual decision produced and no individual decision can undo.
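The compounding of micro-decisions can be made concrete with a toy model. Every number below is an assumption chosen purely for illustration: each platform-specific habit a user adopts pays an immediate benefit and quietly adds to the cost of ever leaving, while a rival platform dangles a fixed one-time gain for switching.

```python
# Toy model of lock-in by micro-decisions. All numbers are illustrative:
# each adopted habit pays an immediate benefit b and adds s to the cost
# of ever leaving; a rival platform offers a one-time gain G for switching.
b, s, G = 5.0, 2.0, 20.0

switching_cost = 0.0
locked_in = False
for week in range(1, 13):
    if b > 0:                      # every single adoption is rational...
        switching_cost += s        # ...but each one raises the exit price
    locked_in = switching_cost > G
    if locked_in:
        print(f"week {week}: switching cost {switching_cost:.0f} "
              f"now exceeds the gain from leaving ({G:.0f})")
        break
```

No single week's decision is a mistake, and no single week's decision produced the captivity; the lock-in lives only in the running total.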
The strategic consequences of AI lock-in follow directly from Shapiro's analysis of lock-in in previous platform markets. Lock-in transfers bargaining power. The platform provider who has locked in an installed base can raise prices, because the locked-in user's cost of switching exceeds the cost of the price increase. The platform can reduce quality on dimensions the user values, because the user cannot credibly exit. The platform can redirect development toward objectives that serve the platform's interests rather than the user's, because the user's voice is muted by the absence of a viable exit option.
The policy implications are equally direct. Shapiro has argued consistently — in academic papers, congressional testimony, and policy briefs — that the appropriate response to lock-in in information markets is institutional: portability standards that allow data to move between platforms, interoperability requirements that allow complementary goods to function across ecosystems, transparency mandates that make the accumulation of switching costs visible to users who are accumulating them.
These interventions are familiar from previous platform markets. Data portability requirements in telecommunications. Interoperability mandates in financial services. The economic logic is consistent: lock-in is a market failure that concentrates bargaining power in the hands of the platform provider and reduces the competitive pressure that drives innovation and protects consumer welfare. The remedy is to reduce switching costs to the level at which the user's exit option is credible, restoring the competitive dynamics that lock-in erodes.
But the cognitive dimension of AI lock-in creates a challenge that previous interventions did not face. Data portability requirements can mandate that conversation histories be exportable. Interoperability standards can mandate that complementary tools work across platforms. Neither can address the cognitive switching cost — the cost of unlearning one set of productive habits and building another — because that cost is borne inside the user's mind, invisible to regulators, non-transferable by any technical mechanism, and resistant to any institutional intervention that does not change the fundamental architecture of human cognition.
The user who has spent six months developing a productive collaborative relationship with Claude — who has learned, through trial and error, the specific patterns of prompting and interaction that produce the best results — carries that investment in neural pathways, not in data files. No portability standard can transfer the investment. No interoperability mandate can make it platform-agnostic. The cognitive lock-in persists even when every other form of lock-in has been addressed, and it persists because it is a feature of human cognition rather than a feature of market structure.
Shapiro's framework identifies the problem. The framework does not, by itself, solve it. The solution requires institutional innovations that the existing policy toolkit does not fully contain — innovations that address the cognitive dimension of lock-in through mechanisms designed specifically for AI platform markets. What those mechanisms might look like — structured platform-switching exercises, cognitive portability training, multi-platform literacy programs — is a question the economics can frame but cannot, by itself, answer. The framing, however, is essential. Without it, the lock-in accumulates invisibly, the bargaining power transfers silently, and the market structure hardens around a competitive outcome that users never consciously chose.
---
In 1970, George Akerlof published "The Market for 'Lemons,'" a paper that earned him the Nobel Prize and permanently altered how economists think about markets in which one party knows more than the other. The insight was deceptively simple. In a used car market, the seller knows whether the car is a lemon. The buyer does not. The buyer, aware that lemons exist but unable to identify them, discounts the price she is willing to pay for any used car. The discount drives sellers of high-quality cars out of the market, because they cannot get a fair price. Only the lemons remain. The market collapses — not because of fraud but because of information asymmetry. The buyer cannot observe the quality she is paying for, and the inability to observe quality destroys the market's ability to reward it.
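Akerlof's unraveling can be sketched in a few lines. The parameters are illustrative, not drawn from his paper's data: qualities are uniform on the unit interval, a seller accepts any offer at or above her car's quality, and a buyer values any car at one and a half times its quality but can observe only the price, so she pays at most one and a half times the average quality of the cars actually for sale.

```python
import random

# Minimal sketch of Akerlof's unraveling under illustrative assumptions:
# qualities are uniform on [0, 1]; a seller accepts any offer at or above
# her car's quality q; a buyer values any car at 1.5 * q but observes only
# the price, so she will pay at most 1.5 times the AVERAGE quality of the
# cars actually on the market at that price.
random.seed(0)
cars = [random.random() for _ in range(100_000)]

price = 1.0  # start by offering the top of the quality range
for _ in range(20):
    on_market = [q for q in cars if q <= price]  # better cars stay home
    if not on_market:
        break
    willingness = 1.5 * sum(on_market) / len(on_market)
    if willingness >= price:
        break  # a stable price exists; no further unraveling
    price = willingness  # buyers cut their offers and more sellers exit

print(f"price after unraveling: {price:.4f}")  # collapses toward zero
```

At every step the average quality on offer is roughly half the going price, so buyers will pay only three quarters of it, the best remaining sellers withdraw, and the price ratchets downward until the market has effectively vanished.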
The aesthetics of the smooth, which Segal explores through the philosophical lens of Byung-Chul Han in The Orange Pill, creates a new form of information asymmetry that is structurally analogous to Akerlof's lemons problem. But the market in question is not used automobiles. It is professional expertise. And the lemon is not a defective product. It is a polished output that conceals the absence of the human judgment it appears to embody.
The mechanism operates as follows. AI-generated output is smooth. Not smooth in Han's philosophical sense of frictionless experience, though that too. Smooth in the specific sense that the surface quality of the output — its grammar, structure, apparent coherence, and professional polish — is indistinguishable from output produced through genuine expertise and careful judgment. A legal brief drafted with extensive AI assistance looks identical to one drafted through deep independent analysis. A consulting report generated with AI support is indistinguishable, on its surface, from one produced through months of domain-specific research. A software architecture proposed by an AI-augmented engineer appears no different from one designed through years of accumulated systems intuition.
The smoothness is the problem. The surface quality of AI-augmented output is high regardless of whether the human in the loop exercised genuine judgment in producing it. The attorney who reviewed every AI-generated citation for accuracy and relevance produces a brief that looks the same as the brief produced by the attorney who accepted the AI's output with minimal review. The consultant who exercised independent analytical judgment produces a report indistinguishable from the one produced by the consultant who deferred to the AI's framing. The engineer who applied twenty years of systems intuition to evaluate the AI's architectural proposal produces a design that looks identical to one accepted uncritically.
The information asymmetry is between the producer and the evaluator. The producer knows the degree to which the output reflects genuine human judgment — the hours of review, the independent verification, the domain-specific evaluation that separates a high-quality professional product from a merely plausible one. The evaluator — the client, the manager, the professor, the end user — cannot observe the judgment. The evaluator observes only the output. And the output, polished by AI to a uniform surface quality, conceals the presence or absence of the expertise it appears to embody.
Segal captures this dynamic in The Orange Pill when he describes catching Claude producing a passage that "sounded like insight but broke under examination" — the Deleuze reference that was "wrong in a way obvious to anyone who had actually read Deleuze." The passage worked rhetorically. It was well-structured, confident, and thematically appropriate. It was also wrong. And the smoothness of the prose concealed the error so effectively that Segal himself almost missed it. "Claude's most dangerous failure mode is exactly this," he writes: "confident wrongness dressed in good prose."
If the author of a book about AI — a person with decades of experience and a specific motivation to catch exactly this kind of error — almost failed to detect confident wrongness dressed in good prose, what chance does the typical evaluator have? The manager reviewing an AI-augmented report does not have the domain expertise to verify every claim. The client evaluating an AI-assisted legal brief is paying for the attorney's judgment precisely because the client cannot exercise that judgment independently. The professor grading an AI-influenced essay is evaluating a student's understanding, not just the essay's surface quality, but the smooth output makes the two indistinguishable.
The market consequences predicted by Akerlof's framework are precise and testable. If evaluators cannot distinguish between judgment-rich and judgment-poor output, they will discount the price they are willing to pay for all professional work. The discount drives high-judgment professionals — those whose work is genuinely worth more because they invest more in verification, evaluation, and independent analysis — toward lower compensation, because the market cannot observe the quality differential that justifies the premium. Meanwhile, low-judgment professionals, whose production costs are lower because they invest less in the hard work of genuine expertise, earn similar compensation for inferior work that is indistinguishable on its surface.
The adverse selection spiral follows. As the market fails to reward judgment, high-judgment professionals reduce their investment in judgment (because the investment is not compensated) or exit the market (because the premium that attracted them has been competed away). The average quality of professional output declines. The evaluator's discount deepens. More high-judgment professionals exit. The market converges on a low-judgment equilibrium in which everyone uses AI to produce polished output and no one invests in the deep expertise that the polish was supposed to represent.
This is the senior engineer's fear made economic. When Segal's conference interlocutor worried that "something beautiful was being lost, and that the people celebrating the gain were not equipped to see the loss, because the loss was not quantifiable," he was describing the information asymmetry problem in the vocabulary of craft. The loss is not quantifiable precisely because the market cannot observe it. The understanding that accumulated through thousands of hours of patient debugging — the geological layers of intuition that Segal describes — is invisible in the output. The output looks the same whether the layers are there or not. And the market, unable to see the layers, cannot pay for them.
The economic literature identifies three classical mechanisms for resolving information asymmetry: signaling, screening, and reputation.
Signaling occurs when the party with superior information takes a costly action to credibly communicate that information. In labor markets, educational credentials function as signals: the effort required to obtain a degree communicates the worker's ability, even when the degree's specific content is not directly relevant to the job. In the market for AI-augmented professional work, signaling would require the professional to take a costly action that credibly demonstrates the exercise of genuine judgment.
Process transparency is one possible signal: providing not just the final output but a documented record of the analytical process — the original AI generation, the revisions, the judgment calls, the points where the professional overrode the AI's suggestion. This is analogous to the "showing your work" requirement in mathematics, and it has the advantage of making the exercise of judgment directly observable. The disadvantage is cost: documenting the process consumes time that could otherwise be spent producing output, and the documentation itself can be gamed (a professional could fabricate a process narrative that implies more judgment than was actually exercised).
Screening occurs when the party lacking information designs a mechanism to induce revelation. Segal provides an elegant example from education: the teacher who stopped grading essays and started grading questions. The shift from evaluating output to evaluating the quality of the inquiry that precedes output is a screening mechanism. It reveals the degree of genuine intellectual engagement that the smooth essay surface conceals. A student who has genuinely wrestled with the material produces different questions than a student who has outsourced the thinking to AI, and the difference is legible in the questions in a way that it is not legible in the essays.
The screening mechanism generalizes beyond education. A law firm that evaluates attorneys not on their briefs but on their identification of issues the AI missed. A consulting firm that evaluates analysts not on their reports but on their critiques of AI-generated analyses. A software company that evaluates engineers not on their code but on their identification of architectural risks the AI did not flag. In each case, the evaluation shifts from the output — where AI has eliminated the quality differential — to the meta-cognitive work of evaluating the output — where human judgment remains the scarce input.
Reputation is the third mechanism: the accumulation of a track record that credibly communicates quality over time. In markets with repeat interactions, reputation serves as a proxy for unobservable quality. The professional who consistently produces work that proves durable — that withstands scrutiny, generates value over time, avoids the costly errors that confident wrongness dressed in good prose eventually produces — builds a reputation that signals genuine expertise. The professional who produces smooth but shallow work builds a different reputation, as the shallowness surfaces through downstream consequences: the legal brief whose unchecked citations are challenged in court, the consulting recommendation whose unverified assumptions produce a failed strategy, the software architecture whose unexamined AI-generated design fails under load.
Reputation works, but slowly. It requires the time for downstream consequences to materialize, and during the interval between production and consequence, the market cannot distinguish genuine expertise from a convincing surface. In a market moving as fast as the AI-augmented professional economy, the interval may be long enough to cause significant damage — to individual careers, to professional standards, and to the clients who depend on professional judgment they can no longer verify.
The institutional structures required to resolve this information asymmetry do not yet exist at scale. Credentialing systems designed for a pre-AI world certify that a professional possesses a body of knowledge; they do not certify that the professional exercises that knowledge independently rather than outsourcing it to a tool. Quality assurance frameworks designed for pre-AI output evaluate the product; they do not evaluate the degree of human judgment embedded in the product's creation. Performance evaluation systems designed for pre-AI workplaces measure output; they do not measure the cognitive investment that distinguishes output produced through genuine expertise from output produced through sophisticated delegation.
Building these structures is among the most urgent institutional tasks of the transition. Not because AI output is inferior — it is often excellent — but because the market for professional expertise cannot function efficiently when the quality of judgment is unobservable. The market that cannot observe judgment cannot reward it. The market that cannot reward judgment will not sustain the investment in deep expertise that produces it. And the society that loses the investment in deep expertise will discover, over time, that the polished surfaces concealed an erosion of the foundations they were built on.
Shapiro's framework identifies the market failure with precision. The resolution requires institutional design that the framework can inform but not, by itself, produce. Signaling mechanisms that make judgment visible. Screening mechanisms that reveal the depth beneath the surface. Reputation systems calibrated to the specific dynamics of AI-augmented work. The economics identifies the problem and constrains the solution space. The construction of the actual institutions falls to the practitioners, the regulators, and the professional communities whose quality is at stake.
The lemons problem of polished output is not a theoretical concern. It is a market failure operating in real time, in real professional markets, with real consequences for the distribution of economic value and the sustainability of genuine expertise. The smooth output that Segal's philosopher diagnoses as a cultural pathology is, in Shapiro's framework, an information asymmetry that distorts the market for the very quality that the AI transition has made most scarce: the exercise of human judgment in a world of abundant machine production.
Shapiro and Varian devoted an entire section of Information Rules to a strategy so elementary it seems beneath the dignity of economic theory: charge different prices to different customers. The insight that earned the strategy its analytical weight was not that price discrimination exists — every airline and movie theater practices it — but that information goods make price discrimination both uniquely feasible and uniquely consequential. When the marginal cost of serving an additional user is near zero, any positive price generates a positive contribution. The economically efficient outcome is to serve every customer who values the product at any price above zero, extracting from each the maximum she is willing to pay. The mechanism for achieving this in information markets is versioning: offering different versions of the same underlying good at different price points, allowing customers to self-select into the version that matches their willingness to pay.
The IBM LaserPrinter E remains the canonical illustration. It was physically identical to the full-price LaserPrinter but contained a chip that artificially slowed its printing speed. IBM invested engineering effort and manufacturing cost to produce a deliberately degraded product, because a lower price without quality degradation would have cannibalized sales of the premium version. The degradation was not a cost-saving measure. It was a strategic choice to segment the market — to prevent high-willingness-to-pay customers from purchasing the low-price version and thereby capturing consumer surplus that the firm could otherwise extract.
Every major AI platform practices versioning with a sophistication that makes the LaserPrinter E look primitive. Anthropic offers Claude in a free tier with constrained capability, a professional tier at twenty dollars per month, and a Max tier at one hundred dollars per month with the highest-capability model and the fewest usage restrictions. OpenAI follows an analogous structure. Google, Meta, and every other significant AI provider have adopted variants of the same approach. The tiers differ in model capability, context window, response speed, and usage volume — but the underlying system, the model itself, is a single good being strategically versioned to capture different segments of the market.
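The self-selection logic behind such menus can be sketched directly. The willingness-to-pay figures below are hypothetical, not estimates of any real platform's customers; the point is only that a menu of versions can out-earn any single price by letting users sort themselves.

```python
# Self-selection under a versioned menu. The willingness-to-pay numbers
# are hypothetical, not real platform data.
WTP = {
    "student":  {"basic": 15, "max": 25},
    "hobbyist": {"basic": 18, "max": 40},
    "pro":      {"basic": 20, "max": 200},
}

def revenue(menu):
    """menu maps version -> price. Each user picks the option with the
    highest surplus (value minus price), buying nothing if all are negative."""
    total = 0
    for prefs in WTP.values():
        best = max(menu, key=lambda v: prefs[v] - menu[v])
        if prefs[best] - menu[best] >= 0:
            total += menu[best]
    return total

print(revenue({"max": 200}))                # premium only: just the pro buys
print(revenue({"basic": 15, "max": 194}))   # versioned: everyone is served
```

Note that the premium price falls from 200 to 194 in the versioned menu: the pro must be left slightly more surplus on the premium version than on the cheap one, or she will buy down. That small concession is the information rent versioning pays, and the platform pays it gladly, because total revenue rises from 200 to 224 by monetizing the users a single price would have excluded.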
The versioning structure would be unremarkable — the standard playbook for information goods, applied predictably — except for a feature of AI tools that distinguishes them from every previous versioned information good. Segal's central metaphor in The Orange Pill is that AI is an amplifier, and the most powerful one ever built. The quality of the output depends on the quality of the input: "Feed it carelessness, you get carelessness at scale. Feed it genuine care, real thinking, real questions, real craft, and it carries that further than any tool in human history."
If AI is an amplifier, then the versioning of AI is the versioning of amplification itself. The free tier amplifies less. The premium tier amplifies more. The relationship between a person's cognitive investment and the value of the output she produces is mediated by which version of the amplifier she can afford. Two builders of identical capability, exercising identical judgment, asking equally good questions, will produce different-quality output if one is using the free tier and the other the Max tier. The difference is not in the human input. It is in the amplification.
This creates an economic wedge between capability and outcome that has no precise precedent. In a world without AI, a professional's economic return is a function of her skill, her effort, and market demand for her work. In a world with versioned AI, her return is a function of those factors plus the version of the amplifier she can afford. The additional variable — amplifier quality — introduces a source of economic inequality that is independent of human capability and determined by purchasing power.
The consequence is that versioning creates a stratified amplification market in which the return on human investment varies by tier. A developer exercising genuine craft with the free tier will produce output inferior to a developer exercising comparable craft with the premium tier — not because the first developer is less skilled, but because her amplifier is weaker. The market, evaluating the output rather than the input, will reward the premium-tier developer more, reinforcing the economic advantage that purchasing power conferred in the first place.
Shapiro and Varian would recognize this as a standard feature of versioning: the premium version is always better, and the customers who can afford it capture more value. But the AI application introduces a moral dimension that printers and software licenses did not. When the versioned good is a productivity tool, the stratification affects income. When the versioned good is a cognitive amplifier — a tool that determines the reach and impact of human thought itself — the stratification affects something closer to the capacity for intellectual participation. The twelve-year-old in The Orange Pill who asks "What am I for?" deserves an answer that does not depend on which version of the amplifier her parents can afford. The economics of versioning suggest that independence is not guaranteed by the technology. It must be constructed by institutional design.
The historical trajectory of information goods pricing offers some reassurance. Information goods tend to commoditize from the top down. The most advanced features of today's premium version become standard in tomorrow's free version. Google Search in 2025 is vastly more powerful than Google Search in 2005. The free tier of AI in 2030 will likely exceed the capability of the Max tier in 2026. The democratization that Segal envisions is economically plausible over a sufficient time horizon.
But the time horizon is precisely the problem. The developer who cannot afford the premium tier today does not benefit from the price decline that arrives in three years. The competitive advantages accumulated during the transition period — skills developed with superior tools, portfolios built at higher quality, reputations established through more impressive output — compound over time. The developer who built her career on the premium tier during 2026-2029 arrives at 2030 with a compounding advantage that the developer who waited for the free tier to catch up cannot retroactively match.
This is the temporal inequality of versioned amplification. The technology democratizes. The versioning stratifies. And the stratification, concentrated in the transition period when competitive positions are being established, produces durable advantages that persist long after the pricing differential has closed. The developer in Lagos whom Segal invokes as the moral argument for AI's democratizing potential faces not merely a price barrier but a temporal barrier: by the time the price falls to a level she can afford, the competitive landscape has been shaped by builders who had access to the premium version during the window that mattered most.
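The arithmetic of the temporal wedge is worth making explicit. The growth rates below are pure assumptions, not forecasts: suppose premium-tier access compounds a builder's effective capability at thirty percent a year and free-tier access at ten percent, and compare a builder with premium access from 2026 against an equally capable builder who gains it only in 2030.

```python
# Hypothetical illustration of the temporal wedge. Growth rates are
# assumptions chosen for illustration, not estimates of real productivity.
def capability(years_premium, years_free, r_premium=0.30, r_free=0.10):
    """Effective capability after compounding annual growth, starting at 1.0."""
    return 1.0 * (1 + r_free) ** years_free * (1 + r_premium) ** years_premium

early = capability(years_premium=8, years_free=0)  # premium access from 2026
late = capability(years_premium=4, years_free=4)   # premium access from 2030
gap = early / late
print(f"early adopter: {early:.2f}x, late adopter: {late:.2f}x, gap: {gap:.2f}x")
```

Once both builders are on the same tier, each subsequent year multiplies both capabilities by the same factor, so the roughly twofold gap opened during the four-year transition window never closes on its own. The pricing differential is temporary; the advantage it minted is permanent.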
The economic mechanisms for mitigating temporal inequality are well understood in the information economics literature. Geographic price discrimination — charging lower prices in markets with lower purchasing power — could make premium access affordable in Lagos without sacrificing revenue in San Francisco. The marginal cost of serving an additional user is near zero, which means that any price the Lagos developer can afford generates positive revenue for the platform. The economic logic favors price discrimination: it expands the market, generates revenue, and accelerates the data network effect by adding users whose interactions improve the model for everyone.
The practical barriers to geographic price discrimination are real but not insuperable. Arbitrage — users in high-price markets circumventing geographic restrictions to access low-price versions — can be managed through payment verification and usage monitoring. Cannibalization risk — the possibility that a low-price tier will draw revenue away from the premium segment — can be mitigated through version differentiation that goes beyond mere price to include features, support, and integration capabilities that the professional segment values and the price-sensitive segment does not.
Subsidized access programs offer an alternative mechanism: governments, philanthropies, or development institutions pay the difference between market price and affordable price, ensuring that the premium version reaches populations whose purchasing power would otherwise exclude them. Educational licensing, in which platforms provide free or discounted access to students and institutions in underserved markets, builds the next generation of builders before the market price declines. Open-source models — freely available alternatives that provide a baseline of capability independent of any platform's pricing strategy — ensure that the floor of capability rises regardless of what happens at the premium tier.
No single mechanism is sufficient. The temporal inequality of versioned amplification requires a portfolio of approaches, tailored to specific markets and populations, designed to ensure that the competitive advantages of the transition period do not harden into permanent stratification. The economic analysis identifies the problem and constrains the solution space. The institutional will to implement the solutions is a political question, not an economic one.
Segal argues in The Orange Pill that the democratization of capability is "the most morally significant feature of this technological moment." Shapiro's framework does not dispute the moral significance. It reveals the economic conditions under which the moral claim holds — and the conditions under which it collapses into aspiration. The technology makes democratization possible. The versioning strategy determines whether democratization is realized. And the realization depends on decisions being made now, in pricing offices and strategy meetings and legislative chambers, by people who may or may not understand that the version of the amplifier available to the developer in Lagos is not a product feature. It is an economic policy choice with consequences that will compound for a generation.
The amplifier does not care who holds it. But the market that distributes amplifiers cares very much, and the distribution it produces — shaped by versioning, pricing, and the institutional structures that govern access — will determine whether the intelligence age fulfills its democratic promise or reproduces, at a higher technological level, the inequalities that every previous information revolution has taken a generation of institutional struggle to partially overcome.
---
By February 2026, a trillion dollars of market capitalization had vanished from software companies. The market had named the phenomenon with characteristic bluntness: the SaaSpocalypse. Workday had fallen thirty-five percent. Adobe had lost a quarter of its value. Salesforce had dropped twenty-five percent. When Anthropic published a demonstration of Claude's ability to modernize COBOL, IBM suffered its largest single-day stock decline in more than a quarter century. The chart that captured the market's anxiety showed two curves crossing — the declining SaaS valuation index and the rising AI market — at a point the analysts called the Death Cross.
The panic and the dismissal of the panic were both wrong. The panic said AI would eat software — that every SaaS product could be rebuilt in a weekend by a competent builder with Claude Code, and that the subscription model powering a three-trillion-dollar industry was collapsing. The dismissal said the market was overreacting, that valuations would recover, that nothing fundamental had changed. The economic reality was more precise than either narrative.
What happened was not death. It was value migration — the standard consequence of commoditization in information markets, accelerated to a speed that made the standard process feel like catastrophe.
Shapiro and Varian's framework provides the formal structure for what Segal captured intuitively: the value of a SaaS company can be decomposed into two components, and the market had been conflating them for twenty years. The first component is code value — the market's assessment of the cost of reproducing the software the company provides. The second is ecosystem value — the market's assessment of the data, integrations, institutional knowledge, workflow dependencies, compliance infrastructure, and switching costs the company has accumulated over years of enterprise deployment.
Before AI, the distinction was academic. The code was expensive to produce, so code value was high. The ecosystem was built on top of the code, so the two were inseparable in practice. A company like Salesforce was valued as a unit — code and ecosystem together — because the code could not be separated from the ecosystem without destroying both.
AI separated them. When Claude Code can reproduce the software layer in hours — the CRM logic, the pipeline management, the reporting dashboards, the workflow automation — the code value collapses toward the marginal cost of AI-assisted reproduction, which approaches zero. The ecosystem value — the twenty years of customer data, the institutional integration, the compliance certifications, the workflow assumptions embedded in the muscle memory of every sales organization trained on the platform — remains intact. It may even increase, because the ecosystem becomes more valuable relative to the now-commoditized software it sits on.
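The decomposition can be sketched as a toy calculation. Every dollar figure below is invented for illustration; only the structure, total value as the sum of a collapsing code component and a persistent ecosystem component, comes from the argument above.

```python
# Toy sketch of the two-component valuation. All dollar figures are
# hypothetical; only the structure of the decomposition is from the text.

def enterprise_value(code_value: float, ecosystem_value: float) -> float:
    """Total valuation as the sum of the two components (in $B)."""
    return code_value + ecosystem_value

# Pre-AI: the market prices code and ecosystem as an inseparable unit.
pre_ai = enterprise_value(code_value=120.0, ecosystem_value=80.0)

# Post-commoditization: code value collapses toward the marginal cost
# of AI-assisted reproduction; the ecosystem component persists.
post_ai = enterprise_value(code_value=2.0, ecosystem_value=80.0)

decline = 1 - post_ai / pre_ai
print(f"repricing: ${pre_ai:.0f}B -> ${post_ai:.0f}B ({decline:.0%} decline)")
```

Note what the sketch makes visible: the headline valuation falls by more than half even though the ecosystem component is entirely intact, which is exactly the conflation the market has to unlearn.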
The trillion-dollar loss reflected the market's recognition that the code component of SaaS valuations had been destroyed. But the market, in its characteristic overcorrection, had not yet learned to price the ecosystem component independently. The repricing overshoots — destroying market capitalization that represents genuine ecosystem value alongside the capitalization that represented code value — because the market lacks the valuation framework to distinguish between the two.
The economic consequence is that the Death Cross is a repricing event, not a death event. The companies whose value was primarily in their code — thin applications solving singular problems through clever implementation — will not recover, because their code value was their only value, and that value has been destroyed by commoditization. The companies whose value was primarily in their ecosystems — deep enterprise platforms with extensive data layers, regulatory compliance infrastructure, and institutional integration that took decades to build — will recover once the market develops the framework to price ecosystem value independently of code value.
The historical parallel is IBM's transition from mainframe hardware to enterprise services. The mainframe business generated extraordinary margins, and when the personal computer threatened the mainframe's market position, IBM's stock price collapsed. The market was pricing IBM as a hardware company. The repricing destroyed hundreds of billions in market capitalization. But the company that emerged was organized around a different theory of value — services, consulting, institutional integration — that generated durable returns for investors who endured the transition. The hardware was the commodity. The institutional relationships were the moat.
The SaaS companies facing the Death Cross are in the analogous position. The code is the commodity. The ecosystem is the moat. The market has not yet completed the repricing — it has recognized the commodity but not yet valued the moat — and the completion will determine which companies survive and which do not.
Shapiro's antitrust work adds a dimension to the Death Cross analysis that neither the panicked nor the dismissive narrative captures: the competitive structure of the post-repricing market. When value migrates from the code layer to the ecosystem layer, the competitive dynamics of the industry change fundamentally. Code-layer competition was relatively accessible: a startup with talented engineers could build software that competed with incumbents on features and price. Ecosystem-layer competition is vastly less accessible: a startup cannot replicate twenty years of enterprise data, institutional integration, and regulatory compliance in any timeframe, regardless of how capable its AI tools are.
The migration of value to the ecosystem layer therefore implies a concentration of competitive advantage in the firms that built the deepest ecosystems during the code era. The SaaS incumbents that survive the Death Cross will emerge with stronger competitive positions, not weaker ones, because the barrier to entry has shifted from the ability to write code (which AI has made universally accessible) to the possession of institutional ecosystem assets (which cannot be AI-generated and cannot be replicated on any short timeline).
This concentration has implications that the triumphalist narrative of the Death Cross — the celebration of disruption, the excitement about startups that can now build in a weekend — systematically obscures. The weekend startup can build the software. It cannot build the ecosystem. The software without the ecosystem is a feature, not a business. The ecosystem without the software is a platform waiting for its next application layer. The competitive advantage belongs to whoever holds the ecosystem, and the ecosystem holders are, overwhelmingly, the incumbents.
For startups, the strategic implication is stark: building a software product is no longer a defensible business strategy. Code that ships today can be replicated tomorrow. The only defensible startup strategy is to build an ecosystem — a community of users, a data advantage, a network of integrations, a depth of institutional trust — before the code that attracted the initial users becomes commodity. The race is no longer to build the best software. It is to build the deepest ecosystem before the software becomes irrelevant.
For enterprise buyers, the implication is equally direct: procurement criteria must change. The old criteria evaluated software on features, performance, and cost. The new criteria must evaluate the ecosystem — the depth of data integration, the breadth of complementary goods, the strength of switching costs, the durability of institutional relationships, and the platform's readiness for AI agent integration. A software product with superior features but no ecosystem is a depreciating asset in a world where features can be replicated overnight. A product with adequate features embedded in a deep ecosystem is a durable investment whose value is protected by the very forces — switching costs, data lock-in, institutional integration — that Shapiro has spent a career analyzing.
For policymakers, the Death Cross raises the concentration question in its most acute form. When value was distributed across thousands of code-producing SaaS companies, the market structure was relatively competitive. When value concentrates at the ecosystem layer, it consolidates in a smaller number of platform companies whose accumulated advantages create conditions approaching natural monopoly. The firms that control the data, the integrations, the institutional relationships — the firms that built the deepest dams, in Segal's terminology — may exercise market power that exceeds anything the SaaS industry exhibited at the height of its valuations.
The Death Cross is not an ending. It is a phase transition in the competitive structure of the software industry — a migration of value from a layer where competition was relatively open to a layer where competition is structurally constrained. The economic forces driving the migration are the same forces Shapiro identified in every previous information market transition: commoditization of the reproducible layer, concentration of value in the scarce complementary layer, and accumulation of market power by whoever controls the scarce resource.
The sticks wash away in the current. The question of who built the deepest pool — and whether the pool will irrigate broadly or benefit only its builder — is being answered by the competitive dynamics that the Death Cross has set in motion.
---
Herbert Simon, writing in 1971, formulated what may be the most prescient economic observation of the twentieth century: "In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention."
Fifty-five years later, the poverty has deepened to destitution. AI has not merely increased the supply of information. It has made the production of information nearly costless while leaving the human capacity to attend to it biologically unchanged. The gap between information supply and attentional capacity, already wide in Simon's era of mimeographed reports and broadcast television, has become an abyss.
The economic implications of this abyss extend far beyond the technology sector. Attention is not merely a psychological resource. It is an economic input — the scarce factor of production in an information economy, the bottleneck that determines the value of every information good produced and consumed. When the supply of information increases and the supply of attention does not, the price of attention rises. And the entity that captures and directs attention captures economic value proportional to the scarcity it controls.
This is the economic logic that produced the most valuable companies of the pre-AI era. Google, Facebook, TikTok, Netflix — each built multi-hundred-billion-dollar businesses on the fundamental insight that attention is the scarce resource and that the entity controlling its allocation controls economic value. The attention economy, as an economic structure, was well-established before AI. What AI has done is intensify the scarcity on both the supply and demand sides simultaneously.
On the supply side, AI can generate information at a volume and speed that dwarfs human production capacity. Reports, analyses, code, designs, proposals, creative content — all can be produced in minutes. The information flood that Simon anticipated has become a deluge of historically unprecedented volume. On the demand side, the information requires human attention to evaluate, integrate, and act upon. The AI-generated report still needs a human to decide whether its recommendations are sound. The AI-drafted brief still needs a lawyer to verify its citations. The AI-produced code still needs an engineer to assess whether the architecture will hold under load. The human capacity to attend — to evaluate, to exercise judgment, to make decisions — has not increased. It is biologically fixed at roughly the same level it occupied when Simon wrote his observation in 1971.
The Berkeley study that Segal cites in The Orange Pill — the eight-month empirical investigation by Ye and Ranganathan at UC Berkeley's Haas School of Business — provides the microeconomic evidence for what Simon's macroeconomic observation predicts. The researchers found that AI did not reduce work. It intensified it. Workers who adopted AI tools "worked faster, took on more tasks, and even expanded into areas that had previously been someone else's domain." The boundaries between roles blurred. Work seeped into previously protected cognitive spaces — lunch breaks, elevator rides, the micro-pauses that had informally served as moments of cognitive recovery. The freed time did not stay freed. It filled instantly with additional tasks that happened to be available.
Shapiro's framework translates this behavioral finding into economic structure. The opportunity cost of attention has changed. When AI makes every minute potentially productive at twenty times the previous rate — Segal's productivity multiplier from the Trivandrum training — the economic cost of not producing in any given minute has increased proportionally. Thirty minutes of unstructured thought, the kind of idle cognition that neuroscience associates with creative insight and long-term cognitive development, now carries an opportunity cost twenty times higher than it did before AI. The worker who chooses reflection over production is forgoing twenty times more output than she previously would have.
The economic pressure to be productive in every available minute has intensified not because anyone mandated it but because the opportunity cost structure changed. The incentive to produce is a price signal, and the signal is accurate: the cost of stopping really is higher than it used to be. The builder who works until three in the morning is not behaving irrationally. She is responding to a genuine change in the relative price of her time.
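The shift in the price of her time is simple arithmetic. A minimal sketch, using the twenty-fold multiplier from the text and an arbitrary unit of hourly output:

```python
# Opportunity cost of a pause, before and after AI augmentation.
# The 20x multiplier is from the text; the output unit is arbitrary.

BASE_OUTPUT_PER_HOUR = 1.0  # pre-AI output per hour, arbitrary units
MULTIPLIER = 20             # AI-augmented productivity multiplier

def forgone_output(hours_paused: float, multiplier: float) -> float:
    """Output given up by not producing during the pause."""
    return hours_paused * BASE_OUTPUT_PER_HOUR * multiplier

pause = 0.5  # thirty minutes of unstructured thought
before = forgone_output(pause, 1)
after = forgone_output(pause, MULTIPLIER)
print(f"cost of the pause: {before} -> {after} units of output")
```

The same half hour of reflection that once cost half a unit of output now costs ten. Nothing about the worker changed; only the relative price did.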
But the price signal is incomplete. It captures the cost of forgone production. It does not capture the cost of forgone rest, reflection, and the unstructured cognition that produces long-term value. These costs are real — the Berkeley researchers documented increased burnout, decreased empathy, the specific grey exhaustion of a nervous system running too hot for too long — but they are not priced by the market. The market observes and rewards productive output. It does not observe and cannot price the cognitive regeneration that sustains the capacity to produce over time.

This is a market failure in the technical economic sense: a systematic divergence between private incentives and social welfare caused by the inability of the market to price all relevant costs. The worker who depletes her cognitive capacity through continuous AI-augmented production imposes a cost on her future self (reduced long-term capability), on her employer (reduced quality of judgment over time), and on the broader economy (reduced innovation capacity). None of these costs appear in the quarterly productivity metrics. All of them are real.
Elinor Ostrom's Nobel Prize-winning work on common-pool resources provides the economic framework for addressing this market failure. Ostrom demonstrated that commons can be managed sustainably when the communities that depend on them establish rules for their use — clearly defined boundaries, proportional allocation of costs and benefits, collective decision-making about usage norms, monitoring mechanisms, and graduated sanctions for overuse. The framework was developed for fisheries and forests, but its logic applies wherever a shared resource is subject to overexploitation because individual users bear only a fraction of the cost their usage imposes.
Attention, in an AI-saturated work environment, is a common-pool resource. Each worker's attention is individually owned but collectively consequential — the quality of one person's judgment affects the team, the organization, and the downstream users of whatever the organization produces. The AI tools that make every minute potentially productive create an incentive structure analogous to the fishing fleet that makes every trip potentially profitable: each individual fishing expedition is rational, but the collective effect is overfishing that depletes the stock and destroys the long-term productivity of the fishery.
The "AI Practice" framework that the Berkeley researchers proposed — structured pauses, sequenced rather than parallel work, protected time for human-only cognitive activity — is, in Ostrom's terms, a governance structure for the attentional commons. It establishes boundaries (protected time), proportional allocation (structured sequencing), and monitoring (organizational norms that make cognitive rest visible and valued rather than hidden and stigmatized).
The developmental dimension amplifies the urgency. Segal's twelve-year-old who asks "What am I for?" is not merely posing a philosophical question. She is an economic agent whose attention is the most valuable form of attention in the economy — not because of its current productive capacity but because of its developmental trajectory. The attention a child devotes to wrestling with a problem, developing the questioning capacity that Segal identifies as uniquely human, is an investment whose returns compound over a lifetime. The developmental process — the formation of neural pathways through sustained cognitive effort — cannot be outsourced to AI without forgoing the investment itself.
When a student uses AI to produce an essay, the essay is produced but the cognitive development is not. The opportunity cost is invisible in the short term — the student's grade may be identical or better — but it compounds over years. The student who outsources cognitive effort at twelve arrives at eighteen without the cognitive infrastructure the effort would have built. This is what economists call a merit good problem: a good whose consumption the market undervalues because the consumer lacks the information to appreciate its full long-term value. The institutional response to merit goods is precisely the kind of structured intervention — age-appropriate restrictions, developmental curricula, assessment methods that evaluate process rather than output — that the attentional ecology framework prescribes.
The economics of attention in the intelligence age are not merely about the allocation of a scarce resource among competing uses. They are about the formation of the resource itself — the development of the cognitive capacity that makes human judgment possible and human questions worth asking. The market will not protect this formation, because the market does not price developmental processes. The protection falls to institutional design: educational structures that preserve the friction necessary for cognitive development, organizational norms that value reflection alongside production, and regulatory frameworks that recognize attention as a resource too consequential to be governed by the market alone.
Simon saw it in 1971. The wealth of information creates a poverty of attention. AI has made the information wealthier and the attention poorer. The economic question is whether the institutions can be built to govern the scarcity before the scarcity governs us.
---
Carl Shapiro spent the better part of four decades arguing a single proposition with increasing urgency: that antitrust enforcement in information markets must arrive before the market tips, because after tipping, the dominant platform's position is self-reinforcing and the costs of intervention multiply with each passing quarter. In congressional testimony, academic papers, and policy briefs spanning from the Microsoft era to the present, the argument has been consistent: "Greater vigilance is needed to prevent dominant firms, including the tech titans, from engaging in exclusionary conduct." The question was always timing. Intervene too early and you risk constraining a market whose competitive dynamics have not yet crystallized. Intervene too late and you face a market whose competitive dynamics have hardened into a structure that serves the incumbent's interests regardless of regulatory intent.
The AI platform market has compressed the window for effective intervention to a degree that renders the traditional antitrust timeline — investigation, analysis, litigation, remedy — dangerously inadequate. The three-way network effect described in Chapter 2 is compounding at a rate that previous information markets did not exhibit. Data network effects improve the model with each interaction. Indirect network effects attract complementary goods developers with each addition to the installed base. Direct network effects create professional adoption pressure with each colleague who joins the platform. The three loops reinforce each other, and each circuit through the compound loop makes the next circuit faster.
Shapiro warned in testimony before the House Small Business Committee that "a snapshot of market shares may suggest effective competition between two or more firms, yet if one firm has a sizeable market share that is rapidly growing, that firm may come to dominate the market in a manner that will be difficult to reverse." The AI platform market in 2026 appears competitive — Anthropic, OpenAI, Google, and Meta are all investing billions, shipping products, and capturing users. The snapshot looks healthy. But the trajectory is what matters, and the trajectory points toward tipping. The compound network effect is doing what compound network effects always do: accelerating the leading platforms' advantages while progressively disadvantaging smaller competitors and potential entrants.
The specific antitrust conditions that Shapiro's framework identifies as triggers for regulatory concern are present in the AI platform market with a completeness that is almost diagnostic.
Increasing returns to scale. The fixed cost of developing a frontier large language model is measured in billions. The marginal cost of serving an additional user is measured in cents. The ratio produces increasing returns: the larger the user base, the lower the average cost per user, and the greater the cost advantage over any potential entrant who must amortize the same fixed cost across a smaller base. The barrier to entry grows with the incumbent's scale, not despite it.
Significant switching costs. Chapter 3 documented the four-dimensional lock-in — data, workflow, cognitive, and complementary goods — that accumulates with each interaction. The switching costs are rising faster than in any previous platform market because the investments are cognitive as well as technical, and cognitive investments are the stickiest form of lock-in. They cannot be subsidized by competitors, mandated by regulators, or transferred by technical mechanisms. They reside in the user's neural pathways, and they compound with every productive hour spent on the platform.
Network effects favoring incumbents. The three-way network effect creates a quality advantage that compounds over time. The platform with the most users generates the most training signal, which produces the best model, which attracts the most users. The loop is self-reinforcing. A new entrant must not only match the incumbent's current model quality but overcome the data advantage that the incumbent accumulates with every interaction the entrant does not have.
Potential for leveraging. The dominant AI platform company occupies a position from which it can extend its market power into adjacent markets. A company that controls the foundational model can advantage its own applications built on that model. A company that controls the integration framework can favor its own complementary goods. A company that controls the data layer can use insights derived from user interactions to enter markets that users previously served independently. The leveraging potential is especially acute because the AI platform sits beneath virtually every digital activity — the platform layer is not adjacent to other markets but foundational to them.
The standard antitrust toolkit — merger review, monopolization enforcement, interoperability mandates — is theoretically adequate but practically strained by the speed of AI market dynamics. Merger review operates on timelines of months to years. The AI market is establishing competitive positions in weeks. A merger that appears competitively neutral at the time of review may become anticompetitive by the time the review concludes, because the market has moved in the interim. Monopolization cases require demonstration of market power and exclusionary conduct, both of which are difficult to establish in a market that is still nominally competitive — multiple firms, falling prices, rising quality — even as the underlying dynamics drive toward concentration.
Shapiro himself has navigated the tension between the standard toolkit and the speed of information markets throughout his career. In his 2019 Journal of Economic Perspectives paper "Protecting Competition in the American Economy," he argued for "more vigorous antitrust enforcement" while cautioning against both the Chicago School's permissiveness and the Neo-Brandeisian school's structural interventionism. His position — "a moderate tightening of merger enforcement based on strong empirical and theoretical foundations" — reflects a calibrated approach to markets whose dynamics are clear but whose competitive outcomes are not yet determined.
The calibration faces a specific challenge in AI markets: the regulatory framework must distinguish between concentration that arises from superior quality and concentration that arises from self-reinforcing market structure. A platform that dominates because its model is genuinely better — because its researchers made better architectural choices, its training data was more carefully curated, its alignment process produced more useful and reliable output — is dominating through competition on the merits. A platform that dominates because its installed base generates network effects that no competitor can match, regardless of the competitor's quality, is dominating through market structure. The first form of dominance is the market working as intended. The second is the market failing in the specific way that Katz and Shapiro's theory predicts.
In practice, the two forms are nearly impossible to disentangle, because the data network effect converts quality advantage into structural advantage. A platform that is initially better because of superior engineering attracts more users, which generates more data, which makes the model better, which attracts more users. The initial quality advantage — earned through competition — becomes a structural advantage — maintained through network effects — through a process that is continuous and gradual. At no single point does the advantage transition from earned to structural. The transition is the process itself.
The interoperability and portability interventions discussed in Chapter 3 address part of the problem. Data portability requirements would ensure that users can export their interaction histories. Interoperability mandates would ensure that complementary tools work across platforms. Both would reduce switching costs and preserve competitive pressure. But neither addresses the data network effect — the incumbent's quality advantage derived from the accumulated learning of its entire user base — because that advantage is embedded in the model itself, not in the user's data or the complementary goods ecosystem. A user who exports her conversation history to a competing platform does not bring the model improvement that her interactions generated with her. The improvement stays with the incumbent's model, widening the quality gap even as the switching costs are reduced.
Addressing the data network effect requires a more structural intervention: either mandating that training data or model improvements derived from user interactions be shared with competitors (a form of essential facilities doctrine applied to AI training signal), or ensuring that the foundational model layer is competitive through support for open-source alternatives that prevent any single proprietary model from becoming the irreplaceable infrastructure of the digital economy.
Shapiro's career-long commitment to evidence-based enforcement rather than structural presumption suggests a measured approach: monitor the market dynamics, identify the point at which quality advantage has converted to structural dominance, and intervene with the least restrictive remedy that restores competitive pressure. The challenge is that the monitoring itself must operate at the speed of the market. Traditional antitrust monitoring — market studies, economic analyses, public comment periods — operates on an annual cycle. AI market dynamics operate on a quarterly cycle at most, weekly at the frontier. The gap between monitoring speed and market speed is the gap in which competitive outcomes are determined without regulatory oversight.
In his 2021 paper "Antitrust: What Went Wrong and How to Fix It," Shapiro argued for faster enforcement mechanisms and greater reliance on preliminary remedies that preserve competitive options while longer-term analysis proceeds. The argument applies with particular force to AI markets, where the window for effective intervention is narrow and closing. A preliminary interoperability mandate — requiring AI platforms to support standardized data export and complementary goods compatibility while the longer-term competitive analysis proceeds — would preserve the option of competition without requiring the premature determination of market structure that structural interventionism demands.
The political economy of AI regulation adds a final dimension. Shapiro noted in his commentary on regulating Big Tech that "Congress seems to be drafting legislation based on preconceived views about the Big Tech companies rather than following the evidence in a balanced manner." The observation applies with equal force to AI regulation. The public discourse oscillates between existential risk narratives (AI will destroy humanity) and economic triumphalism (AI will democratize everything), neither of which provides a sound foundation for competition policy. What is needed is the kind of evidence-based, economically grounded analysis that Shapiro has advocated throughout his career — analysis that identifies the specific market dynamics producing concentration, evaluates the specific welfare consequences of that concentration, and designs the specific interventions most likely to preserve competition without sacrificing the genuine benefits of scale.
Shapiro's own public silence on AI — his withdrawal from the NBER chapter he began with Varian, his absence from the public discourse on AI competition policy — represents a gap in the intellectual leadership that this moment demands. The economist who wrote the foundational theory of network effects and market tipping, who spent decades applying that theory to the competitive dynamics of information markets, who served as the Department of Justice's chief economist for antitrust during a period of intense technology competition, has not publicly applied his framework to the market that most urgently needs it. His consulting relationships with the very technology companies now building AI platforms may explain the reticence. The explanation does not reduce the cost. The debate over AI competition policy is proceeding without the analytical voice best equipped to inform it, and the market is tipping while the analysis goes unperformed.
The window is open. The economics is clear. The intervention must match the speed of the dynamics it addresses, or the dynamics will produce an outcome that intervention can no longer reverse. Shapiro's framework provides the map. The question is whether the mapmaker's silence will be broken before the territory it describes has been irreversibly claimed.
The twenty-fold productivity multiplier that Segal documents in The Orange Pill — twenty engineers in Trivandrum, each operating with the leverage of a full team, at one hundred dollars per month per person — is, stripped of its narrative excitement, a measurement of surplus expansion. The same human input produces twenty times more output. The gap between the value created and the cost of creating it has widened by a factor of twenty. The economic pie has grown.
The question that every previous information revolution has answered — and that the AI revolution has not yet answered — is how the larger pie gets divided.
Surplus distribution in competitive markets follows a predictable logic. When a new technology increases productivity, the resulting surplus is divided among four claimants: the workers who operate the technology, the employers who deploy it, the consumers who purchase the output, and the technology providers who supply the tools. The division is determined not by justice or intention but by bargaining power, which is itself determined by market structure — by who has alternatives and who does not, who can credibly threaten to walk away and who is locked in.
Shapiro's framework, applied to the AI surplus, identifies the bargaining positions of each claimant with uncomfortable clarity.
The workers — Segal's engineers, the developers worldwide, the knowledge workers whose productivity AI has multiplied — are in the weakest bargaining position. The same technology that multiplied their individual output has also multiplied the output of every other worker with access to the same tool. When one engineer can do the work of twenty, the employer needs fewer engineers. The scarcity that previously supported the engineer's wage — the limited supply of people who could write the code, design the system, build the product — has been dramatically reduced. The engineer is more productive but less scarce, and in labor markets, scarcity determines bargaining power more reliably than productivity.
The arithmetic is straightforward and merciless. If each engineer produces twenty times more output, and demand for the output has not increased twentyfold, the employer can produce the same output with fewer engineers. The engineers who remain may capture some of the surplus through higher wages, especially if their judgment and direction — the "remaining twenty percent" that Segal identifies as "everything" — is genuinely scarce. But the engineers who are displaced capture none of it. And the threat of displacement — the knowledge that the employer could reduce headcount and maintain output — weakens the bargaining position of even the engineers who remain.
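That arithmetic can be made concrete in a few lines. The 100-to-5 figure echoes the boardroom exchange Segal recounts; the function itself is an illustrative sketch, not a model from the book, and assumes output scales linearly with headcount and the multiplier.

```python
import math

def engineers_needed(baseline_headcount: int,
                     productivity_multiplier: float,
                     demand_growth: float) -> int:
    """Headcount required to hold output level with demand after
    a productivity change, measured in pre-AI engineer-units."""
    required_output = baseline_headcount * demand_growth
    return math.ceil(required_output / productivity_multiplier)

# 100 engineers, a 20x multiplier, demand unchanged: 5 engineers suffice.
print(engineers_needed(100, 20.0, 1.0))   # 5
# Even if demand quadruples, only 20 of the original 100 are needed.
print(engineers_needed(100, 20.0, 4.0))   # 20
```

The merciless part is the second call: demand would have to grow twentyfold for the original headcount to remain necessary.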
Segal confronts this arithmetic directly in The Orange Pill. He describes the boardroom conversation where the headcount reduction was proposed: "If five people can do the work of one hundred, why not just have five?" He chose to keep and grow the team. But he acknowledges that the choice was costly, that the investor's arithmetic was clean and seductive, and that the pressure to convert productivity gains into headcount reduction would return next quarter. The choice was a values decision, not an economic inevitability. A different leader, facing the same arithmetic, would have made the opposite choice — and in most boardrooms across the economy, the opposite choice is being made.
The employers are in a stronger bargaining position, but their position is complicated by competition in the product market. If every firm in an industry adopts AI and achieves similar productivity gains, the competition among firms will drive product prices down, transferring the surplus from employers to consumers. The firm that reduces prices fastest captures market share; the firm that maintains prices loses customers to competitors whose AI-augmented workforce produces at lower cost. In competitive product markets, the surplus flows downstream to consumers as lower prices, better products, or both.
But product markets are not always competitive, and the firms best positioned to capture AI surplus are precisely those with market power — the dominant firms in concentrated industries whose pricing is not fully disciplined by competition. A firm with market power can absorb the productivity gain without reducing prices, converting the surplus directly into profit. The concentration dynamics described in Chapter 8 — the tendency of AI platform markets to tip toward dominant firms — suggest that market power in the post-AI economy may be greater than in the pre-AI economy, which would shift the surplus distribution toward employers and away from consumers.
The AI platform providers — Anthropic, OpenAI, Google, and the handful of other firms that supply the foundational models — are in the strongest bargaining position of any claimant. The platform sits beneath every productive activity that uses AI. Every engineer who achieves a twenty-fold productivity gain achieves it through a platform subscription. The platform captures a toll on every unit of surplus generated through its tools. And the network effects, switching costs, and lock-in dynamics documented in Chapters 2 and 3 ensure that the toll is durable — users cannot easily switch to a competing platform, which means the platform can maintain or increase its pricing without losing its installed base.
The hundred dollars per month that Segal pays for Claude Max is, in the context of a twenty-fold productivity multiplier, extraordinarily cheap. The surplus generated by the subscription — the value of twenty engineers' output minus the cost of one engineer's salary and one subscription — is enormous. The platform captures a tiny fraction of the surplus it enables. But the fraction is collected from millions of subscribers, and the network effects ensure that the installed base grows while the switching costs ensure it does not shrink. The platform's share of the total surplus is small per user but enormous in aggregate, and the aggregate grows with every user who joins and every quarter that switching costs compound.
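A back-of-envelope calculation makes the asymmetry visible. The $100-per-month subscription and the twenty-fold multiplier come from the text; the salary and the dollar value of one pre-AI engineer's monthly output are assumptions chosen purely for illustration.

```python
SUBSCRIPTION = 100          # USD/month, Claude Max (figure from the text)
SALARY = 8_000              # USD/month, assumed fully loaded cost (assumption)
VALUE_PER_UNIT = 10_000     # USD/month value of one pre-AI engineer's output (assumption)
MULTIPLIER = 20             # productivity multiplier (figure from the text)

value_created = MULTIPLIER * VALUE_PER_UNIT   # what one augmented engineer now produces
total_cost = SALARY + SUBSCRIPTION            # what producing it costs
surplus = value_created - total_cost          # the expanded pie per engineer

platform_share = SUBSCRIPTION / surplus
print(f"monthly surplus per engineer: ${surplus:,}")
print(f"platform toll: {platform_share:.2%} of the surplus")
```

Under these assumed numbers the platform collects a fraction of a percent of the surplus it enables. The point survives any reasonable change to the assumptions: the per-user toll is tiny, and the aggregate, multiplied across millions of locked-in subscribers, is not.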
Shapiro's analysis of surplus distribution in information markets predicts that the platform layer will capture a disproportionate share of the AI surplus over time. The mechanism is the same one that made Google, Apple, and Facebook the most valuable companies in the world: the platform that sits beneath a diverse ecosystem of productive activity captures a toll on every activity, and the network effects that sustain the platform's position prevent competitive pressure from eroding the toll.
The distribution of the AI surplus has implications that extend beyond economics into the political economy of the transition. If workers capture little of the surplus because their scarcity has been reduced, the productivity gains that Segal celebrates will not translate into broadly shared prosperity. The economy will grow — more output, more value, more GDP — while the workers who produce the output experience stagnant or declining wages. The surplus will flow to employers (as profit), consumers (as lower prices), and platform providers (as subscription revenue and data assets), while the labor share of income — the fraction of total economic output that goes to workers — declines.
This is not a hypothetical concern. The labor share of income in advanced economies has been declining for four decades, and the decline accelerated during the period of rapid technological change from the 1990s onward. AI threatens to deepen the decline by simultaneously increasing output per worker (expanding the total surplus) and reducing the number of workers required to produce it (weakening labor's bargaining position). The macroeconomic consequence would be an economy that is more productive and more unequal — more output and more wealth, concentrated in fewer hands.
The institutional responses that could alter this distribution are familiar from previous technological transitions but have not been implemented for the AI transition. Progressive taxation of AI-derived corporate profits could redistribute some of the surplus from employers and platform providers to the public. Wage subsidies or earned income supplements could support workers whose bargaining power has been weakened by the technology. Investment in public goods — education, infrastructure, research — funded by the expanded surplus could ensure that the productivity gains translate into broadly shared capability rather than narrowly held wealth.
Segal's decision to keep and grow his team rather than convert productivity gains into margin is the individual-level expression of a distributional choice that must ultimately be made at the institutional level. The Trivandrum engineers kept their jobs. Their capabilities expanded. Their professional identities were enriched rather than diminished. But this outcome was the result of one leader's values, not of any institutional structure that would ensure the same outcome across the economy. In the absence of such structures, the default distribution will be determined by bargaining power — and bargaining power, in the AI-augmented economy, favors the platform over the employer, the employer over the consumer, and the consumer over the worker.
The economic pie is larger. The question is whether the institutions that divide the pie will ensure that the expansion benefits the people whose productivity created it, or whether the surplus will concentrate in the hands of whoever holds the strongest bargaining position. The economics predicts the default outcome. Only institutional design can alter it.
The distributional question intersects with the temporal inequality analyzed in Chapter 5 in ways that compound the urgency. The surplus generated during the transition period — the years when productivity multipliers are largest and institutional responses are least developed — is being distributed according to bargaining positions established in the pre-AI economy. Workers whose bargaining power derived from technical scarcity find that scarcity evaporating. Platform providers whose bargaining power derives from network effects and lock-in find that power compounding. The distribution established during the transition period will shape the distribution for a generation, because the wealth accumulated during the transition funds the lobbying, the regulatory capture, and the institutional design that determine the rules of the post-transition economy.
The window during which institutional design can shape the distribution is the same window during which the competitive dynamics described in Chapter 8 are being determined. The two processes — competitive tipping and surplus distribution — interact. A market that tips toward a monopoly platform concentrates surplus in the platform provider. A market that remains competitive distributes surplus more broadly. The antitrust intervention that preserves competition in the platform market simultaneously preserves a more equitable distribution of the surplus that the platform enables.
The economics is clear on the forces at work. The economics is silent on the values that should govern the distribution. Whether the twenty-fold productivity gain enriches the engineers who achieved it, the employers who deployed it, the consumers who benefit from it, or the platform providers who enabled it is not an economic question. It is a political question, answered by the institutional choices that societies make during the narrow window when the choices still matter.
The surplus has expanded. The choices are being made. And the default outcome — concentration at the platform layer, profit capture at the employer layer, stagnation at the worker layer — is not an economic law. It is the outcome that obtains when no one builds the institutions to produce a different one.
---
Every major technology transition in the history of information markets has produced the same sequence: capability expansion, market concentration, institutional lag, and then — if the institutions arrive in time — a negotiated settlement between the technology's power and the society's values. The settlement is never clean. It is never permanent. It is always contested. But the quality of the settlement determines whether the technology's benefits are broadly shared or narrowly captured, whether the transition enriches a society or fractures it.
Shapiro's career spans four such settlements: the telecommunications breakup, the Microsoft antitrust case, the rise of platform monopolies in social media and search, and now the emergence of AI as cognitive infrastructure. Each settlement taught lessons. Each lesson applies to the current moment with an urgency that the compressed timeline of the AI transition makes acute.
The first lesson is that institutional design must match the speed of the market dynamics it governs. The telecommunications settlement took decades — from the initial antitrust filing against AT&T in 1974 to the Telecommunications Act of 1996. The market moved slowly enough that the institutional response, though leisurely, arrived before the competitive dynamics had hardened irreversibly. The Microsoft case moved faster — filed in 1998, settled in 2001 — but still operated on a timeline of years, during which the browser market that was the case's proximate subject had already been decided by market forces. By the time the remedy arrived, the competitive harm had been inflicted and partially absorbed. The platform monopoly cases of the 2010s and 2020s moved slower still relative to the market dynamics they addressed — regulatory proceedings that took years to conclude in markets that were tipping in quarters.
The AI market is moving faster than any of its predecessors. Competitive positions that took years to establish in previous platform markets are being established in months. Lock-in that accumulated over years in enterprise software is accumulating in weeks. The compound network effect documented in Chapter 2 is compressing what was historically a multi-year tipping process into a timeline that the traditional regulatory machinery cannot match. Shapiro's advocacy for faster enforcement mechanisms — preliminary remedies that preserve competitive options while analysis proceeds — is not merely sensible in this context. It is the minimum necessary response to a market whose dynamics outpace every existing institutional process.
The second lesson is that institutional design must be platform-agnostic. The antitrust remedies directed at AT&T were specific to telecommunications. The Microsoft consent decree was specific to operating systems. Each was effective within its domain and irrelevant outside it. The AI platform market requires institutions that are not specific to any individual platform — not designed around Claude's architecture or GPT's capabilities or Gemini's integration strategy — but general enough to govern the competitive dynamics that all AI platforms share. Interoperability standards must work across platforms. Data portability requirements must be platform-independent. The educational frameworks that develop human judgment must not be optimized for any single tool's capabilities.
The platform-agnostic principle extends to the governance structures themselves. Shapiro has argued consistently that antitrust enforcement should be based on "strong empirical and theoretical foundations" rather than "preconceived views" about specific companies. The argument applies with particular force to AI, where the temptation to regulate based on the current competitive landscape — to design rules around today's market leaders — risks creating regulatory structures that serve the incumbents' interests by raising barriers to entry for future competitors. The institutions must govern the dynamics, not the firms. The dynamics — network effects, switching costs, data advantages, tipping — are consistent across firms and across time. The firms will change. The dynamics will not.
The third lesson concerns the demand side. Every previous institutional settlement focused primarily on the supply side — on what technology companies could build, how they could compete, what conduct was permissible. The demand-side question — what citizens, workers, students, and parents need to navigate the transition wisely — has been systematically neglected. Segal identifies this gap in The Orange Pill: "We are so busy building guardrails for the companies that the people those policies are supposed to protect remain wholly exposed."
Shapiro's framework, applied to the demand side, identifies specific institutional needs. Workers need transitional support that matches the speed of the displacement — not the multi-year retraining programs designed for industrial transitions but rapid-cycle professional development that can operate on the quarterly timescale at which AI is commoditizing skills. The senior engineer's dilemma described in Chapter 3 — the investment calculus under genuine uncertainty, with cognitive and identity switching costs that no financial mechanism can subsidize — requires institutional responses that address the non-financial dimensions of the transition: mentoring programs, professional communities, identity-reconstruction support that acknowledges the real psychological cost of abandoning a career's worth of expertise.
Educational institutions need reform that matches the depth of the transformation. The attentional ecology that Segal advocates — curricula that develop questioning over answering, assessment methods that evaluate process over product, structured integration of AI tools that preserves the developmental friction necessary for cognitive growth — represents a fundamental reorientation of educational purpose. The economics of merit goods provides the justification: students cannot evaluate the long-term developmental cost of outsourcing cognitive work to AI, which means the institutional framework must make the evaluation on their behalf. The specifics of this framework — age-appropriate AI access policies, assessment redesign, teacher training for AI-augmented pedagogy — must be developed and deployed at the speed of the technology's adoption, not at the pace of educational reform's traditional multi-decade cycle.
Consumers need information structures that resolve the lemons problem of polished output documented in Chapter 4. Credentialing systems that certify not just knowledge but the exercise of independent judgment. Quality assurance frameworks calibrated to AI-augmented production. Transparency standards that make visible the degree of AI involvement in professional work — not as a stigma but as an informational signal that allows the market to price judgment accurately. The economic logic is Akerlof's: the market cannot reward what it cannot observe, and the smooth output makes judgment unobservable. The remedy is to make judgment observable through institutional design.
The fourth lesson is perhaps the most uncomfortable: the institutions must be built by people who understand the technology, and the people who understand the technology have economic incentives not to build them. Shapiro's disclosure statement reveals consulting relationships with Apple, Google, Cisco, and other technology companies — the very firms whose competitive behavior the institutions must govern. The pattern is general: the expertise required to design effective AI governance resides primarily in the firms and individuals whose economic interests are served by the absence of governance. The regulatory agencies lack the technical expertise. The academic institutions lag the frontier. The policymakers depend on the very industry they are attempting to regulate for the information needed to regulate effectively.
This is not a problem that can be solved by excluding the technologically sophisticated from the governance process. Their expertise is necessary. The solution is structural: governance processes designed to surface and manage conflicts of interest, independent technical expertise funded by public investment rather than industry consulting, and transparency requirements that make the economic interests of governance participants visible to the public whose welfare the governance serves.
Segal calls for stewardship. Shapiro's framework specifies what stewardship requires in economic terms: institutions that preserve competition in the platform market, ensure equitable distribution of the AI surplus, protect the cognitive development of the next generation, resolve the information asymmetries that the smooth output creates, and operate at the speed of the market dynamics they govern. The institutions must be platform-agnostic, demand-side as well as supply-side, and designed by people whose expertise is real and whose conflicts are managed.
The fifth and final lesson: the institutions are never finished. The beaver's dam requires constant maintenance because the river constantly pushes against it. Shapiro's career demonstrates this through four decades of revisiting the same competitive dynamics in successive technology markets. The forces are the same. The applications change. The institutions that governed telecommunications were inadequate for operating systems. The institutions designed for operating systems were inadequate for social media. The institutions currently being designed for social media are already inadequate for AI. The institutional project is not a destination. It is a practice — ongoing, adaptive, attentive to the specific dynamics of the current market while grounded in the enduring economic principles that govern all information markets.
The AI surplus is being generated now. The competitive positions are being established now. The lock-in is accumulating now. The developmental attention of the next generation is being shaped now. The window during which institutional design can influence these outcomes is open now.
Shapiro and Varian wrote in 1999: "Technology changes. Economic laws do not." The corollary for institutional design is equally direct: technologies change, but the need for institutions that govern the distribution of their benefits does not. The AI transition requires institutions calibrated to its specific dynamics — the three-way network effect, the cognitive lock-in, the smooth output, the compressed timeline. But the purpose of the institutions is the same purpose that has motivated institutional design since the first commons was enclosed: ensuring that the power of the new technology is directed toward broad human welfare rather than narrow economic concentration.
The economic laws are patient. They will produce their predicted outcomes — concentration, lock-in, surplus capture — with or without institutional intervention. The institutions are what determine whether those outcomes serve the few or the many. The economics provides the map. The institutions are the territory. And the territory is being shaped right now, in legislative chambers and corporate boardrooms and classroom curricula and pricing meetings, by people who may or may not understand the forces they are directing.
Shapiro's framework does not prescribe values. It identifies the forces that values must govern. The prescription falls to the societies that choose what kind of intelligence age they wish to inhabit — and whether the institutions they build will match the ambition of the technology they have created.
---
Three dollars and forty-seven cents.
That is what it cost me, in Claude API fees, to generate the first draft of a contract that my lawyer would have billed four hours for. The contract was competent. The clauses were standard. The language was clean. My lawyer, when I showed it to him, found two issues — one a jurisdictional nuance the model missed, the other a liability provision that was technically correct but strategically unwise given the specific counterparty. Two issues out of twelve pages. Ninety-eight percent right, for less than the price of a coffee.
I did not fire my lawyer. I will not fire my lawyer. But I understood, in the three minutes it took him to find those two issues, exactly what I was paying him for — and it was not the twelve pages. It was the two catches. The judgment. The thing the smooth output concealed.
That gap — between the twelve pages anyone can now produce and the two catches that only expertise can identify — is the entire territory of this book. Carl Shapiro has spent forty years mapping how information markets allocate value when production becomes cheap, and every map he has drawn leads to the same coordinates: value migrates to whatever remains scarce. The scarce thing is never the output. It is always the judgment about whether the output is right.
What shook me in Shapiro's framework was not what I expected. I expected the network effects analysis to feel abstract — academic economics applied at arm's length to a world I experience with my hands. Instead, it described my own captivity with a precision that made me uncomfortable. I have been building with Claude for months. My workflows are Claude-shaped. My thinking rhythms are Claude-rhythms. I have accumulated exactly the kind of cognitive lock-in that Chapter 3 describes — not because anyone trapped me, but because every individual decision to go deeper with the tool was rational, productive, and slightly irreversible. I chose every step. I chose none of the destination. That is what lock-in means, and I did not understand it until an economist named it.
The lemons problem keeps me up at night. Not for myself — I know when I have exercised judgment and when I have deferred to the machine. But I think about the next generation of professionals, the ones who will grow up inside the smooth output from the start. How will they know the difference between genuine understanding and sophisticated pattern-matching? How will the market know? If the surface is indistinguishable, and the market rewards surfaces, then the slow erosion of the depth beneath the surface becomes invisible until something breaks — a legal argument that collapses under scrutiny, an architecture that fails under load, a medical recommendation that kills someone. The market failure is not dramatic. It is quiet, cumulative, and concealed by the very polish that makes AI output so seductive.
The surplus question is the one I carry into every boardroom. When my team achieves that twenty-fold multiplier, where does the value go? To the engineers whose skills made it possible? To the company whose products reach further? To the customers who get better experiences? To Anthropic, whose platform sits beneath everything, collecting its toll? I chose to invest the surplus in growing my team's capability rather than shrinking its headcount. But I know that choice is fragile — contingent on my values, my board's patience, and a market that could punish me for it any quarter. The institutions that would make that choice structural rather than personal do not yet exist. Shapiro's framework tells me they need to exist. It does not tell me they will.
What I take from this economist's work is a discipline I lacked. The discipline of asking not just "what can we build?" but "who captures the value of what we build, and what happens to everyone else?" The discipline of seeing my own lock-in clearly. The discipline of understanding that democratization is not an inevitable consequence of cheap tools — it is a possible consequence, contingent on pricing strategies, access policies, and institutional structures that someone has to build on purpose.
The river flows. The economics is real. And the institutions that will determine whether this moment enriches broadly or concentrates narrowly are being designed right now — or, more precisely, are not being designed right now, which is itself a design choice with consequences.
The economic laws are patient. They do not care about our intentions. They will produce their predicted outcomes with the indifference of gravity. The only variable is us — the institutions we build, the distributions we choose, the structures we maintain against the constant pressure of a market that optimizes for concentration the way water optimizes for the lowest point.
We are building the future right now. The economics tells us what we are building toward if we do nothing. The choice is whether to do something.
Build the institutions. Build them now. Build them at the speed the market demands. And build them for everyone — including the people who are not yet in the room.
-- Edo Segal
Every rational decision you made — learning the tool, building the workflow, going deeper — was a step into a lock-in that no one forced and no one can reverse. The economics of your captivity were written decades before the first prompt.
AI has made software nearly free to produce. A trillion dollars in market value has already evaporated as the old economy reprices around that fact. But cheap production does not mean broadly shared prosperity — it means the surplus flows to whoever controls what remains scarce. Carl Shapiro spent forty years mapping exactly these forces: network effects that tip markets toward monopoly, switching costs that convert voluntary adoption into involuntary dependence, and information asymmetries that let polished surfaces conceal the erosion of genuine expertise. This book applies his framework to the AI revolution with forensic precision, revealing who will capture the value of the twenty-fold productivity gain — and what institutions must be built, now, before the window closes, to ensure the answer is not just the platforms.

A reading-companion catalog of the 21 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Carl Shapiro — On AI uses as stepping stones for thinking through the AI revolution.