The AI economy is distributed in ways that are less consequential than they appear and concentrated in ways that matter more than is commonly recognized. The internet infrastructure through which AI is accessed is genuinely global. The applications built on AI are geographically distributed. But the core of the AI stack is controlled by a small number of corporations: the training corpora that encode the knowledge base, the model weights that represent the trained capability, and the inference infrastructure that makes real-time use possible. A handful of organizations make decisions about what texts enter the training data, how to weight different sources, what constraints to place on outputs, and what capabilities to expose through public APIs. These decisions shape the intellectual environment of every AI user without requiring users' consent or allowing their scrutiny. Concentration at the core of the AI commons inverts the distributed pattern that gave print culture its resilience, and the implications are far-reaching.
There is a parallel reading where concentration appears structural but is actually fragile — vulnerable to the same forces that broke previous technological monopolies. The semiconductor analogy breaks down precisely where it matters most: chips require decade-long capital cycles and irreplaceable fabrication expertise, while AI models can be replicated by anyone who can rent compute for a few months. The moats look formidable until you notice that Llama 3 runs on consumer hardware, that Mistral achieved frontier performance with a fraction of the capital, that the entire frontier shifted when attention mechanisms became public knowledge.
The concentration we observe in 2026 reflects a specific historical moment — the transition from research to productization — not a stable equilibrium. Every previous computing platform that looked permanently concentrated (mainframes, minicomputers, enterprise software) eventually fragmented when the underlying cost structure shifted. The cloud providers control infrastructure, but infrastructure has always been the layer that commoditizes first. The real question is whether model training costs continue their exponential climb or whether we are approaching the regime where algorithmic efficiency matters more than raw scale — the regime where a university lab can match a corporate cluster through better methods. If the latter, concentration is a temporary phenomenon, a brief window between "only researchers can do this" and "anyone can do this."
Eisenstein's analysis emphasized that print's distributed character was the structural foundation of the commons's resilience. Printing shops in two hundred European cities by 1500, each operating independently, meant that no single institution could control what circulated. A text banned in one territory could be printed in another. A publisher's error in one edition could be corrected in another. No central decision determined what entered the commons or what was preserved in it. The distribution of control produced both diversity and durability.
The AI stack inverts this pattern at its most consequential layers. The training data for frontier models is controlled by the companies that collect or license it. The models themselves are proprietary — even open-source releases typically lag the frontier by months or years. The inference infrastructure runs in corporate data centers whose details are not public. A handful of organizations — Anthropic, OpenAI, Google, Meta, and a few others — make the decisions that shape what AI systems can and cannot do, what they know and what they do not know, how they respond and how they refuse.
The concentration has immediate consequences for users. When a company retrains a model with different data, the model's outputs shift in ways that affect every user. When a company changes its content policies, the capability users can access changes accordingly. When a company decides to deprecate a model, work built on that model's specific behavior may break. Users have no mechanism to evaluate, contest, or reverse these changes. They often do not know changes have occurred until they observe differences in output.
The consequences extend beyond individual user experience to the structure of cumulative knowledge. If the AI commons becomes the primary infrastructure through which knowledge is produced, preserved, and accessed, then the corporations controlling the core of that infrastructure effectively control the conditions of intellectual life, to a degree no single institution has managed since the medieval monastic monopoly. The Eisenstein framework suggests this is a return to a pre-print condition dressed in post-print technology. Whether alternative institutions can emerge to distribute control (open-source models, cooperative training efforts, regulated access regimes) is among the most consequential questions of the AI transition.
The concentration of the AI stack is a consequence of specific technical and economic factors. Training a frontier large language model requires computational resources costing hundreds of millions to billions of dollars, access to massive text corpora, and teams of specialized researchers. The capital requirements have so far concentrated frontier development in a handful of well-funded organizations. The economic dynamics (network effects, data feedback loops, talent clustering) tend to reinforce concentration rather than erode it.
The analytical recognition of concentration as a structural problem emerged in the early 2020s as AI systems scaled. Scholars such as Shoshana Zuboff and Kate Crawford have extended surveillance-capitalism and platform-capitalism frameworks to analyze the specific patterns of AI-era concentration. The framing in this volume, as the inversion of Eisenstein's distributed print commons, is intended to provide historical perspective on what is structurally at stake.
Core layers are concentrated. Training data, model weights, and inference infrastructure are controlled by a handful of corporations.
Peripheral layers are distributed. Internet access and applications are global, but this distribution is less consequential than concentration at the core.
Inverts print's pattern. Print distributed control across thousands of printers; AI concentrates it in a few organizations.
Decisions reach all users. A single corporate decision can affect millions of users without their consent or scrutiny.
Return to pre-print conditions. Concentration at the core of the knowledge commons resembles the medieval monastic monopoly more than the post-Gutenberg distributed system.
Governance questions are open. Whether alternative institutions can distribute control is among the most consequential questions of the AI transition.
Whether AI concentration is transient or stable is contested. Open-source advocates argue that sufficiently capable open models will eventually match proprietary ones, restoring distribution to the AI stack as prices fall and scale requirements decrease. Skeptics argue that the scale requirements for frontier capability continue to grow, that network effects entrench incumbent positions, and that meaningful distribution requires institutional intervention (regulation, public infrastructure, cooperative ownership) that current political economies are unlikely to deliver. The Eisenstein framework does not resolve this question but sharpens it: what institutions would have to exist to produce for AI the kind of distributed commons that print's pattern of independent presses once made possible?
The truth depends on which layer you examine and which time horizon you consider. At the infrastructure layer, Edo's account largely holds (85%): cloud concentration is real, capital requirements are escalating, and dependency relationships constrain most actors. The contrarian view holds only weakly here (15%): infrastructure has commoditized before, but the pace of change matters, and regulatory capture moves faster than markets.
At the model layer, the weighting flips. The contrarian reading captures something essential (60%) — trailing capability does democratize, open-source models do narrow the gap, and algorithmic improvements do reduce capital requirements. But Edo's framework still holds (40%) because frontier capability continues to concentrate even as commodity capability spreads. This is precisely the two-tier structure the entry predicts: leading-edge consolidation plus commodity fragmentation, not one or the other.
The synthesis requires reframing concentration not as permanent or temporary but as phase-dependent. We are watching not a one-way phase transition but a system that oscillates between concentration (when scaling dominates) and fragmentation (when efficiency dominates). The regulatory question is not whether to preserve or break concentration, but how to govern a system that moves between these states faster than institutions can adapt. The geopolitical question is not whether oligopoly is stable, but what happens during the transitions, when control is most contestable and most vulnerable.