The stratified divergence is the basin of attraction produced when the AI transition democratizes production but concentrates the development of judgment. A small population of practitioners develops the higher-order capacities (judgment, taste, strategic intelligence, the ability to direct AI tools toward outcomes that serve genuine needs) and commands a premium for them. A much larger population operates at the level of tool competence, producing adequate output directed by the upper tier and cycling through successive waves of tool adoption and retraining without developing deeper capacities. The configuration is self-reinforcing through the economics of skill development, and it carries distinctive political and normative consequences.
There is a parallel reading that begins from the material substrate AI requires rather than the capabilities it enables. The stratified divergence Segal describes may be less about judgment development and more about infrastructure control. Those who own the compute, the data centers, the model weights, and the distribution channels determine not just who can produce but what counts as production. The division isn't between those with taste and those without — it's between those who set the parameters within which all judgment operates and those who operate within pre-set parameters.
This reading suggests the two-tier system emerges not from differential access to mentoring and time for reflection but from the fundamental asymmetry between platform owners and platform users. The "judgment" the upper tier develops may be less a matter of genuine discernment than of fluency with the particular constraints and affordances of proprietary systems. What appears as superior taste may actually be insider knowledge of how the tools work, what they optimize for, and how to game their hidden logics. The lower tier isn't failing to develop judgment; it is developing judgment within a sandbox whose rules it neither sets nor fully understands. Public investment in judgment development, while valuable, cannot address this more fundamental asymmetry. Without public alternatives to private infrastructure, without transparency in model development, and without genuine user control over the tools of production, the stratification reproduces itself through the material conditions of AI deployment rather than through differential skill development.
The mechanism is the economics of upper-tier capability development. Judgment, taste, and strategic intelligence require time, mentoring, and institutional support to develop: resources available to those whose economic security already permits long-term investment in their own capacities, and unavailable to those whose position demands the continuous production of saleable output.
The stratification is different in kind from the conservation-phase division between specialists and generalists. It is a division between those who judge and those who execute — a division with different political implications because it concentrates the capacity to shape outcomes among those already positioned to shape them.
The democratization of capability, which The Orange Pill celebrates, is real. Anyone with AI tools can produce competent output. But competent output is not excellent output, and if the capacity to produce excellent output remains concentrated among a small population, democratization of production may coexist with stratification of value. Everyone can build. Only some can build well. The premium accrues entirely to those who build well.
Prevention requires public investment in judgment development that is available regardless of economic position: the educational, institutional, and cultural infrastructure that treats depth not as a private luxury but as a public good.
Described in On AI as one of three candidate basins for the AI reorganization, extending analyses of stratification dynamics in prior technology transitions.
Two tiers. A small judgment tier, a large execution tier, with limited mobility between them.
Economic gatekeeping. Upper-tier capacities require resources unavailable to those most dependent on lower-tier income.
Democratization plus stratification. Production becomes universal; value remains concentrated.
The question of what drives stratification in the AI transition depends fundamentally on which layer of the system we examine. At the capability-development layer, Segal's analysis dominates (75/25): judgment, taste, and strategic intelligence do require time and mentoring that economic precarity puts out of reach. The workshops, residencies, and experimental spaces where deep practice develops remain luxuries. But at the infrastructure layer, the contrarian view carries more weight (80/20): control over compute, models, and platforms creates hard boundaries that no amount of judgment can cross.
The synthesis emerges when we recognize these aren't competing explanations but interacting dynamics. Infrastructure control shapes what kinds of judgment can develop and matter; differential judgment development determines who gets recruited into infrastructure control. The upper tier isn't monolithic — it includes both those who develop genuine discernment and those who merely occupy positions of platform power. The lower tier experiences both kinds of exclusion simultaneously: locked out of the time needed to develop taste and locked out of the systems that would make their taste matter.
The intervention this suggests is more radical than either view alone implies. Public investment must target both judgment development (through education, mentoring, and supported practice) and infrastructure alternatives (through public compute, open models, and transparent systems). The ratio matters: 60% of the solution lies in democratizing the capacity for judgment, as Segal suggests, but the remaining 40% requires confronting the material conditions that determine whose judgment counts. Without both, the stratified divergence reproduces itself through the interaction of economic and technical gatekeeping.