Versioning is the pricing strategy of offering different versions of the same underlying good at different price points, letting customers self-select into the version that matches their willingness to pay. Shapiro and Varian devoted an entire section of Information Rules to the strategy, showing that information goods make price discrimination both uniquely feasible (marginal costs are near zero) and uniquely consequential (the pricing architecture shapes how surplus is divided between consumers and producers). AI platforms practice versioning with a sophistication that raises a moral question absent from earlier applications: when the versioned good is a cognitive amplifier, the stratification affects not merely productivity but intellectual capacity.
The canonical illustration is the IBM LaserPrinter E, physically identical to the full-price LaserPrinter but containing a chip that artificially slowed its printing speed. IBM invested engineering effort to produce a deliberately degraded product because a lower price without quality degradation would have cannibalized the premium version's sales. The degradation was not a cost-saving measure but a strategic choice to segment the market — preventing high-willingness-to-pay customers from capturing consumer surplus the firm could otherwise extract.
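The logic can be sketched with a toy self-selection model. The numbers below are illustrative assumptions, not figures from the IBM case: a "pro" segment and a "casual" segment each value a full-speed and a deliberately slowed product differently, and the firm prices a menu so each segment picks the intended version.

```python
# Toy screening model with assumed valuations (illustrative only).
# Each segment has one buyer; rows give willingness to pay per version.
VALUES = {
    "pro":    {"full": 100, "degraded": 60},
    "casual": {"full": 50,  "degraded": 40},
}

def best_single_price():
    """Best revenue from selling only the full version at one price."""
    best = 0
    # The only candidate prices worth checking are the segments' valuations.
    for price in (VALUES["pro"]["full"], VALUES["casual"]["full"]):
        buyers = sum(1 for v in VALUES.values() if v["full"] >= price)
        best = max(best, price * buyers)
    return best

def versioned_revenue():
    """Revenue from a degraded/full menu that respects self-selection."""
    p_low = VALUES["casual"]["degraded"]  # casual segment buys degraded at 40
    # Incentive compatibility: the pro segment must prefer the full version,
    # i.e. 100 - p_high >= 60 - p_low, so the highest workable premium price is:
    p_high = VALUES["pro"]["full"] - VALUES["pro"]["degraded"] + p_low  # 80
    return p_high + p_low

print(best_single_price())  # 100: price at 100 (pro only) or 50 (both) ties
print(versioned_revenue())  # 120: the deliberately degraded version adds revenue
```

The degraded version earns nothing on its own merits; its role is to let the firm hold the premium price at 80 instead of dropping to 50 to reach the casual segment, which is exactly the cannibalization-prevention motive in the LaserPrinter E case.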
AI platforms now practice versioning across capability, context window, response speed, and usage volume. Anthropic offers Claude in free, twenty-dollar Pro, and hundred-dollar Max tiers; OpenAI and every other significant AI provider follow analogous structures. The tiers differ in model capability and usage restrictions, but the underlying system is a single good strategically versioned to capture different market segments.
The distinction from previous versioned goods is qualitative. Segal's metaphor of AI as amplifier means that versioning AI is versioning amplification itself. The free tier amplifies less; the premium tier amplifies more. Two builders of identical capability, exercising identical judgment, produce different-quality output depending on which version they can afford. The difference is not in human input but in the amplification — an economic wedge between capability and outcome that has no precise precedent.
The historical trajectory of information goods offers some reassurance. Premium features commoditize from the top down: today's premium version becomes tomorrow's free tier. Google Search in 2025 is vastly more powerful than Google Search in 2005. The free AI tier in 2030 will likely exceed the Max tier of 2026. But the time horizon is precisely the problem. The developer who cannot afford the premium tier today does not benefit from the price decline arriving in three years. Competitive advantages accumulated during the transition period compound, producing durable inequalities long after pricing differentials close.
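The compounding claim can be made concrete with a simple simulation. The growth factors below are assumptions chosen for illustration: two developers of identical ability whose accumulated output compounds with their tier's amplification, where one has premium access from the start and the other reaches parity only in year three.

```python
# Illustrative simulation (assumed numbers, not from the text).
PREMIUM, FREE = 1.30, 1.10  # assumed annual growth factors of accumulated output

def gap_ratio(horizon, parity_year=3):
    """Ratio of accumulated output: always-premium vs. late-parity developer."""
    a = b = 1.0
    for year in range(horizon):
        a *= PREMIUM
        # The second developer gets premium amplification only from parity_year on.
        b *= PREMIUM if year >= parity_year else FREE
    return a / b

print(round(gap_ratio(3), 2))   # ratio at the moment pricing equalizes
print(round(gap_ratio(10), 2))  # same ratio seven years later
```

Under these assumptions the relative gap locked in during the three differentiated years, about 1.65x, never closes after parity: both developers grow at the same rate thereafter, so the advantage persists indefinitely, and the absolute gap keeps widening.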
Price discrimination has been studied in economics since Pigou's 1920 distinction between first-, second-, and third-degree discrimination. Versioning as a specific application to information goods was developed in Shapiro and Varian's 1999 treatment, drawing on decades of work on multi-product pricing and product line design.
Versioning segments markets along willingness to pay. By offering multiple versions at different prices, firms extract more surplus than single-price models allow.
Degradation can be strategic. Deliberately inferior versions exist to prevent cannibalization of premium offerings, not to save production costs.
AI versioning stratifies amplification. Unlike versioning of printers or software features, AI versioning creates differential cognitive reach across users of identical capability.
Temporal inequality compounds. Advantages accumulated during transition periods when pricing is most differentiated persist long after prices equalize.
Whether versioning AI tools represents efficient market segmentation or problematic stratification of cognitive capacity depends on values external to economic analysis. The framework identifies the mechanism; the normative evaluation depends on how much one values equality of cognitive reach relative to the revenue that sustains continued AI development.