Lock-in arises when users accumulate investments — in learning, data, workflows, and complementary goods — that are specific to a chosen platform and cannot be transferred to alternatives without cost. These investments compound with each interaction, each one individually small enough to feel inconsequential, until their total exceeds the benefit of switching to any alternative. At that point the user is locked in — not by contract or coercion but by the accumulated weight of her own rational decisions. The mechanism transfers bargaining power from user to platform provider, because the locked-in user cannot credibly threaten exit. AI platforms are generating lock-in at a speed and depth unprecedented in the history of information goods.
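The compounding mechanism can be made concrete with a toy model. The sketch below is illustrative, not drawn from the text: it assumes each interaction adds a small, hypothetical platform-specific investment, and asks how many interactions pass before the accumulated switching cost exceeds a fixed benefit of moving to a rival.

```python
# Toy model of gradual lock-in (illustrative assumptions, not from the source):
# each interaction deposits a small platform-specific investment, and the user
# is "locked in" once accumulated switching costs exceed the benefit of moving.

def interactions_until_lock_in(per_interaction_cost: float,
                               switching_benefit: float) -> int:
    """Count interactions until accumulated switching costs exceed
    the benefit of switching. All parameter values are hypothetical."""
    accumulated = 0.0
    n = 0
    while accumulated <= switching_benefit:
        accumulated += per_interaction_cost  # each small, rational decision
        n += 1
    return n

# If each interaction adds a cost of 2 and a rival offers a benefit of 100,
# lock-in is reached after 51 interactions — none of them felt consequential.
print(interactions_until_lock_in(2.0, 100.0))  # -> 51
```

The point of the model is the shape of the curve, not the numbers: no single step crosses the threshold, so no single decision ever looks like the one that foreclosed exit.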
Shapiro and Varian identified lock-in as a defining feature of information markets in Information Rules: switching to incompatible products is costly, so customers can get "locked in" once they have invested in information goods built on a given technology. This lock-in transfers bargaining power from user to provider. The locked-in user cannot credibly threaten to leave because the cost of leaving exceeds the benefit, and the provider, knowing this, can raise prices, reduce quality, change terms, or redirect development away from user interests without losing its installed base.
The AI ecosystem generates four distinct sources of lock-in operating simultaneously. Data lock-in: conversation histories and interaction patterns accumulated over months of collaborative work, stored on platform infrastructure and not meaningfully portable. Workflow lock-in: the cognitive habits, prompting patterns, and productive methodologies engineers develop around specific platforms. Complementary goods lock-in: the ecosystem of tools, plugins, and integrations built around specific platforms. Identity lock-in: the professional self-conception that forms around a tool that catalyzes a transformation in how practitioners understand their own capabilities.
What makes AI lock-in qualitatively different from previous forms is the speed and invisibility of accumulation. Segal's Trivandrum training illustrates the dynamic with inadvertent precision: in five days, all four sources of lock-in accumulated to levels that in previous platform markets would have taken months or years to develop. The engineers did not choose lock-in. They chose productivity. Each individual decision was rational. The lock-in was the compound consequence of all of them.
The strategic consequences follow directly from Shapiro and Varian's analysis of previous platform markets. Lock-in transfers bargaining power. A platform provider that has locked in an installed base can raise prices because switching costs exceed the price increase. The platform can reduce quality because users cannot credibly exit. The platform can redirect development toward its own interests because user voice is muted by the absence of a viable exit option.
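The pricing claim above has a simple mechanical form, sketched below under stated assumptions (a frictionless comparison with hypothetical prices, not a model from the source): a user stays as long as the incumbent's price does not exceed the rival's price plus the one-time switching cost, so the switching cost is exactly the premium the incumbent can sustain.

```python
# Hedged sketch of the standard switching-cost pricing logic: the locked-in
# user compares the incumbent's price against the rival's price PLUS the
# one-time cost of switching, so the incumbent earns a premium equal to
# the switching cost. Prices and costs below are hypothetical.

def max_sustainable_price(rival_price: float, switching_cost: float) -> float:
    """Highest price the incumbent can charge without losing the user."""
    return rival_price + switching_cost

def user_stays(incumbent_price: float, rival_price: float,
               switching_cost: float) -> bool:
    """The user switches only when the rival is cheaper even after
    paying the one-time switching cost."""
    return incumbent_price <= rival_price + switching_cost

# A rival charging 20 cannot dislodge an incumbent charging up to 35
# when switching costs 15.
print(max_sustainable_price(20.0, 15.0))  # -> 35.0
print(user_stays(34.0, 20.0, 15.0))       # -> True
print(user_stays(36.0, 20.0, 15.0))       # -> False
```

The same inequality explains why the four lock-in sources compound: each adds to the switching-cost term, widening the premium the platform can extract without triggering exit.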
The concept of switching costs in economics originated in work by Paul Klemperer and Joseph Farrell in the 1980s. Shapiro and Varian's application to information markets extended and operationalized the framework across successive generations of technology, from enterprise software through operating systems to platform markets.
Lock-in is cumulative and gradual. Individual decisions to deepen engagement with a platform are each rational; the compound total produces a commitment too large to reverse.
The four dimensions operate simultaneously. Data, workflow, complementary goods, and identity lock-in compound in AI markets faster than in any previous platform market.
Cognitive switching costs are irreducible. Portability mandates and interoperability standards address data and complementary goods lock-in but cannot transfer the cognitive investment embedded in neural pathways.
Lock-in transfers bargaining power. Once users cannot credibly threaten exit, platforms can extract surplus and redirect development toward their own interests.
The traditional policy response — data portability, interoperability mandates, transparency requirements — was designed for pre-AI platforms and addresses only part of the AI lock-in problem. The cognitive dimension, embedded in user neural pathways, is beyond the reach of any existing regulatory mechanism. Whether new institutional mechanisms can address this dimension is among the central open questions of AI competition policy.