The Geoffrey Moore — On AI volume extends the technology adoption lifecycle to nations, arguing that countries sit at different positions on the AI adoption curve. Their strategic prospects depend less on generic product capability (models, chips, research) than on whole product infrastructure: education systems, retraining programs, regulatory frameworks, institutional trust, and cultural narratives about human value. The United States leads in visionary-phase investment but lags in pragmatist-population readiness. The European Union builds regulatory infrastructure but may over-optimize the productivity zone at the expense of the incubation zone. China attempts to force-march through the chasm via state direction, producing breadth without depth. The nations that lead the AI era will be the ones that build the best whole products for their citizens, not the ones that build the best models.
The framework reframes the geopolitical AI competition. The standard narrative treats the race as a technology race — which nation produces the most capable frontier model, secures the most advanced chips, attracts the most elite research talent. Moore's framework suggests this narrative is misdirected. The model is the generic product. The nation's institutional infrastructure is the whole product. The competitive advantage resides in whether citizens can use AI wisely, resist AI-enabled manipulation, and direct the capability toward genuine flourishing — not in who builds the technology first.
The United States has invested enormously in generic product capability. American AI companies produce the most capable frontier models. American developers adopt AI coding tools at rates that dwarf those of other nations. American venture capital flows into AI at unprecedented scale. But the whole product for the American citizen is among the weakest in the developed world relative to technology capability. Retraining infrastructure is rudimentary. Curricula have not adapted. The regulatory framework is fragmented. The gap between what American AI can do and what the average American is prepared to do with it is wider than in any comparable nation.
The European Union presents the inverse profile. The EU AI Act is the most comprehensive AI regulatory framework globally. The whole product infrastructure — consumer protection, transparency requirements, institutional oversight — is being built deliberately. But the EU risks over-optimization: raising compliance costs beyond what startups can absorb, ensuring that the economic benefits of AI flow to jurisdictions with lower regulatory friction, and building infrastructure for capabilities that may not be commercially available domestically.
China pursues a third model: state-directed force-march through the chasm. Massive state investment, strategic priority designation, directed deployment across the economy. Moore's framework predicts this produces adoption breadth but not adoption depth. State-mandated deployment generates compliance, not capability. The pragmatist psychology that produces genuine integration cannot be forced. The tools are used because they must be, not because users have developed the judgment to use them well.
The nation-as-market framework is an extension of Moore's concepts beyond corporate strategy — an extension the Geoffrey Moore — On AI volume articulates explicitly, drawing on broader discussions of AI geopolitics in the work of Dario Amodei, Francis Fukuyama, and others.
Nations are markets at different lifecycle positions. They contain different proportions of adopter segments and require different strategies.
The generic product is the model; the whole product is institutional infrastructure. Competitive advantage resides in the latter.
The United States leads in visionary investment but lags in pragmatist readiness. The gap between capability and citizen preparation is wide.
The EU builds regulation at the cost of commercialization. The whole product without the generic product yields oversight infrastructure with no domestic capability to govern.
State direction produces breadth without depth. Mandated adoption generates compliance, not the judgment that makes adoption productive.