Moore's distinction between visionaries and pragmatists is the psychological foundation of Crossing the Chasm. Visionaries adopt because of what a technology could become; they buy potential, tolerate incompleteness, and absorb personal risk to make half-finished tools perform. Pragmatists adopt because of what a technology has already done for someone like them; they buy proof, require institutional infrastructure, and treat visionary enthusiasm as a warning signal. The two populations are not positioned along a single continuum of risk tolerance — they inhabit incompatible evaluation frameworks, and the structural consequence is that visionary testimony, however accurate, cannot cross the chasm to pragmatist audiences. In the AI context, the triumphalist posts that dominated 2025–2026 discourse were visionary communication reaching pragmatist readers and actively repelling them.
There is a parallel reading that begins not with psychology but with market structure. The visionary-pragmatist distinction Moore describes is not a natural psychological taxonomy but the artifact of a specific corporate sales environment—one where enterprise buyers controlled budgets, implementation timelines stretched across quarters, and failure was organizationally visible. The framework emerged from watching Cisco and Sun win deals in Fortune 500 accounts. It described a world where adoption meant corporate procurement.
That world is dissolving. AI adoption increasingly bypasses institutional gatekeepers entirely. Individual contributors adopt through personal subscriptions. Departments adopt through credit cards. The "pragmatist" waiting for institutional proof never enters the decision chain because the decision has already distributed itself across a thousand small bets made by people Moore would classify as having no adoption authority. What looks like the chasm persisting is actually the chasm becoming irrelevant—not because it was crossed but because the technology routed around it. The triumphalist posts aren't failing to persuade pragmatists; they're succeeding at coordinating an adoption pattern that makes pragmatist consent structurally optional. The framework's explanatory power in the AI case may derive not from its accuracy but from its obsolescence—it names a crossing that no longer needs to happen.
The distinction begins with Rogers but sharpens under Moore's analysis. Rogers described early adopters as respected local opinion leaders whose deliberate adoption legitimizes an innovation for the early majority. Moore's innovation was to show that this legitimizing function fails when the technology is discontinuous — when it demands not incremental workflow change but wholesale restructuring. For discontinuous innovations, the early adopter's reference is worse than useless; it is anti-evidence for the pragmatist audience.
The evaluation asymmetry is grounded in different risk environments. The visionary operates with high tolerance for failure, low institutional accountability, and strong personal incentive to bet on emerging technologies. The pragmatist operates with narrow failure tolerance, heavy institutional accountability, and strong professional incentive to avoid early-stage commitments. Both postures are rational within their respective positions. Neither can be talked out of its position by the other's testimony.
The Geoffrey Moore — On AI volume extends this distinction to the AI discourse, where it illuminates phenomena that have puzzled observers: why developer productivity metrics circulate among developers without reaching enterprise decision-makers, why the silent middle remains silent, and why the triumphalist posts that celebrate weekend builds actively harden pragmatist resistance. The pattern is not confusion. It is Moore's framework operating at scale.
The implication for AI strategy is unforgiving. No amount of visionary evangelism will cross the chasm. The crossing requires translation — taking the pragmatist's problem off the table in the pragmatist's own language, with the pragmatist's own references, inside the pragmatist's own institutional context. This is the work of building the whole product for a specific beachhead segment, and it is the work almost no one in the AI industry is doing at the required depth.
The distinction crystallized during Moore's consulting engagements in the late 1980s, where he watched superior technologies lose to inferior ones because the winning companies had done the patient work of pragmatist translation while the losing companies had invested in visionary marketing.
Visionaries buy potential. They tolerate incompleteness because they are buying a future, not a product.
Pragmatists buy proof. They require reference customers in their own industry facing their own problems at their own scale.
Visionary references repel pragmatists. The evidence that excites one population triggers the other's defensive response.
The evaluation frameworks are incompatible. Neither population can be persuaded by the other's reasoning, because they operate under different risk structures.
Translation is the strategic task. Crossing requires restating the technology's value in pragmatist terms — not evangelism but whole product delivery.
The psychological distinction Moore identifies is fully accurate (100%) for the population he studied—enterprise technology buyers making institutional commitments in the 1980s and 1990s. The evaluation asymmetry is real, the reference repulsion is real, and the translation requirement is real within that context. The contrarian reading is right (80%) that this institutional adoption path now operates in parallel with distributed adoption patterns that bypass it entirely.
What's actually happening with AI is adoption at multiple layers simultaneously, and the right weighting depends on which layer you're examining. For foundation model development and infrastructure build-out, Moore's framework applies completely—the pragmatist translation work (whole product, beachhead, reference customers) determines which platforms achieve enterprise lock-in. For individual tooling and workflow augmentation, the contrarian view dominates—adoption happens through accumulated small bets that make institutional resistance expensive to maintain. For mid-market transformation (the space between individual tools and foundation infrastructure), both views hold in tension—some adoption routes around gatekeepers while some requires the patient translation work Moore describes.
The synthesis the discourse needs is recognizing that "crossing the chasm" and "routing around the chasm" are not competing predictions but simultaneous processes operating at different scales. The triumphalist posts coordinate distributed adoption while failing at institutional persuasion. The silent middle watches both processes and hedges. The strategic question isn't which framework is right but which adoption layer your position requires you to address—and whether you're doing the work that layer demands.