Geoffrey Moore — On AI
Contents
Cover
Foreword
About
Chapter 1: The Chasm No One Sees
Chapter 2: The Visionary's Blind Spot
Chapter 3: The Whole Product and the Beaver's Dam
Chapter 4: The Bowling Alley — Where AI Actually Crosses
Chapter 5: The Tornado and the Death Cross
Chapter 6: Zone Management in the Age of AI
Chapter 7: The Identity Chasm
Chapter 8: The Laggard's Wisdom
Chapter 9: The Nation as Market — Geopolitics of the Adoption Lifecycle
Chapter 10: After the Tornado — What Remains When the Dust Settles
Epilogue
Back Cover

Geoffrey Moore

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Geoffrey Moore. It is an attempt by Opus 4.6 to simulate Geoffrey Moore's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The sequence I got wrong was not a sequence of tasks. It was a sequence of people.

When I stood in that room in Trivandrum and told twenty engineers they were about to become superheroes, I was telling the truth. The capability was real. The twenty-fold multiplier was measurable. What I had not measured was which of those twenty people were ready to hear it, which needed proof before they could move, and which needed something I had not even thought to build — a story about who they would become on the other side.

I treated the room as one audience. It was five.

Geoffrey Moore has spent three decades mapping the invisible fractures inside every audience that encounters a new technology. His insight is deceptively simple: the people who adopt first and the people who adopt next are not the same population moving at different speeds. They are different populations, motivated by different values, persuaded by different evidence, and separated by a gap so wide that the enthusiasm of the first group actively repels the second. He called that gap the chasm, and the chasm has killed more promising technologies than any competitor ever has.

When I discovered Moore's framework during the writing of *The Orange Pill*, it did not replace what I already understood about the AI moment. It sharpened it. I had been thinking about intelligence as a river, about builders as beavers, about the ascending friction that replaces mechanical struggle with cognitive challenge. Moore gave me something I was missing: the map of who crosses when, and why the people still standing on the far side are not failing to see what I see. They are seeing something I cannot — the whole product that has not been built, the institutional infrastructure that does not exist, the identity narrative that no one has offered them.

This book walks through Moore's complete lifecycle as applied to the AI revolution. Not as business strategy, though it functions as that. As a theory of readiness — individual, organizational, national. The question is not whether AI is powerful. The question is whether we have built everything around that power that makes it usable by the people who did not build it and do not yet trust it.

The chasm is real. The pragmatists are watching. And the work of crossing — patient, unglamorous, segment by segment — is the work that determines whether this technology reaches the people who need it most or stays trapped on the visionary's side of the gap.

Moore's lens does not contradict the river. It shows you where the dams need to go.

Edo Segal · Opus 4.6

About Geoffrey Moore


Geoffrey Moore (b. 1946) is an American organizational theorist, management consultant, and author whose work on technology adoption strategy has shaped how the global technology industry brings products to market. His landmark book *Crossing the Chasm* (1991) introduced the concept of the "chasm" — a critical gap between early adopters and the pragmatic mainstream that kills most technology ventures — drawing on Everett Rogers's diffusion of innovations research and transforming it into an actionable strategic framework. Moore extended this work in subsequent books, including *Inside the Tornado* (1995), *The Gorilla Game* (1998), *Living on the Fault Line* (2000), *Dealing with Darwin* (2005), and *Zone to Win* (2015), developing concepts such as the whole product model, the bowling alley strategy for sequential market development, core-versus-context prioritization, and four-zone organizational management for navigating disruptive transitions. A longtime consulting partner at McKenna Group and later chairman emeritus at Geoffrey Moore Consulting, he has advised companies including Salesforce, Microsoft, Cisco, and HP. His frameworks remain the standard strategic vocabulary for technology go-to-market planning and have influenced multiple generations of Silicon Valley leaders.

Chapter 1: The Chasm No One Sees

Every technology that has ever mattered nearly died in the same place.

Not in the lab where it was born, and not in the market where it eventually thrived, but in the gap between those two worlds — the space where enthusiasm runs out and evidence has not yet accumulated, where the people who love the technology cannot understand why everyone else hesitates, and where the people who hesitate cannot articulate what they are waiting for. Geoffrey Moore gave that gap a name in 1991. He called it the chasm. And the chasm has killed more promising technologies than competition, regulation, or technical failure combined.

Moore's insight, developed across three decades and six books, begins with an observation borrowed from rural sociology. In 1962, Everett Rogers published *Diffusion of Innovations*, which synthesized research on how new agricultural practices spread through farming communities in Iowa and elsewhere. Rogers found that adoption did not proceed uniformly. It followed a bell curve, with five distinct groups arriving at different times for different reasons: innovators, early adopters, early majority, late majority, and laggards. The Iowa farmers who tried hybrid corn seed first were not just faster versions of the farmers who tried it last. They were psychologically different people, motivated by different values, responsive to different evidence, operating inside different social networks.

Moore took Rogers's bell curve and broke it. Not metaphorically — structurally. He argued that between each adopter segment there exists a gap, a discontinuity where the reasons that motivated the previous group to adopt do not transfer to the next group. The largest of these gaps, the one where technologies go to die, sits between the early adopters and the early majority. Between the visionaries and the pragmatists.

The distinction between these two groups is the foundation of everything Moore built, and it is the distinction that the AI discourse of 2025 and 2026 has almost entirely failed to grasp.

Visionaries adopt technology because of what it could become. They see potential. They tolerate incompleteness. They are willing to absorb risk, manage workarounds, and invest personal energy in making a half-finished tool perform, because they are buying a future, not a product. The engineer in Trivandrum who felt the orange pill, the solo builder who shipped a revenue-generating product over a weekend using Claude Code, the developer who posted at three in the morning about what she had just built — these are visionaries. Their experience is genuine. Their excitement is earned. And their testimony is almost useless for crossing the chasm, because pragmatists do not trust visionaries.

Pragmatists adopt technology because of what it has already done for someone like them. They want proof, not potential. They want a reference customer in their own industry, facing their own problems, operating at their own scale. They want the whole product — not just the core technology but the integrations, the training, the support infrastructure, the documentation, the institutional legitimacy that signals this is not an experiment but a solution. A pragmatist does not care that a frontier engineer in San Francisco built something extraordinary over a weekend. A pragmatist cares that a company in her sector, with her constraints, deployed this tool and measured the results, and the results held up under audit.

The chasm exists because visionaries and pragmatists operate inside fundamentally incompatible evaluation frameworks. The visionary's reference — "Look what I built in a day!" — actively repels the pragmatist, because it signals exactly the kind of incomplete, unsupported, unproven deployment that pragmatists have learned, through painful experience, to avoid. Every pragmatist has been burned by a visionary's enthusiasm. Every pragmatist has watched a promising technology fail not because the technology was bad but because the surrounding infrastructure — the training, the integration, the organizational change management — was not there.

Moore's framework predicts that the AI discourse of 2025 and 2026 would be dominated by visionaries, and it was. The triumphalist posts on social media, the breathless conference demos, the productivity metrics that circulated through developer communities like personal records at a track meet — all of this was visionary testimony. Genuine, accurate, and structurally incapable of reaching the people who needed it most.

The parent lying awake at two in the morning wondering what to tell her child about AI is not a visionary. She is a pragmatist. She does not need to hear that Claude Code can build a prototype in an hour. She needs to hear that a school district like hers deployed AI tools, measured the impact on student learning, adjusted for the risks, and produced outcomes she can verify. She needs the whole product, not the generic product. And the whole product for AI in education — the curriculum frameworks, the teacher training, the assessment redesign, the data privacy infrastructure, the parent communication protocols — does not yet exist.

The middle manager staring at a dashboard that no longer makes sense is a pragmatist. He does not need a demo of what AI can do. He needs a deployment guide written for his industry, his team size, his regulatory environment, and his specific set of legacy systems. He needs to call a peer at a comparable company and hear, "We did this. Here is what worked. Here is what broke. Here is the vendor that stood behind the product when it broke." That infrastructure of pragmatist-to-pragmatist reference is how every previous technology crossed the chasm, and it is the infrastructure that AI has barely begun to build.

Moore himself acknowledged this asymmetry in a February 2026 interview with diginomica, where he assessed the state of agentic AI adoption: "There are some beachhead markets, but when you 'cross the chasm' it's when one function completely re-engineers itself. Coding is beginning to feel a bit toward that, Customer Service in call centers is a bit toward that, but I don't think in either case, there's enough trapped value in the process to cause the complete takeover of the thing." The language is characteristically precise. Not "AI hasn't crossed the chasm." But "AI is crossing in specific segments, and the segments where it has crossed are not yet sufficient to pull the mainstream behind them."

This segmented view of adoption is Moore's deepest contribution to understanding the current moment. AI is not one technology crossing one chasm. It is, as Jakob Nielsen observed in a 2025 analysis applying Moore's framework, "a swarm of adoption curves moving at different speeds." Developer tools crossed first. Consumer chatbots crossed next — Moore noted in a 2024 podcast that consumer generative AI faces "no chasm" because "it doesn't cost anything to adopt it." Enterprise deployment of AI for mission-critical business processes has not crossed. Agentic AI has not crossed. AI in education has not crossed. AI in healthcare has not crossed, though the beachhead segments are forming.

Each of these crossings requires its own whole product. Each has its own pragmatist population with its own reference requirements. And the mistake that the AI industry is making, with the consistency that Moore's framework predicts, is treating the visionary's crossing as evidence that the chasm has been crossed for everyone.

This mistake has a name in Moore's vocabulary: premature mainstream marketing. It is the error of behaving as though the technology is already in the tornado — the phase of mass adoption where demand exceeds supply and the winning strategy is to ship as fast as possible — when the technology is actually still in the chasm or, at best, in the bowling alley, the phase where adoption proceeds one vertical segment at a time. The tactical requirements of these phases are not just different but opposite. In the tornado, breadth wins: ship to everyone, capture market share, establish the de facto standard. In the bowling alley, depth wins: serve one segment so completely that it becomes a reference for the next.

The AI industry in 2026 is behaving as though it is in the tornado. It is mostly still in the bowling alley.

The consequence of this strategic mismatch is predictable from Moore's framework: wasted resources, disappointed pragmatists, and a growing backlash that conflates the technology's immaturity with its fundamental nature. When a pragmatist tries an AI tool that was marketed as a complete solution and discovers it requires significant human oversight, organizational change, and technical integration that was not mentioned in the demo, the pragmatist does not conclude that the tool needs a better whole product. The pragmatist concludes that AI does not work. And that conclusion, once formed, is extraordinarily difficult to reverse, because pragmatists talk to other pragmatists, and negative references propagate faster than positive ones.

Moore's framework also illuminates the silent middle that The Orange Pill identifies as the largest and most important group in any technology transition. These are the people who feel both the exhilaration and the loss, who have tried the tools and found them impressive but incomplete, who are neither triumphalist nor elegist but simply uncertain. In Moore's taxonomy, the silent middle is the early majority. They are pragmatists. They are watching. They are waiting for evidence. And the evidence they need is not more demonstrations of capability but proof of institutional integration — proof that someone like them, in a context like theirs, adopted AI and emerged better for it.

The chasm framework carries an uncomfortable implication that neither the optimists nor the pessimists want to hear. The optimists want to believe that AI adoption is inevitable, that the technology is so powerful that resistance will melt before it. Moore's entire career has been spent demolishing this belief. The chasm is littered with technologies that were powerful, elegant, and demonstrably superior to what came before — and that died anyway, because their creators could not build the whole product that pragmatists required. Betamax. The Segway. Google Glass. Each was technically excellent. Each failed to cross.

The pessimists want to believe that resistance can hold — that the institutions, the norms, the human capacity for refusal will slow the adoption to something manageable. Moore's framework is equally unforgiving toward this position. The chasm is not permanent. Technologies that find their beachhead, that build the whole product, that accumulate pragmatist references — these technologies do cross, and when they cross, the acceleration is violent. The pragmatist majority does not adopt gradually. It adopts in a rush, because pragmatists are herd animals: they wait for proof, and when the proof arrives, they all move at once.

The question is not whether AI will cross the chasm for enterprise, for education, for healthcare, for the public sector. It will. The question is when, in what form, and at what cost to the people who are standing in the gap right now, without the institutional support that would make the transition bearable.

Moore himself has been remarkably clear about where the highest-value crossings should happen. In a November 2024 blog post, he wrote that his "altruistic wish" was that AI target the public sector: "Higher education, social services, healthcare, and law enforcement are all staggering under increasingly untenable demands... All of them are target-rich environments for applying AI." This is not visionary enthusiasm. It is pragmatist analysis: identify the domains with the most trapped value, build the whole product for those domains, and let the results speak for themselves.

The chasm that no one sees is not the gap between AI's capability and its adoption. Everyone sees that gap. The chasm no one sees is the gap between the visionary's experience of AI and the pragmatist's need. Between the demo and the deployment. Between the prototype that works in a weekend and the institutional infrastructure that makes it work on Monday morning, and Tuesday, and the Monday after that, for a thousand users who did not build it and do not understand it and need it to be as reliable as electricity.

That chasm is where the real work lives. And in Moore's framework, it is the only work that matters.

Chapter 2: The Visionary's Blind Spot

The most enthusiastic evangelists for any technology are always the worst guides for the people who must actually live with it.

This is not a criticism of enthusiasm. Visionaries are essential. They are the people who see the future before it arrives, who tolerate the pain of incomplete tools, who invest personal energy and reputation in technologies that most reasonable people consider unproven. Without visionaries, no technology crosses anything. The personal computer would have remained a hobbyist curiosity. The internet would have stayed an academic network. The smartphone would have been a pager with ambitions.

But visionaries have a structural blind spot, and Geoffrey Moore's framework identifies it with surgical precision: visionaries do not understand why pragmatists hesitate, and their inability to understand this hesitation makes their testimony counterproductive for the very audience that determines whether a technology succeeds at scale.

The blind spot is not intellectual. Visionaries are often the most intelligent people in the room. The blind spot is sociological. Visionaries and pragmatists inhabit different social worlds, respond to different evidence, and evaluate risk through incompatible lenses. When a visionary says, "I built a revenue-generating product in a weekend with Claude Code," the visionary means: Look at the power of this tool. Look at what is now possible. The future has arrived. When a pragmatist hears the same statement, the pragmatist thinks: That person took unquantified risks with an unproven tool and has no long-term data on maintenance, scalability, security, or regulatory compliance. I will wait.

Both responses are rational. Both are grounded in accurate assessments of the situation. But they are grounded in different situations, because the visionary and the pragmatist occupy different positions in the market, face different consequences for failure, and answer to different stakeholders.

Moore formalized this asymmetry in the concept of the reference customer. In crossing the chasm, the critical strategic asset is not the technology, not the vision, not even the team. It is the reference: a customer who has deployed the technology, measured the results, and is willing to tell other customers about the experience. The reference is the bridge across the chasm.

But not any reference will do. The reference must be someone the pragmatist recognizes as a peer. A pragmatist in financial services does not care that a startup in gaming deployed AI successfully. A pragmatist in healthcare does not care that a developer tools company crossed the chasm. The reference must be from the same industry, facing the same problems, operating under the same constraints. The pragmatist's evaluation question is not "Does this technology work?" but "Does this technology work for someone like me?"

This is why visionary testimony, however genuine, does not cross the chasm. The visionary is not "someone like" the pragmatist. The visionary operates in a different risk environment, with different tolerance for failure, different organizational culture, and different stakeholder expectations. The frontier builder who shipped a product over a weekend took risks that a Fortune 500 product manager cannot take, and the pragmatist knows this, and the visionary's excitement looks, from the pragmatist's position, less like evidence and more like recklessness.

Moore made a characteristically blunt observation about this dynamic in the AI context. In his August 2023 LinkedIn article "Making Peace with Generative AI," he wrote: "We should understand that [generative AI] is an advisory technology. It is not automation. That is, it is not eliminating the need for human beings to make judgment calls. Rather, it is accelerating the preparation for so doing and framing the options in ways that make decision-making more straightforward." The language is deliberately deflationary. Moore is not diminishing the technology. He is translating it into the language pragmatists speak: not "this changes everything" but "this makes your existing decision-making process faster and better-informed." The visionary hears this and thinks Moore is underselling. The pragmatist hears it and thinks: Now I understand what this is for.

The translation gap between visionary experience and pragmatist need is visible in every dimension of the AI discourse. Consider the developer productivity claims that circulated through 2025 and 2026. Visionaries posted metrics: twenty-fold productivity multipliers, lines of code generated per hour, features shipped per sprint. These metrics are meaningful within the visionary's frame. They measure what the visionary cares about: speed, output, capability expansion.

But they measure almost nothing the pragmatist cares about. The pragmatist wants to know: What is the defect rate of AI-generated code compared to human-written code? What happens when the AI-generated system needs to be maintained by someone who did not build it and does not understand its architecture? What is the total cost of ownership, including the organizational change management required to integrate AI tools into existing workflows? What are the security implications of code that no human fully reviewed? What happens to institutional knowledge when the human struggle that built that knowledge is optimized away?

These are not obstructionist questions. They are the questions that responsible deployment requires, and they are the questions that visionaries, by temperament and position, are least equipped to answer. The visionary built the thing over a weekend. The pragmatist must live with it for years.

The consequence of the blind spot is a phenomenon Moore documented across every previous technology cycle: visionary backlash. When visionaries dominate the discourse, pragmatists feel alienated, misunderstood, and increasingly resistant. The more loudly the visionaries celebrate, the more firmly the pragmatists dig in. Not because the pragmatists are afraid of change — Moore is emphatic on this point — but because the visionaries are providing precisely the wrong kind of evidence. Every breathless tweet about building in a weekend is, from the pragmatist's perspective, evidence that the technology community does not understand institutional reality.

The backlash compounds. Pragmatists begin to associate AI not with its capabilities but with the social identity of its advocates. AI becomes "a visionary thing," which is to say, a thing for people who do not operate under the constraints that pragmatists face. The technology gets coded, culturally, as belonging to a tribe the pragmatist does not belong to and does not want to join. This tribal coding is one of the most powerful forces in technology adoption, and it operates almost entirely beneath conscious awareness.

Moore described a version of this dynamic in his February 2026 diginomica interview, pushing back against the "SaaSpocalypse" narrative with characteristic pragmatism: "SaaS contains almost a half a century of business acumen, and I'm sorry, but you're not going to just displace a half a century of experience, you're just not. Every company in the world runs on systems of record and has overlaid systems of engagement, so nobody's going to rip them out." This is Moore performing the translation function that visionaries cannot perform: taking the pragmatist's concerns seriously, validating the pragmatist's experience, and framing the technology not as a replacement for what exists but as an enhancement of it.

The distinction between replacement and enhancement is not semantic. It is the difference between a narrative the pragmatist will reject and a narrative the pragmatist will consider. Visionaries are comfortable with replacement narratives. They want the new to supersede the old. Pragmatists are terrified of replacement narratives, because they have decades of investment in the old systems — investment measured not just in dollars but in institutional knowledge, in trained personnel, in regulatory compliance frameworks, in the accumulated decisions that a John Mecke analysis of Moore's framework aptly called "business acumen baked into the software."

The whole product that crosses the chasm must include a narrative — a story that allows the pragmatist to adopt without feeling that adoption is an admission of obsolescence. In previous technology cycles, this narrative was relatively straightforward: the new tool makes your existing work easier. The PC made the secretary's typing faster. The spreadsheet made the accountant's calculations faster. The CRM made the salesperson's pipeline management faster. In each case, the tool enhanced an existing identity rather than threatening it.

AI complicates this narrative in a way that Moore's framework has not fully addressed, because AI threatens not just the efficiency of work but the identity of the worker. When the AI drafts the brief that the lawyer used to write, the lawyer's identity as "the person who writes briefs" is destabilized. When Claude Code generates the code that the developer used to write, the developer's identity as "the person who writes code" is destabilized. The whole product for AI must include not just training and integration and support but a new professional identity — a story the pragmatist can tell herself about who she is and what she contributes in a world where the machine does what she used to do.

This identity challenge is the reason the chasm in AI adoption is wider than in any previous technology cycle. The pragmatist must cross not just a product gap but a self-concept gap, and self-concept gaps do not close with better documentation.

Moore's framework provides one more insight that the visionary discourse has missed entirely: the importance of the bowling alley. In Moore's lifecycle, after crossing the chasm, adoption does not immediately explode into mass-market uptake. It proceeds through a series of niche segments — the bowling alley — where each segment's success knocks down the next. The first pin might be developer tools. The second might be customer service. The third might be legal research. Each segment requires its own whole product, its own reference customers, its own proof points. The bowling alley is slow. It is unglamorous. It is the opposite of the tornado that visionaries fantasize about.

But it is how technologies actually cross. Pin by pin. Segment by segment. Proof by proof.

The visionary's blind spot is the inability to see that this sequential, evidence-driven, pragmatist-paced process is not an obstacle to adoption but the mechanism of adoption. The visionary wants the tornado now. Moore's framework says the tornado comes only after the bowling alley has been run, and the bowling alley has been run only after the chasm has been crossed, and the chasm has been crossed only after the whole product has been built for one specific beachhead segment, and the whole product has been built only after someone has done the unglamorous work of understanding exactly what one specific pragmatist needs and building it for her.

That work is not visionary. It is operational. And it is the work that determines whether AI fulfills its potential or joins the long list of technologies that were powerful, elegant, and dead.

The visionary sees the sunrise from the top of the tower. The pragmatist needs stairs. Moore's entire career has been spent building stairs, and the AI industry has barely started construction.

Chapter 3: The Whole Product and the Beaver's Dam

In 1983, a Harvard Business School professor named Theodore Levitt drew four concentric circles on a whiteboard and changed how a generation of marketers thought about what they were selling. The innermost circle was the generic product — the thing itself, the core technology, the minimum viable offering. The next ring was the expected product — what the customer assumes comes with the purchase: basic documentation, reasonable reliability, some level of support. The third ring was the augmented product — the additional services, integrations, and capabilities that differentiate one offering from another. The outermost ring was the potential product — everything the offering could eventually become as the market matures.

Levitt's model was elegant. Moore made it operational. In *Crossing the Chasm*, Moore argued that the whole product — the complete set of products and services needed to fulfill the customer's compelling reason to buy — is the single most important strategic concept in technology marketing. Not because it is complicated, but because technology companies systematically fail to build it. They ship the generic product, the core technology, and assume the market will assemble the rest. The market will not. The visionary will. The pragmatist will not.

This distinction between what the visionary will tolerate and what the pragmatist requires is the operational definition of the chasm. And it explains, with uncomfortable precision, why AI in 2026 is simultaneously the most impressive technology most people have ever encountered and the least deployed at institutional scale.

The generic product for AI-assisted work is extraordinary. Claude Code, GPT-4, Gemini — these tools can generate functional software from natural language descriptions, draft legal briefs, compose business analyses, produce educational materials, and engage in sophisticated conversation across virtually any domain of human knowledge. The generic product is not the problem. The generic product is, by historical standards, miraculous.

But pragmatists do not buy miracles. They buy solutions. And a solution, in Moore's framework, is a whole product: the generic technology plus every additional component required to make it work in a specific institutional context, for a specific set of users, addressing a specific problem, within a specific set of constraints.

Consider what the whole product for AI in a mid-size law firm actually requires. The generic product is the language model. The expected product includes reliable output quality, data privacy guarantees sufficient for attorney-client privilege, integration with the firm's existing document management system, and citation accuracy high enough that a junior associate can rely on the output without re-checking every reference. The augmented product includes firm-specific training on the model — the ability to fine-tune outputs for the firm's house style, jurisdictional preferences, and practice area specializations. It includes change management consulting: helping partners who have practiced law for thirty years understand how their role shifts when the brief-writing they delegated to associates can now be delegated to a machine. It includes malpractice insurance guidance: what happens when an AI-drafted brief contains a hallucinated citation and the court sanctions the firm?

The potential product, the outermost ring, includes a complete rethinking of the firm's economics: if AI reduces the hours required to produce a brief from forty to four, and the firm bills by the hour, the firm's revenue model is broken. The potential product includes a new business model for legal services — value-based billing, subscription arrangements, productized legal offerings — that most firms have not begun to contemplate.

Not one of those whole product components is a technology problem. Every one of them is an institutional, organizational, cultural, or economic problem that the technology company cannot solve alone. And in Moore's framework, this is precisely why the pragmatist has not adopted. Not because the technology does not work. Because the whole product does not exist.

The same analysis applies to every sector where AI adoption is being discussed. In healthcare, the generic product can read imaging scans with accuracy matching or exceeding radiologists. The whole product requires FDA clearance, integration with electronic health record systems, clinician training, malpractice liability frameworks, patient consent protocols, and a reimbursement model that accounts for AI-assisted diagnosis. In education, the generic product can tutor students one-on-one with infinite patience. The whole product requires curriculum alignment, teacher training, assessment redesign, data privacy compliance with laws designed for a pre-AI world, and a pedagogical framework that uses AI to develop student capacity rather than bypass it.

Moore's framework reveals a structural asymmetry in AI adoption that most analysts have missed: the sectors where AI could generate the greatest social value are the sectors where the whole product gap is widest. Healthcare, education, public safety, social services — these are the domains Moore identified in his November 2024 blog post as "target-rich environments for applying AI," the domains with the most "trapped value" waiting to be released. They are also the domains with the heaviest regulatory burden, the most complex institutional structures, the most entrenched professional identities, and the least venture capital flowing toward whole product development.

The irony is structural: the bowling alley Moore describes, the sequential process by which AI will cross the chasm one segment at a time, is proceeding through the sectors where adoption is easiest rather than the sectors where adoption would generate the most value. Developer tools crossed first because the whole product gap was narrowest — developers are the one user population that can build their own whole product components. Consumer chatbots crossed next because consumer adoption has, as Moore noted, essentially "no chasm." Enterprise deployment for knowledge work is crossing now, segment by segment. But healthcare, education, and the public sector — the domains where trapped value is greatest — are last in line, because their whole product requirements are the most demanding.

Moore's "trapped value" concept deserves particular attention, because it reframes the entire conversation about where AI should be deployed. In a series of LinkedIn posts and his Valize article on Zone to Win with AI, Moore argued that the question organizations should ask is not "Where can we use AI?" but "Where is the most value trapped in our current processes?" Trapped value is Moore's term for the productivity and quality improvements that are theoretically possible but practically unreachable because existing processes were designed around human limitations. A customer service operation that routes every inquiry through a human agent, regardless of complexity, has enormous trapped value — the simple inquiries that consume agent time without requiring agent judgment. A healthcare system that requires a radiologist to review every scan, including the ninety percent that are clearly normal, has trapped value. An educational system that delivers the same lecture to thirty students with wildly different needs has trapped value.

AI releases trapped value. But releasing trapped value is not the same as building the whole product. The release requires the institutional changes, the workflow redesigns, the regulatory adaptations, and the professional identity reconstructions that constitute the augmented and potential rings of Levitt's model. The generic product unlocks the value. The whole product captures it.

Moore drew a direct parallel to what The Orange Pill describes as the beaver's dam — the structures that redirect the flow of capability toward constructive outcomes. The dam is not the river. The dam is everything humans build around the river to make its power useful rather than destructive. In Moore's vocabulary, the dam is the whole product. The river is the generic AI capability, flowing with increasing force and breadth. The dam is the institutional infrastructure that channels that force into specific, measurable, sustainable outcomes for specific communities of users.

The distinction illuminates why the Berkeley researchers' "AI Practice" framework — structured pauses, sequenced workflows, protected mentoring time — is not a nice-to-have but a whole product component. Without it, the generic product (Claude Code, the language model, the AI assistant) reaches the user without the surrounding structure that makes its use sustainable. The user works harder, takes on more, fills every gap with more prompts, and burns out — not because the technology failed but because the whole product was never built.

Moore pointed toward this risk in his 2026 diginomica interview when he concurred that "if you outsource, you cognitively decline," citing GPS dependence as an analogy. The observation is a whole product critique dressed in psychological language. The generic product (GPS navigation) works perfectly at the task level. But without the whole product component of intentional skill maintenance — without the dam that protects the human's navigational capacity from atrophying — the generic product degrades the user's capability over time. The same dynamic applies to every AI tool. The code that Claude writes works. The lawyer's brief that the model drafts is competent. The student's essay that the chatbot composes is articulate. But without the whole product components that preserve human development — the friction, the struggle, the attentional ecology — the generic product hollows out the user it was meant to serve.

This is Moore's framework at its most powerful: not as a marketing strategy but as a theory of institutional health. The whole product is not a business concept. It is an ecological concept. It is the recognition that a technology, deployed without the surrounding infrastructure that makes it sustainable, is not a tool but an extractive force — one that mines the user's existing capability without replenishing it.

The companies and institutions that build the whole product for AI — not just the technology but the training, the norms, the practices, the identity narratives, the assessment frameworks, the regulatory adaptations — will be the ones that cross the chasm successfully. The ones that ship the generic product and assume the market will assemble the rest will produce the pattern the Berkeley researchers documented: short-term productivity gains followed by burnout, cognitive decline, and a pragmatist backlash that sets adoption back by years.

Moore's career has been a sustained argument that the most important work in technology is not building the product. It is building everything around the product that makes the product useful. In the age of AI, this argument has never been more urgent, because the generic product has never been more powerful, and the gap between what the technology can do and what the institutional infrastructure is prepared to support has never been wider.

The dam is not optional. The dam is the product.

Chapter 4: The Bowling Alley — Where AI Actually Crosses

Moore's bowling alley is the most underappreciated phase in technology adoption, and the most strategically consequential. It is the period after a technology has crossed the chasm in its first beachhead segment but before it has achieved mass-market adoption. The metaphor is precise: the first pin falls, and if the angle is right, its fall knocks down the next pin, which knocks down the next, each segment's success creating the conditions for the adjacent segment's adoption. The sequence matters. The angle matters. The weight of each pin — the size and influence of each segment — matters. And the most common strategic error in the bowling alley is trying to knock down all the pins at once instead of letting the sequence do the work.

In the AI adoption landscape of 2026, the bowling alley is where the real story lives — not in the visionary demonstrations that dominate the discourse, and not in the tornado that the industry wishes it were already in, but in the sequential, segment-by-segment process by which AI is actually crossing from early adoption into the mainstream. The pins are falling in a specific order, and understanding that order is the difference between strategy and noise.

The first pin was developer tools. This is not coincidental. Developer tools represent the ideal beachhead segment in Moore's framework, for reasons that illuminate the entire adoption sequence. The users — software developers — are the one population that can evaluate the technology on its own terms, without the mediation of industry-specific whole product components. A developer testing GitHub Copilot or Claude Code can assess code quality directly. She does not need a regulatory framework, a change management consultant, or a new billing model. She needs the tool to produce functional code, and she can verify that it does within minutes. The whole product gap is minimal because the user population is uniquely equipped to bridge it themselves.

The developer tools beachhead also benefits from what Moore calls the bowling alley's self-referencing dynamic. Developers talk to developers. The reference customer for a developer tool is another developer, and the social networks through which developers share information — GitHub, Stack Overflow, X, internal Slack channels — are dense, fast, and culturally predisposed toward experimentation. A positive reference propagates through the developer community at the speed of a viral post, which is exactly what happened with Claude Code's growth curve: $2.5 billion in annualized run-rate revenue within months of launch, driven almost entirely by developer-to-developer reference.

But the developer tools pin, however dramatic its fall, does not automatically knock down the next pin. This is the bowling alley's central lesson, and the one the AI industry is most in danger of forgetting.

The second pin — the segment that fell next, and is still falling — is consumer conversational AI. ChatGPT's two-month sprint to one hundred million users remains the most dramatic adoption curve in the history of consumer technology. Moore explained this in his 2024 InnoLead interview with characteristic directness: "There's no chasm [with generative AI]. It doesn't cost anything to adopt it." Consumer adoption faces a trivially small whole product gap: the user needs a browser and curiosity. No integration, no training, no regulatory compliance, no organizational change management. The generic product is the whole product.

But consumer adoption, Moore's framework suggests, is a misleading indicator of institutional adoption. The fact that a hundred million people use ChatGPT for personal queries tells the enterprise pragmatist almost nothing about whether AI is ready for mission-critical deployment. Consumer and enterprise adoption operate on different lifecycle curves with different chasm dynamics. The confusion of consumer ubiquity with enterprise readiness is precisely the kind of strategic error Moore's framework was designed to prevent.

The third pin, the one currently wobbling, is enterprise knowledge work — and here the bowling alley dynamics become genuinely complex. Moore himself identified the beachhead segments within enterprise AI in his Forbes interview and subsequent writings: customer service (where, as he noted, "Gen AI already does a better job answering level one and perhaps level two customer support questions than people do"), code generation within established development organizations, legal research, financial document analysis, and marketing content production.

Each of these segments shares a set of characteristics that make it ripe for AI adoption in Moore's framework. The work is high-volume and repetitive enough that AI's throughput advantage is decisive. The quality bar is assessable — the output can be checked against existing standards without requiring fundamental redefinition of what "good" means. The regulatory environment, while present, is less constraining than in healthcare or education. And crucially, the whole product gap is narrow enough to be bridged by the technology vendor in partnership with systems integrators, without requiring wholesale institutional transformation.

Customer service is the most instructive case, because it illustrates both the power and the limits of the bowling alley sequence. AI-powered customer service crossed the chasm because the trapped value was enormous and the whole product was buildable. Call centers operate on metrics — average handle time, first-call resolution, customer satisfaction scores — that make the value of AI deployment immediately measurable. The integration requirements, while nontrivial, are well-understood: the AI plugs into existing ticketing systems, CRM platforms, and knowledge bases. The regulatory environment is manageable. And the reference customers are accumulating fast enough that pragmatist organizations can now evaluate peer deployments in their own industry.

But customer service AI crossed the chasm for level one and level two inquiries, not for the complex, judgment-intensive interactions that constitute the most valuable customer service work. The distinction matters enormously, because it maps directly onto Moore's observation about trapped value: "I don't think in either case, there's enough trapped value in the process to cause the complete takeover of the thing." The bowling alley proceeds segment by subsegment. AI handles the simple inquiries. Humans handle the complex ones. The boundary between "simple" and "complex" shifts over time, but it does not disappear, and each shift requires its own whole product adaptation.

Legal research is the next pin most likely to fall, and its dynamics illuminate a feature of the bowling alley that Moore emphasizes but that the AI discourse has largely ignored: the role of catastrophic reference failure. In the legal domain, the catastrophic reference failure occurred publicly. In 2023, a New York attorney submitted a brief containing AI-hallucinated case citations — citations to cases that did not exist, complete with fabricated holdings and docket numbers. The court sanctioned the attorney. The incident propagated through the legal profession at the speed of professional shame, and it set back AI adoption in legal practice by at least a year.

In Moore's framework, a catastrophic reference failure in the bowling alley is devastating because pragmatists weight negative references more heavily than positive ones. A single, vivid, widely publicized failure — especially one that confirms the pragmatist's pre-existing fears about the technology's reliability — can undo dozens of quiet successes. The legal profession's response to the hallucination incident was not "we need better tools" but "we need to be more cautious," and caution, in the pragmatist's vocabulary, means delay.

The recovery is now underway. Legal AI platforms have invested heavily in the whole product components that the hallucination incident revealed were missing: citation verification systems, confidence scoring, human-in-the-loop review protocols, and malpractice insurance frameworks. These are not generic product improvements. They are whole product improvements — the augmented ring of Levitt's model, built specifically for the pragmatist lawyer's needs. As these components mature and as reference customers accumulate, the legal research pin will fall. But it will fall on the bowling alley's timeline, not the tornado's.

The pins that have not yet fallen — and that represent the most consequential crossings — are the ones Moore identified in his "altruistic wish" for AI's future: healthcare, education, public safety, and social services. These are the domains with the greatest trapped value and the widest whole product gaps. The bowling alley sequence suggests they will cross last, not because the technology is less capable in these domains but because the institutional, regulatory, and identity barriers are highest.

Healthcare illustrates the point. AI diagnostic imaging has demonstrated radiologist-level accuracy in specific, narrow applications — detecting certain cancers, identifying fractures, screening for diabetic retinopathy. The generic product works. But the whole product for AI in healthcare requires FDA clearance (which takes years, not months), integration with electronic health record systems that were not designed for AI inputs, clinical workflow redesign, malpractice liability frameworks that address AI-assisted diagnosis, patient consent protocols, and — perhaps most critically — a narrative that allows clinicians to adopt AI without feeling that their diagnostic expertise has been devalued. This last requirement connects directly to the identity chasm discussed elsewhere, and it is the whole product component that no technology vendor can build alone.

Education faces a parallel challenge with a different texture. The generic product — AI tutoring, AI-generated lesson plans, AI-assisted assessment — is impressive. But the whole product for AI in education requires alignment with curriculum standards that vary by state, district, and sometimes school. It requires teacher training that most school districts cannot afford and do not know how to procure. It requires a pedagogical framework that distinguishes between AI as a tool for developing student capacity and AI as a tool for bypassing student effort — a distinction that is philosophically contentious and practically crucial. It requires data privacy compliance with laws written before the technology existed. And it requires a cultural narrative that answers the twelve-year-old's question — "Mom, what am I for?" — in a way that is honest, reassuring, and not condescending.

None of these whole product components can be built by a technology company in isolation. They require collaboration between technologists, domain experts, regulators, educators, clinicians, and the communities that will live with the consequences of deployment. Moore's bowling alley is not just a sequence of market segments. It is a sequence of collaborative whole product development efforts, each more complex and institutionally demanding than the last.

The bowling alley also reveals something about timing that the AI discourse has systematically misunderstood. Moore's framework predicts that the bowling alley phase lasts longer than anyone expects. The transition from beachhead to tornado is not continuous. There are stalls, false starts, and segments that look ready to fall but do not because one critical whole product component is missing. The AI industry, primed by consumer adoption curves that measured adoption in weeks, is psychologically unprepared for a bowling alley that might last years.

Moore's counsel, delivered with the equanimity of someone who has watched this pattern repeat across five technology cycles, is consistent: "You don't have to evangelize the technology. You just have to take their problem off the table." The bowling alley is not evangelized into motion. It is not accelerated by enthusiasm. It is advanced by the patient, segment-specific, whole-product-building work of understanding what each pragmatist population actually needs and building it for them, one pin at a time.

The pins will fall. The sequence will proceed. The question is not whether but how long, and what the cost of the transition will be for the people standing between the pins that have fallen and the ones that have not. Moore's framework does not offer comfort. It offers clarity. And clarity, in the bowling alley, is the only strategic advantage that matters.

Chapter 5: The Tornado and the Death Cross

Every technology cycle has a moment when the bowling alley gives way to something faster, louder, and far less forgiving. Moore calls it the tornado — the phase of hypergrowth where pragmatist demand, having accumulated quietly behind the chasm, releases all at once. In the tornado, the rules invert. The bowling alley rewards depth: serve one segment completely, build the whole product, accumulate references, advance to the next. The tornado rewards breadth: ship to everyone, capture distribution, establish the de facto standard. The company that tries to maintain bowling alley discipline in the tornado loses. The company that tries to run tornado tactics in the bowling alley loses. The strategic error, in both cases, is the same: applying the right strategy at the wrong time.

The Software Death Cross that The Orange Pill describes — the trillion-dollar SaaS valuation collapse of early 2026 — is, in Moore's framework, a lifecycle event masquerading as a financial crisis. The market was not panicking. The market was recognizing, with the brutal efficiency that markets sometimes achieve, that two technology categories were at different lifecycle positions and that the valuation multiples appropriate to one were being incorrectly applied to the other.

AI developer tools entered the tornado in late 2025. The evidence was unmistakable: Claude Code's annualized run-rate crossed $2.5 billion within months. GitHub Copilot's adoption curves bent vertical. The percentage of AI-assisted code on major platforms climbed past forty percent and kept climbing. These are tornado metrics — demand exceeding supply, adoption outrunning the industry's ability to support it, market share being determined not by who had the best product but by who shipped fastest and captured the most distribution.

Traditional SaaS platforms, meanwhile, were in a different lifecycle phase entirely. Moore would locate most of them on Main Street — the mature phase where the technology is fully commoditized, growth has slowed to single digits, and competitive differentiation has narrowed to incremental feature additions and ecosystem lock-in. Main Street is not a bad place to be. Main Street companies generate enormous cash flows. But Main Street companies do not command tornado valuation multiples, and the market's repricing of SaaS was, at its core, the correction of a mismatch between lifecycle position and investor expectations.

The repricing was painful. Workday fell thirty-five percent. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. When Anthropic published a blog post about Claude's ability to modernize COBOL, IBM suffered its largest single-day decline in more than a quarter century. The numbers were dramatic, and the narrative that accompanied them — "AI is eating software" — was dramatic too.

But Moore's framework suggests the narrative was wrong. Not because the repricing was unjustified, but because the narrative misidentified what was being repriced. The market was not declaring that software is worthless. The market was declaring that code-as-product is worthless — that the act of writing software, which had been the primary value driver for the SaaS industry for two decades, was being commoditized by AI at a pace that made previous commoditization cycles look glacial.

The distinction between code and ecosystem is the key that Moore's framework turns. When Moore pushed back against the SaaSpocalypse narrative in his February 2026 diginomica interview, he was making precisely this point: "SaaS contains almost a half a century of business acumen, and I'm sorry, but you're not going to just displace a half a century of experience, you're just not." The half century of business acumen he refers to is not code. It is the accumulated institutional knowledge embedded in the platform — the workflow assumptions, the integration architectures, the compliance frameworks, the data models, the trained user populations, the partner ecosystems. This is the whole product, built over decades, and it does not disappear when the cost of writing new code approaches zero.

Moore's concept of core versus context, developed in Living on the Fault Line, illuminates the dynamic further. Core is the work that differentiates a company — the capability that customers choose the company for. Context is everything else — the operational infrastructure that qualifies the company to compete but does not distinguish it from competitors. Moore's strategic prescription is relentless: invest in core, outsource or automate context, and never confuse the two.

For SaaS companies, AI has reclassified what counts as core and what counts as context. Code was core when writing code was hard. When writing code becomes cheap, code becomes context. The core migrates upward — to the data layer, the ecosystem, the institutional trust, the domain expertise embedded in the platform's design decisions. The SaaS companies that survive the Death Cross will be the ones whose core was always above the code layer. The ones that die will be the ones whose core was the code itself — thin applications that solved singular problems with competent but replaceable implementations.

Moore's tornado framework adds a second layer of analysis: what happens inside the tornado itself. In Inside the Tornado, Moore identified three strategic positions — the gorilla, the chimp, and the monkey — defined by market share during the hypergrowth phase. The gorilla establishes the de facto standard. Chimps compete directly with the gorilla but hold smaller market share. Monkeys differentiate into niches the gorilla does not serve.

In the AI developer tools tornado of 2025-2026, the gorilla question was actively being contested. Anthropic's Claude, OpenAI's GPT series, and Google's Gemini were each competing for the position of de facto standard, with the outcome far from determined. Moore's tornado analysis predicts that this contest will resolve through a dynamic he calls "just ship" — the winner will be determined not by who has the best model in absolute terms but by who achieves the widest distribution, the deepest integration into existing workflows, and the strongest lock-in through accumulated context and conversation history.

The tornado dynamics also explain a phenomenon that puzzled many observers: why SaaS valuations fell even for companies whose AI strategies were credible. In Moore's framework, the tornado reprices the entire category, not just the laggards. When a new technology enters the tornado, the valuation premium shifts from the incumbent category to the insurgent one, regardless of individual company quality. This is not rational in the narrow sense — a strong SaaS company with a credible AI strategy should not be repriced at the same multiple as a weak one — but it is predictable from lifecycle dynamics. The market is not evaluating individual companies. It is reallocating capital from one lifecycle position to another.

Moore's framework also carries an important warning about what follows the tornado. Every tornado ends. The technology that was scarce becomes abundant. The growth that was exponential becomes linear. The market that was supply-constrained becomes demand-constrained. And Main Street arrives — the phase where the technology is commoditized, differentiation is marginal, and value migrates entirely to the ecosystem layer.

This trajectory, projected forward, suggests that AI itself — the capability to generate code, draft briefs, compose analyses, produce content — will eventually reach Main Street. When it does, the capability will be ubiquitous and undifferentiated, available to everyone at near-zero marginal cost. The competitive advantage will no longer reside in having AI. It will reside in what is done with it — the judgment, the taste, the domain expertise, the institutional knowledge that directs AI toward valuable outcomes rather than trivial ones.

Moore's observation about AI as "an advisory technology" rather than "automation" becomes most relevant here. On Main Street, AI's advisory function is commoditized. Everyone has access to the same quality of advice. The differentiator becomes the quality of the questions asked, the soundness of the judgment applied to the AI's output, and the depth of domain understanding that allows a human to distinguish between plausible and true, between competent and excellent, between what the model can generate and what deserves to exist.

The Death Cross, read through Moore's complete lifecycle, is not the end of software. It is the beginning of a transition from a world where writing software was a competitive advantage to a world where directing software — deciding what it should do, for whom, and why — is the only advantage that remains. The SaaS companies that understood this before the Death Cross arrived are repositioning. The ones that did not understand it are being repriced. And the AI companies that are currently in the tornado will, eventually, arrive at their own Main Street, where the same question will apply to them: when the capability is everywhere, what differentiates the company that wields it?

Moore's answer, consistent across thirty years of framework development, has not changed: the whole product. Not the technology. The ecosystem that surrounds it, the institutional infrastructure that channels it, the domain expertise that directs it. The dam, not the river.

The Death Cross is a lifecycle event. The tornado is a phase. Main Street is a destination. And the question that persists across all three is the question Moore has been asking since 1991: not whether the technology is powerful, but whether anyone has built what the pragmatist needs to make that power useful.

The trillion-dollar repricing of February 2026 was the market's way of asking that question at volume. The answer is still being assembled, pin by pin, segment by segment, in the bowling alley that most observers are too impatient to watch.

Chapter 6: Zone Management in the Age of AI

The hardest problem in technology leadership is not choosing between the present and the future. It is running both at the same time without letting either destroy the other.

Geoffrey Moore formalized this problem in Zone to Win, published in 2015, drawing on decades of consulting with companies that faced existential disruption while still needing to make quarterly numbers. The framework divides organizational activity into four zones. The performance zone runs the current business — the revenue-generating products, the existing customers, the established go-to-market machinery. The productivity zone optimizes the current business — the shared services, the operational infrastructure, the cost-reduction initiatives that keep the performance zone efficient. The incubation zone builds the future business — the early-stage bets, the experimental products, the market explorations that have no revenue today but might constitute the company's core in five years. The transformation zone scales a future business into the new core — the rare, high-stakes moment when a company redirects its entire organizational weight behind an incubation zone bet that has demonstrated enough traction to justify the commitment.

The genius of the framework is not in the zones themselves but in the discipline of separation. Moore argues that the zones must be managed independently, with different metrics, different timelines, different leadership styles, and different definitions of success. The performance zone is measured on quarterly revenue and margin. The incubation zone is measured on learning velocity and option value. The moment an organization applies performance zone metrics to an incubation zone initiative — demanding revenue too early, insisting on efficiency before product-market fit, evaluating experimental work by the standards of established work — the initiative dies. Not because it lacked potential but because the wrong measurement system was applied to the wrong phase of development.

AI has made zone management simultaneously more important and more difficult than at any previous point in Moore's career. More important because the disruption AI represents is faster and more fundamental than any previous technology wave, which means the cost of failing to incubate the future business is not gradual decline but rapid obsolescence. More difficult because AI's most visible effect — the dramatic productivity gains in the performance zone — creates a seductive argument against incubation zone investment.

This is the trap, and it is the trap that The Orange Pill documents from inside the room where it springs.

When a team of twenty engineers, augmented by Claude Code, produces output that previously required a hundred, the performance zone looks spectacular. Revenue per employee climbs. Margins expand. The quarterly numbers beat expectations. Every dashboard in the organization confirms that the current strategy is working. The board is delighted. The investors are delighted. The CEO, who must now justify redirecting resources from a performing business to an unproven experiment, faces the worst possible form of organizational resistance: the resistance of success.

Moore described this dynamic directly in his Valize article on Zone to Win with AI: "The goal here is to use AI to increase the competitiveness of your established lines of business, either through materially differentiating your products or dramatically reengineering your processes." This is performance zone strategy, and Moore endorses it. But he then adds the qualification that makes the framework operational: "Your job is to make sure you use it to target those opportunities where there is the most trapped value to release... Good as it is, AI is still a work in progress, so you will likely be taking a human-in-the-loop approach for the foreseeable future."

The qualification matters because it reveals the zone management tension. Releasing trapped value in the performance zone is necessary but not sufficient. It is the defensive play — the move that prevents competitors from gaining an efficiency advantage. But it is not the offensive play. The offensive play lives in the incubation zone: the new business models, new value propositions, and new market positions that AI makes possible but that require strategic vision to identify and organizational discipline to pursue.

The organizational behavior that Moore's framework predicts — and that the evidence from 2025 and 2026 confirms — is that most companies will over-invest in performance zone AI and under-invest in incubation zone AI. The reasons are structural, not personal. The performance zone has established metrics, established customers, and established revenue. Performance zone AI investments produce measurable returns on measurable timelines. Incubation zone AI investments produce uncertain returns on uncertain timelines. In any resource allocation conversation where a performance zone initiative competes with an incubation zone initiative, the performance zone wins — not because it is more important but because it is more legible.

Moore's prescribed discipline is to protect the incubation zone from performance zone logic by giving it separate funding, separate leadership, and separate success metrics. In practice, this requires the CEO to make an argument that sounds, to the board and the market, like waste: "We are going to spend money on something that will not generate revenue for two to three years, even though we have profitable opportunities that could absorb that capital today." This argument is difficult enough in normal times. In the age of AI, when the performance zone is producing unprecedented returns, the argument becomes nearly impossible.

Moore's Escape Velocity provides a complementary lens: the hierarchy of powers framework, which evaluates companies on five dimensions — category power, company power, market power, offer power, and execution power. In the AI context, the hierarchy reveals which companies are positioned to succeed in the incubation zone and which are trapped in performance zone optimization.

Category power — the advantage of being in a growing market — currently favors AI-native companies over incumbents. The AI category is expanding faster than any established software category, and capital flows toward category power with the reliability of water flowing downhill. Company power — the advantage of brand and market position — favors incumbents. Salesforce, Microsoft, and Adobe have brand recognition and customer relationships that AI-native companies cannot replicate. Market power — the advantage of segment leadership — is being contested in real time, segment by segment, in the bowling alley. Offer power — the advantage of product differentiation — is fleeting in an environment where model capabilities converge every six months. Execution power — the advantage of operational excellence — favors organizations that have mastered zone management, because execution power in the age of AI means the ability to deploy AI in the performance zone while simultaneously building AI-native businesses in the incubation zone.

The thirty-day sprint to build Napster Station, described in The Orange Pill, is an incubation zone story wearing performance zone clothes. The product was new — an AI-powered concierge kiosk that did not exist eight weeks before its debut. The technology stack was experimental — conversational AI handling live interactions with hundreds of strangers across languages, with AI-generated music delivered in real time. The timeline was impossible by performance zone standards. But the organizational discipline that made it possible was zone management in action: the team was given a separate mandate (build something new), separate metrics (does it work at CES?), and a protected timeline (thirty days, no interference from performance zone priorities).

Moore would observe that the Station sprint succeeded precisely because it was managed as an incubation zone initiative rather than a performance zone one. Had it been subjected to performance zone scrutiny — What is the revenue forecast? What is the unit economics model? How does this integrate with existing product lines? — it would not have been approved. The incubation zone does not answer those questions. It answers different questions: Is this technically feasible? Does this open a new market? Does this demonstrate a capability that could become core?

The zone management challenge becomes existential when the disruption is large enough to require a transformation zone intervention — the moment when the incubation zone bet becomes large enough to justify redirecting the entire organization's weight behind it. Moore wrote in the Valize article: "The existential threat posed by Generative AI and its successors is still hard to predict, but two industries that have already sensed it are media entertainment and publishing. This is a job for the Transformation Zone."

The transformation zone is the most painful of the four, because it requires the performance zone to subordinate itself to the incubation zone's bet. Revenue-generating products are deprioritized. Established teams are redirected. The organizational identity shifts from "we are a company that does X" to "we are a company that is becoming Y." This shift is wrenching under any circumstances. In the age of AI, it is wrenching on a compressed timeline, because the window between "AI is an incubation zone experiment" and "AI is an existential threat to the performance zone" is shorter than any previous technology cycle.

Moore has been characteristically direct about the discipline required. In a 2026 post on X, he wrote: "AI no longer gets to be an experiment. In 2026, results are the bar." The statement is a zone management directive: the incubation zone experiments of 2024 and 2025 must now demonstrate enough traction to justify transformation zone commitments in 2026 and 2027. Companies that are still "experimenting" with AI — running pilots without deployment mandates, testing tools without workflow integration, exploring possibilities without committing resources — are behind the lifecycle curve. The bowling alley is advancing. The tornado is forming in adjacent segments. The time for experimentation has passed.

The zone management discipline that Moore prescribes is, in essence, the organizational equivalent of holding two truths simultaneously — the truth that the current business is valuable and must be optimized, and the truth that the current business is insufficient and must be supplemented by something fundamentally new. The first truth generates revenue. The second generates survival. The hardest thing any leader does is allocate attention and resources between the two without letting the urgency of the first consume the importance of the second.

In the age of AI, that allocation decision is not quarterly. It is daily. And the organizations that get it right will be the ones that built zones before they needed them — that protected the incubation function when the performance zone was performing, that developed the muscle for transformation before transformation became mandatory, that treated Moore's framework not as a theory but as operational infrastructure for a world where the distance between the present and the future collapsed to the width of a quarterly earnings call.

Chapter 7: The Identity Chasm

Geoffrey Moore's framework was built to explain why pragmatists hesitate. The whole product is incomplete. The reference customers are insufficient. The risk is unquantified. These are product problems and market problems, and Moore's toolkit is exquisitely designed to solve them. Build the whole product. Accumulate references. Quantify the value. Cross the chasm.

But the AI transition has surfaced a dimension of the chasm that Moore's original framework addressed only obliquely — a dimension that is neither product nor market but psychological, and that may prove to be the widest gap any technology has ever had to cross.

The identity chasm is the gap between who a professional believes herself to be and who she must become in order to adopt a technology that redefines the value of her expertise.

Every previous technology transition required pragmatists to learn new tools. The personal computer required the secretary to learn word processing. The spreadsheet required the accountant to learn a new interface for familiar calculations. The CRM required the salesperson to learn a new system for managing relationships. In each case, the professional's identity remained intact. The secretary was still a secretary, now with a better tool. The accountant was still an accountant, now with faster calculations. The work changed. The worker's self-concept did not.

AI is different. When the machine can write the code, draft the brief, compose the analysis, generate the design — when the machine performs, competently and at scale, the specific activities that constituted the professional's identity — the professional faces not a product adoption decision but an existential one. To adopt the tool is to acknowledge that the skill set that defined a career, that justified years of training, that earned professional respect and economic reward, is now available at commodity prices.

This is not a rational calculation about productivity gains. It is a confrontation with the question of what one is worth — not in economic terms, though the economic dimension is real, but in the deeper sense of professional selfhood. The senior lawyer who spent twenty years developing the ability to construct a precise legal argument, the software architect who spent a decade building intuition about system design, the radiologist who trained for twelve years to read imaging scans with expert accuracy — each of these people built a professional identity on the foundation of a specific, hard-won capability. AI does not merely threaten their productivity. It threatens the story they tell themselves about who they are and why they matter.

Moore acknowledged this dimension in his "Making Peace with Generative AI" essay, where he framed generative AI as "an advisory technology" that "is not eliminating the need for human beings to make judgment calls" but rather "accelerating the preparation for so doing." The framing is strategic. By positioning AI as advisory rather than autonomous, Moore offers the pragmatist an identity bridge — a narrative in which the professional's judgment remains central, and the AI merely handles the preparatory work that consumed time without requiring the full exercise of professional expertise. The lawyer still decides. The AI drafts. The architect still designs. The AI implements. The doctor still diagnoses. The AI screens.

This narrative is partially true, and it is the best available narrative for crossing the identity chasm in the short term. But it is also unstable, because the boundary between "preparation" and "judgment" is not fixed. Each generation of AI capability pushes that boundary further into territory the professional previously considered core to her identity. The lawyer who initially used AI to draft routine correspondence now finds it drafting complex motions. The developer who initially used AI for boilerplate now finds it making architectural decisions. The boundary migrates, and with each migration, the identity narrative requires revision.

Moore's bowling alley framework provides a structural lens for understanding how the identity chasm plays out across different professional segments. The segments that crossed first — developers, consumer users — are segments where professional identity is relatively fluid. Developer culture celebrates tool adoption. The developer who masters a new tool is not diminished but elevated. The identity narrative — "I am the person who builds things" — is compatible with AI adoption, because AI expands what can be built. The developer's identity is defined by output, and AI increases output.

The segments that are crossing now — legal, financial services, marketing — are segments where professional identity is more rigid. The lawyer's identity is defined not just by output (briefs, contracts, opinions) but by the process of producing that output (research, analysis, argumentation). The process is where the expertise lives, and the expertise is where the identity lives. When AI handles the process, the identity is destabilized even if the output improves.

The segments that have not yet crossed — healthcare, education, the skilled trades — are segments where professional identity is most deeply entrenched. A surgeon's identity is inseparable from the physical act of surgery. A teacher's identity is inseparable from the relational act of teaching. These identities are not just professional but vocational — they carry moral weight, cultural prestige, and a sense of calling that transcends economic calculation. The identity chasm in these segments is not a gap to be bridged by better products or more references. It is a transformation to be navigated with care, humility, and an understanding that what is being asked of these professionals is not merely to adopt a tool but to reconceive the meaning of their life's work.

Moore's whole product model must be extended to accommodate this reality. The whole product for crossing the identity chasm includes not just technology, integration, and support but what might be called a whole narrative — a coherent, credible, emotionally resonant story that allows the professional to adopt AI without experiencing adoption as surrender.

The whole narrative must satisfy three conditions. First, it must be true. Professionals are sophisticated evaluators of narrative, and a narrative that overpromises or obscures genuine risks will be rejected with the speed and finality that pragmatists reserve for products that insult their intelligence. The narrative that says "AI will make you more productive and nothing will change" fails the truth test, because things will change, and the pragmatist knows it. Second, the narrative must acknowledge loss. The skills that are being commoditized were genuinely valuable. The years of training were genuinely formative. The expertise was genuinely hard to acquire. A narrative that dismisses these investments as obsolete — "Learn to prompt or get left behind" — is not just tactless but strategically counterproductive, because it confirms the pragmatist's fear that the technology community does not understand or respect what is being lost. Third, the narrative must point toward a new identity that is more valuable, not less, than the old one. The lawyer who uses AI is not a diminished lawyer. She is a lawyer who can now focus on the judgment, strategy, and client relationship work that was always the highest expression of legal practice but was buried under hours of research and drafting. The developer who uses AI is not a diminished developer. He is an architect who can now direct computational capability across a breadth of problems that implementation labor previously precluded.

This reframing — from identity-as-skill to identity-as-judgment — is the core of the whole narrative, and it echoes across the broader AI discourse. Moore's formulation in his LinkedIn posts is precise: "Judgment carries accountability. It requires context, interpretation, and a willingness to stand behind a decision." The implication is that judgment is more valuable than execution, more difficult than execution, and more uniquely human than execution. The identity that is being offered is not smaller but larger — and the challenge is making that offer credible to people who are experiencing the transition not as an expansion but as a loss.

Moore's framework suggests that the companies and institutions that cross the identity chasm first will be the ones that invest in the whole narrative as seriously as they invest in the whole product. This means training programs that are not just technical ("how to use AI") but professional ("how to redefine your role in an AI-augmented practice"). It means organizational communication that names the loss honestly before pointing toward the gain. It means leadership that models the new identity — executives who are visibly using AI while visibly exercising judgment, demonstrating through their own practice that the tool enhances rather than replaces the human contribution.

The identity chasm is wider than any gap Moore's original framework was designed to cross. But the mechanism is the same: understand what the pragmatist needs, build it, and let the results speak. The pragmatist does not need reassurance. The pragmatist needs a credible story about who she becomes on the other side — and proof, in the form of peers who have made the crossing and emerged with their professional selfhood intact, that the story is true.

Moore's career has been spent building bridges between visionary enthusiasm and pragmatist need. The identity chasm is the widest bridge he has ever been asked to build. And the materials are not products or integrations or support contracts. They are narratives — stories that honor what was, acknowledge what is changing, and make visible what could be. The technology has never been the hard part. The hard part has always been the humans, and in the age of AI, the humans have never been harder.

Chapter 8: The Laggard's Wisdom

In Moore's Technology Adoption Lifecycle, the laggard is the last to arrive and the first to be dismissed. Laggards adopt only when the technology has become so thoroughly embedded in institutional infrastructure that refusing it costs more than accepting it — when the old system is no longer supported, when the forms require the new format, when the world has moved on so completely that holding out has become a form of self-punishment rather than a form of principle.

The standard reading of the laggard is strategic irrelevance. Innovators matter because they find the frontier. Early adopters matter because they demonstrate the vision. The early majority matters because it constitutes the market. The late majority matters because it represents the long tail of revenue. The laggard, in this reading, matters only as a demographic inevitability — the final segment to be collected, not consulted.

Moore himself has not been immune to this reading. His strategic focus, across six books, is overwhelmingly on the transition from early adopters to the early majority — the chasm crossing that determines whether a technology lives or dies. The laggard appears in his framework as an afterthought, the segment that adopts when it has no choice, whose concerns are too late to influence strategy and too conservative to inform design.

But the AI transition has revealed something about laggard concerns that Moore's framework has historically underweighted, and that the broader technology industry has systematically ignored: laggards are often right about what is being lost, even when they are wrong about what should be done about it.

Byung-Chul Han, the philosopher whose critique of smoothness and auto-exploitation occupies a substantial portion of The Orange Pill, is a laggard in Moore's taxonomy. He does not own a smartphone. He gardens in Berlin. He listens to analog music. He writes by hand. His refusal is total, principled, and — by the standards of Moore's framework — strategically irrelevant. Han will never adopt AI. His concerns will never influence product design. His critique will never appear in a customer requirements document.

And yet his diagnosis is precise enough to make every builder uncomfortable. The observation that friction produces depth. The insight that smoothness, when it becomes the dominant aesthetic of a culture, hollows out the capacity for sustained attention, for genuine understanding, for the kind of knowledge that can only be built through struggle. The warning that auto-exploitation — the internalized imperative to optimize, to produce, to convert every moment into output — is not a failure of individual discipline but a structural feature of a system that has made resistance invisible by disguising compulsion as freedom.

These are laggard concerns. They are also, increasingly, empirical findings. The Berkeley researchers documented work intensification, task seepage, and attention fragmentation — precisely the phenomena Han's philosophy predicted. The GPS analogy that Moore himself cited in his 2026 interview — "if you outsource, you cognitively decline" — is a laggard concern expressed in pragmatist language. The senior engineer who worries that AI-assisted developers are building without understanding, that the geological layers of knowledge deposited by years of debugging are not being laid down, that the codebase is growing faster than the team's ability to comprehend it — this is a laggard concern with immediate operational consequences.

Moore's framework treats laggard resistance as a problem to be managed. The strategic prescription is patience: wait for institutional infrastructure to force adoption, and the laggards will follow. But this prescription assumes that laggard concerns are primarily about comfort — that laggards resist because change is uncomfortable, and comfort can be addressed by making the transition as painless as possible.

The AI transition suggests a different reading. Laggard concerns, at their best, are not about comfort. They are about cost — specifically, about costs that the earlier segments of the adoption lifecycle are structurally unable to see. The innovator cannot see the cost because the innovator is too excited by possibility. The early adopter cannot see it because the early adopter is too invested in the vision. The early majority cannot see it because the early majority is too focused on pragmatic implementation. The late majority cannot see it because the late majority is too busy catching up.

The laggard, standing outside the adoption curve, looking back at the entire trajectory, can sometimes see what none of the other segments can: the aggregate cost of the transition, the things that were lost in the enthusiasm and the rush, the capabilities that atrophied, the practices that disappeared, the knowledge that was not passed on because the friction that produced it was optimized away.

This does not mean the laggard is right about what to do. Han's prescription — resist the tools, return to the garden, choose the analog over the digital — is no more viable as a civilizational strategy than the Luddites' prescription of breaking machines. The river flows. The tools exist. The capability is in the world. No amount of principled refusal will remove it.

But the laggard's diagnosis — the precise identification of what is being lost — is too valuable to dismiss. Moore's framework needs what might be called a feedback loop from the laggard segment, a mechanism by which the concerns of the last adopters inform the design of the whole product for the next technology cycle.

In practice, this means incorporating laggard concerns into the augmented product ring — the ring of the whole product model where differentiation lives. The AI platform that includes structured friction — mandatory human review stages, enforced pauses for reflection, interfaces designed to maintain the user's understanding of what the AI is doing and why — is building a whole product that addresses laggard concerns without requiring laggard behavior. The teacher who uses AI in the classroom but designs assignments around the questions rather than the answers is incorporating laggard wisdom into pragmatist practice. The organization that builds AI Practice frameworks — structured pauses, sequenced workflows, protected mentoring time — is translating laggard insight into institutional infrastructure.

Moore's "trapped value" concept applies in a direction he may not have intended. The laggard is sitting on trapped value — not the economic trapped value of inefficient processes, but the epistemological trapped value of knowledge about what matters, what works, what holds up over time, what cannot be replaced by speed. The master craftsman whose guild was dissolved by the power loom possessed knowledge about materials, about quality, about the relationship between maker and made, that the factory system needed but did not know how to extract. The senior developer whose debugging intuition was built through years of friction possesses knowledge about system behavior, about failure modes, about the gap between code that compiles and code that endures, that the AI-augmented team needs but does not know how to preserve.

The strategic question is not whether to listen to laggards. It is how to extract and preserve the knowledge that laggard practice embodies, how to build it into the institutional infrastructure that surrounds the new tools, and how to do this before the knowledge disappears along with the practitioners who carry it.

Moore's framework provides the mechanism: the whole product. But the whole product must be expanded to include components that no previous technology cycle has required — components that preserve the human capacities the technology threatens to erode. Structured friction. Attentional ecology. Protected spaces for the slow, effortful, apparently inefficient cognitive work that produces depth. These are not features. They are whole product components, as essential to sustainable AI adoption as integration, training, and technical support.

Moore has written that "the world will never run out of work," and this is almost certainly true. But the world can run out of the kind of work that develops human capability — the formative friction, the productive struggle, the patient accumulation of understanding through difficulty. If that kind of work disappears, the human capacity to direct AI wisely will erode, and the erosion will be invisible until it is catastrophic, because the performance metrics — speed, output, efficiency — will continue to improve even as the judgment that directs those metrics degrades.

This is the laggard's deepest warning, and Moore's framework has not yet fully absorbed it. The lifecycle moves forward. The segments adopt in sequence. The tornado comes. Main Street follows. But the quality of what remains after the tornado — the depth of human capability, the integrity of professional judgment, the capacity for the kind of thinking that only friction produces — depends on whether the whole product includes the laggard's wisdom or merely overrides it.

The healthiest technology ecosystems, Moore's career suggests, are the ones that learn from every segment of the adoption curve — not just the innovators who find the frontier and the pragmatists who define the market, but the laggards who remember what the frontier left behind. Their concerns are not obstruction. They are data. And in the age of AI, it is the most important data the industry is not collecting.

Chapter 9: The Nation as Market — Geopolitics of the Adoption Lifecycle

Markets are not abstractions. They are populations — groups of human beings organized by shared institutions, shared norms, shared infrastructure, and shared assumptions about what constitutes acceptable risk. Geoffrey Moore built his framework by studying how technology products move through corporate markets, but the framework's deepest insight — that adoption proceeds through psychographically distinct segments, each requiring a different strategy — applies at every scale of human organization. Including the largest one.

Nations are markets. They sit at different positions on the AI adoption lifecycle, they contain different proportions of innovators and pragmatists and laggards, and the strategies they pursue are subject to the same lifecycle dynamics that govern corporate technology adoption. The nation that builds the best whole product for its pragmatist citizens — the education systems, the retraining infrastructure, the regulatory frameworks, the institutional trust — will lead the next century. Not because it will have the most powerful AI models. Because it will have the most capable population directing those models toward human ends.

Moore has been characteristically precise about this. In his Forbes interview, he identified the greatest near-term AI risk not as technological but as political: "The greatest AI risks for the foreseeable future come from empowering malicious actors, not from the AI per se somehow getting out of control. This will be the latest leg in an arms race that can never end, and we will have to find ways to adapt to adversarial innovation." The observation reframes the geopolitical AI competition from a technology race to an institutional race. The question is not which nation develops the most capable model. It is which nation builds the institutional infrastructure — the whole product, at national scale — that allows its population to use AI wisely, resist AI-enabled manipulation, and direct AI capability toward genuine human flourishing.

The United States, in Moore's lifecycle terms, is deep in the early adopter phase. Massive visionary investment, world-leading research, the most dynamic startup ecosystem on earth, and a cultural disposition toward experimentation that has served it well through every previous technology cycle. American AI companies — Anthropic, OpenAI, Google DeepMind — are producing the most capable frontier models. American developers are adopting AI coding tools at rates that dwarf every other national population. American venture capital is flowing into AI at a pace that makes the dot-com bubble look measured.

But early adopter leadership is not the same as mainstream readiness, and this is where Moore's framework issues its sharpest warning. The United States has invested enormously in the generic product — the models, the chips, the infrastructure. It has invested almost nothing in the whole product for its pragmatist population. The retraining infrastructure for workers whose skills are being commoditized is rudimentary. The educational system has not adapted — curricula designed for a pre-AI world are being taught by teachers who have received no training in AI integration, to students who are using the tools anyway, without guidance, without framework, without the attentional ecology that would allow them to develop the judgment AI demands. The regulatory framework is fragmented: a patchwork of executive orders, state-level initiatives, and sector-specific guidelines that does not constitute a coherent national strategy.

Moore's bowling alley analysis suggests that the United States risks a specific failure mode: winning the visionary phase and losing the pragmatist phase. The models are the best in the world. The whole product for the American citizen — the training, the institutional support, the regulatory clarity, the educational framework — is among the weakest in the developed world relative to the technology's capability. The gap between what American AI can do and what the average American is prepared to do with it is wider than in any comparable nation.

The European Union presents the inverse profile. The EU AI Act, adopted in 2024, is the most comprehensive AI regulatory framework in the world. It classifies AI systems by risk level, mandates transparency and accountability, and establishes institutional mechanisms for oversight. In Moore's terms, the EU is building whole product components — the regulatory infrastructure, the institutional trust frameworks, the consumer protection mechanisms — that the pragmatist population requires before adopting at scale.

But the EU risks what Moore would call over-optimization of the productivity zone at the expense of the incubation zone. The regulatory framework, designed to protect citizens from AI harms, may also protect them from AI benefits by raising the compliance cost of deployment beyond what startups and smaller institutions can absorb. The whole product requires both the technology and the surrounding infrastructure. The EU is building the infrastructure without ensuring it has the technology to put inside it. European AI research is world-class; European AI commercialization is not. The regulatory framework may inadvertently widen the chasm by making the whole product more expensive to assemble, which means the pragmatist's adoption timeline extends, which means the economic benefits of AI flow to jurisdictions with lower regulatory friction, which means the EU's citizens receive less of the value the regulations were designed to protect.

China represents a third model: the attempt to force-march through the chasm via state direction. The Chinese government has invested massively in AI research, designated AI as a strategic priority, and directed state resources toward rapid deployment across the economy. In Moore's framework, this is an attempt to skip the bowling alley entirely — to move directly from early adoption to the tornado through institutional mandate rather than organic market dynamics.

Moore's framework predicts that this approach will produce adoption breadth but not adoption depth. State-directed deployment can force tools into institutional settings — government agencies, state-owned enterprises, educational institutions. But it cannot force the pragmatist psychology that produces genuine integration. Pragmatists adopt because they are convinced, not because they are instructed. State-mandated AI deployment produces compliance, not capability. The tools are used because they must be, not because the users have developed the judgment to use them well.

The deeper problem with state-directed adoption is that it suppresses the feedback mechanisms that Moore's lifecycle depends on. In a healthy adoption cycle, pragmatist hesitation is informative — it reveals whole product gaps, identity challenges, institutional barriers that must be addressed before the technology can be productively deployed. In a state-directed cycle, hesitation is not informative. It is impermissible. The whole product gaps go unidentified because the users who would identify them have no mechanism — and no incentive — to report them. The result is a superficially impressive adoption curve that conceals fragility: a population using AI tools without the institutional support, the professional identity adaptation, or the attentional ecology frameworks that sustainable adoption requires.

Smaller nations present perhaps the most instructive cases. Singapore, with its combination of state capacity and market orientation, is building national AI strategies that include both generic product development and whole product infrastructure — education reform, workforce retraining, regulatory frameworks tailored to specific sectors. Estonia, which built the world's most advanced digital government infrastructure over two decades, has institutional trust and digital literacy that give its pragmatist population a shorter distance to cross. Israel, with its concentration of technical talent and entrepreneurial culture, is producing AI innovation at a rate disproportionate to its population, though its whole product infrastructure lags its generic product capability.

Moore's framework suggests that the nations that lead the AI era will not be the ones that produce the most powerful models. They will be the ones that build the most complete whole products for their citizens. This means investing not just in research and development but in the institutional infrastructure that translates capability into adoption: education systems that teach judgment, not just execution. Retraining programs that address the identity chasm, not just the skill gap. Regulatory frameworks that protect citizens without preventing them from accessing the technology's benefits. And cultural narratives — the whole narrative described in the identity chasm discussion — that allow a population to embrace AI as an enhancement of human capability rather than a replacement of it.

Moore's observation from his November 2024 blog post takes on geopolitical weight in this context: "Higher education, social services, healthcare, and law enforcement are all staggering under increasingly untenable demands... All of them are target-rich environments for applying AI." Every nation faces these demands. The public sector in every developed nation is straining under demographic pressure, fiscal constraint, and rising citizen expectations. The nation that successfully deploys AI in its public sector — that builds the whole product for healthcare, education, public safety — will not just improve service delivery. It will demonstrate, at national scale, the pragmatist reference that other nations need to see before they commit.

The geopolitical AI race, read through Moore's framework, is not a race to build the best model. It is a race to build the best whole product at national scale. The model is the generic product. The nation's institutional infrastructure — its education system, its regulatory framework, its retraining programs, its cultural narratives about human value in the age of AI — is the whole product. And the nations that build the best whole products will not just lead the AI era. They will define it, because their citizens will be the ones whose experience of AI becomes the reference for the rest of the world's pragmatist populations.

The chasm at national scale is wider than at corporate scale, because the whole product components are more complex, more politically contested, and more deeply embedded in institutional culture. But the mechanism of crossing is the same: understand what the pragmatist citizen needs, build it, and demonstrate the results. The nation that does this first does not just cross the chasm. It sets the standard for how the chasm is crossed everywhere else.

Moore's framework, designed for Silicon Valley product launches, turns out to be a theory of civilizational readiness. The technology is the easy part. It has always been the easy part. The hard part is building everything around the technology that makes it worthy of the humans it is meant to serve. And at national scale, the stakes of getting that wrong are not a failed product launch. They are a generation of citizens unprepared for the world they are inheriting.

Chapter 10: After the Tornado — What Remains When the Dust Settles

Every tornado ends.

This is the fact that participants in a technology hypergrowth cycle cannot feel from the inside and that observers cannot believe from the outside. The tornado phase — Moore's term for the period of explosive adoption where demand exceeds supply, market share is seized rather than earned, and the winning strategy is pure distribution velocity — is so intense, so all-consuming, and so economically rewarding that it warps the perception of everyone inside it. The tornado feels permanent. The growth feels structural. The idea that the acceleration will slow, that the market will saturate, that the technology will commoditize, seems not just unlikely but impossible while the wind is howling.

But it ends. It always ends. And what remains after the tornado — the landscape it carved, the institutions it built or destroyed, the human capabilities it enhanced or eroded — is the only thing that matters in the long view.

Moore's framework maps the post-tornado landscape with characteristic precision. After the tornado comes Main Street — the phase where the technology is mature, widely deployed, and no longer a source of competitive advantage. On Main Street, everyone has the technology. The early movers have no lead. The late adopters have no disadvantage. The capability that was scarce during the tornado is abundant. The capability that was abundant — the human judgment required to direct the technology — has become scarce.

The trajectory of AI, projected through Moore's complete lifecycle, points toward a Main Street that will look radically different from any previous one. When AI reaches commodity status — when the capability to generate code, draft documents, compose analyses, produce designs, and engage in sophisticated conversation is available to everyone at near-zero marginal cost — the entire architecture of competitive advantage in knowledge work will have been inverted.

For the past half-century, competitive advantage in knowledge industries has been built on execution: the ability to do things that are difficult to do. Write complex software. Produce sophisticated financial models. Draft legal arguments that withstand scrutiny. Design products that delight users. Each of these capabilities was scarce because each required years of training, significant investment, and a level of cognitive discipline that limited the pool of practitioners. The scarcity of execution was the foundation on which professional careers, organizational hierarchies, and entire industries were built.

On AI's Main Street, execution is abundant. Everyone can produce competent code, competent briefs, competent analyses, competent designs — because the tool that produces them is universally available. The scarcity migrates from execution to direction: the capacity to decide what should be executed, for whom, to what standard, and toward what end. This is not a metaphorical migration. It is an economic restructuring as fundamental as the one that occurred when the factory replaced the workshop.

Moore addressed this restructuring through his concept of core versus context, first developed in Living on the Fault Line. Core is the work that creates differentiation — the capability that customers choose you for. Context is everything else — the work that qualifies you to compete but does not distinguish you from competitors. Moore's strategic prescription has never wavered: invest ruthlessly in core, minimize investment in context, and — critically — reassess what counts as core every time the competitive landscape shifts.

AI has triggered the largest reassessment of core versus context in the history of knowledge work. For a software company, writing code was core when writing code was hard. When AI writes code, writing code becomes context. Core migrates to product vision, architectural judgment, user insight — the capacities that determine what gets built. For a law firm, legal research was core when research required domain expertise and patience. When AI performs legal research, research becomes context. Core migrates to legal strategy, client counsel, the judgment that connects legal knowledge to human situations. For a hospital, diagnostic imaging interpretation was core when interpretation required twelve years of training. When AI interprets images, interpretation becomes context. Core migrates to treatment planning, patient communication, the clinical judgment that connects diagnosis to care.

In every domain, the same pattern: AI commoditizes execution. Core migrates to judgment. And judgment — the capacity to evaluate, to choose wisely among possibilities, to weigh competing priorities in the absence of complete information, to take responsibility for decisions whose consequences are uncertain — becomes the scarcest and most valuable human capability.

Moore's "trapped value" framework, which he developed to identify where AI should be deployed, gains a second meaning on Main Street. The trapped value is no longer in inefficient processes. It is in underdeveloped human judgment — the judgment that was buried under layers of execution work and never had the bandwidth to develop. The developer who spent eighty percent of her time debugging did not have the cognitive space to develop product vision. The lawyer who spent seventy percent of his time on research did not have the bandwidth to develop client strategy. The teacher who spent sixty percent of her day on administrative tasks did not have the capacity to develop individualized pedagogical relationships with each student.

On Main Street, the execution burden is lifted. The bandwidth is available. But bandwidth without capacity is not productive. The developer who has been freed from debugging must actually possess the product judgment that the freedom makes room for. The lawyer freed from research must actually possess the strategic capability that the freedom enables. The freedom creates the opportunity. It does not create the capability.

This is where the laggard's wisdom, discussed in the previous chapter, becomes strategically critical. The laggard's concern — that the friction of execution was formative, that the struggle of doing the hard thing was building the judgment to direct the hard thing — is not just philosophically interesting. On Main Street, it is operationally decisive. The organizations that preserved formative friction during the tornado — that built whole products including structured human development, not just AI deployment — will arrive on Main Street with judgment. The organizations that optimized purely for efficiency during the tornado will arrive on Main Street with capability and no direction.

Moore himself gestured toward this in his observation that AI is "an advisory technology" that "is not eliminating the need for human beings to make judgment calls" but "accelerating the preparation for so doing." On Main Street, the advisory function is universally available. Every organization has access to the same quality of AI advice. The differentiator is the quality of the human judgment that evaluates, contextualizes, and acts on that advice. And that quality is a function of development — of years of experience, mentorship, failure, reflection, and the kind of learning that cannot be compressed into a training program or extracted from a model.

The post-tornado landscape also reveals the importance of the national whole product analysis. Nations that invested in institutional infrastructure during the tornado — education reform, workforce development, regulatory frameworks, cultural narratives about human value — will arrive on Main Street with populations prepared to exercise judgment. Nations that invested only in model development and deployment will arrive with powerful tools and populations unprepared to use them wisely. The national whole product is not a policy luxury. It is the infrastructure of civilizational competence.

Moore's lifecycle, traced to its conclusion, ends where The Orange Pill begins: with the question of what humans are for. When execution is abundant and direction is scarce, human value resides in the capacities that machines do not possess — not because machines are technically incapable, but because direction requires stakes, and stakes require mortality, vulnerability, and the specific quality of caring that arises from being a creature whose time is finite and whose choices are irreversible.

The tornado is forming. For developer tools, it has already arrived. For enterprise knowledge work, it is approaching. For healthcare, education, and the public sector, it is still on the horizon. But Moore's framework provides the map: the tornado will come, and it will pass, and what remains will be determined not by who had the most powerful AI during the storm but by who built the institutional infrastructure — the whole product, the dams, the attentional ecology — that preserved and developed human judgment while the wind was howling.

Moore's career has been a sustained argument that technology adoption is predictable. The lifecycle repeats. The chasm appears. The bowling alley proceeds. The tornado arrives. Main Street follows. The patterns hold because human psychology holds — because the innovator's excitement, the pragmatist's caution, and the laggard's resistance are features of human cognition, not features of any particular technology.

AI does not break the pattern. It accelerates it. And acceleration makes the pattern more important, not less, because the consequences of misreading your lifecycle position — of running tornado tactics in the bowling alley, or bowling alley tactics in the tornado, or ignoring Main Street entirely because the tornado feels permanent — compound faster when the cycle moves faster.

The dust will settle. The question is what will be standing when it does. Moore's answer has not changed in thirty years: the organizations, the institutions, the nations that built the whole product. Not the technology. The whole product. The complete set of capabilities — technological, institutional, educational, cultural, human — required to fulfill not just the customer's reason to buy but the citizen's capacity to flourish.

That is what remains after the tornado. Not the wind. The structures that were built to endure it.

Epilogue

The board meeting that nobody remembers changed how I think about sequence.

It was 2019, years before the orange pill, and I was presenting a product roadmap to investors who wanted to know why we were not shipping faster. The product was ready — or ready enough, by the standards of the startup world, where shipping beats perfecting and speed is synonymous with ambition. I had a slide showing our feature set, another showing the competitive landscape, and a third showing our growth projections. What I did not have was a slide explaining why the people who were supposed to use this product had not yet decided they needed it.

The investors were polite. One of them said something I have never forgotten: "You're building for people who don't know they're your customers yet."

At the time, I took it as a compliment. We were ahead of the market. Visionary. Building the future before the future arrived. Moore would have taken it as a diagnosis. We were in the bowling alley and behaving as though we were in the tornado. We were shipping breadth when we needed depth. We were talking to everyone when we should have been listening to one specific pragmatist in one specific segment and building exactly what she needed before moving to the next.

That board meeting taught me a lesson I apparently needed to learn again, because when AI crossed the threshold in December 2025, I made the same mistake. I felt the exhilaration and I wanted everyone to feel it. I stood in the room in Trivandrum and said, "By the end of this week, each one of you will be able to do more than all of you together," and the statement was true, but it was a visionary's statement aimed at a pragmatist's audience. Some of my engineers heard possibility. Others heard threat. And the ones who heard threat were not wrong to hear it — they were responding to the absence of the whole product, the whole narrative, the institutional infrastructure that would make the possibility sustainable rather than terrifying.

Moore's framework arrived in my thinking like a corrective lens — not replacing what I could already see but sharpening it, making the blurred edges crisp. I had been looking at AI adoption and seeing a single curve: the exponential sweep of capability from lab to market. Moore taught me to see five curves nested inside that one, each with its own psychology, its own timeline, its own requirements. The innovators had already crossed. The early adopters were crossing. The pragmatists were watching. The laggards were warning. And the gaps between them — the chasms — were not empty space. They were full of the unbuilt whole products, the unwritten narratives, the unaddressed fears that would determine whether this technology reached the people who needed it most.

The concept that rearranged my thinking most was not the chasm itself — I had read Crossing the Chasm years ago, and the basic framework was familiar. What rearranged me was the identity chasm: the recognition that AI adoption asks professionals not just to learn a new tool but to reconceive who they are and what they contribute. That senior engineer in Trivandrum, the one who spent two days oscillating between excitement and terror — his terror was not about the tool. It was about himself. About twenty years of expertise that had just been repriced. About the story he told himself every morning when he sat down at his keyboard, the story that said I am the person who solves hard technical problems, and the sudden, vertiginous realization that the machine solves them too, and faster, and the question that follows: Then what am I?

Moore's answer — that the professional becomes the judgment layer, the person who directs rather than executes, the person whose value is in the questions asked rather than the answers produced — is the right answer. But delivering it requires the whole narrative. It requires honoring what was built before announcing what comes next. It requires the patience of the bowling alley when every instinct screams for the tornado.

I am not a patient person. Building this book has been an exercise in confronting that impatience, chapter by chapter, framework by framework. Moore's lifecycle is fundamentally a theory of patience — of understanding that the sequence matters, that skipping steps does not accelerate adoption but delays it, that the pragmatist's caution is not an obstacle but a compass pointing toward the work that actually needs to be done.

The work that needs to be done — the whole product for AI at every scale, from the classroom to the corporation to the nation — is the most important work of this decade. Not the most glamorous. Not the most fundable. Not the kind that generates breathless posts on social media. The patient, unglamorous work of building the infrastructure that allows a technology of extraordinary power to reach the people standing on the other side of the chasm, waiting for proof that the crossing is safe.

Moore showed me that the proof is built, not proclaimed. Pin by pin. Segment by segment. Reference by reference. Until the pragmatist sees someone like her, in a context like hers, who crossed and flourished — and only then does she take the step.

The sunrise from the roof of the tower is real. But the stairs matter more than the view.

-- Edo Segal

Everyone agrees AI changes everything. Almost no one has built what the next hundred million users actually need to adopt it.

Geoffrey Moore spent thirty years mapping the invisible fracture that kills promising technologies -- the chasm between visionary enthusiasm and pragmatic deployment. His framework reveals what the AI discourse has missed entirely: the gap is not about capability. It is about the whole product -- the training, the trust, the institutional scaffolding, the identity narrative -- that nobody is building while everyone celebrates the demo. This book applies Moore's complete lifecycle to the AI revolution, from beachhead segments already crossing to the healthcare and education sectors where trapped value is greatest and the whole product gap is widest. The result is a strategic map for leaders, policymakers, and parents who need to know not just what AI can do, but when it will reach the people standing on the other side of the gap -- and what must be built to get it there.

“AI is crossing in specific segments, and the segments where it has crossed are not yet sufficient to pull the mainstream behind them.”
— Geoffrey Moore
WIKI COMPANION

Geoffrey Moore — On AI

A reading-companion catalog of the 29 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Geoffrey Moore — On AI uses as stepping stones for thinking through the AI revolution.
