By Edo Segal
The competitor I kept watching was the wrong one.
For months after the December 2025 threshold, I benchmarked obsessively. How fast could my team ship relative to the startup down the street? How many features could we deploy compared to last quarter? How did our Claude Code adoption curve stack up against the industry average? I built dashboards. I tracked metrics. I knew exactly where we stood relative to every company I considered a rival.
And that knowledge was worthless. Not wrong — worthless. Because the market those rivals and I were fighting over was evaporating beneath us while we measured our strokes.
W. Chan Kim saw this pattern decades before AI made it visceral. He and Renée Mauborgne spent twenty years at INSEAD studying what separates companies that achieve extraordinary growth from companies that grind themselves down in incremental competition. Their finding was deceptively simple: the winners stopped competing. They did not fight harder for existing customers in existing markets. They created markets that did not exist yet, served people who were not yet being served, and made the factors everyone else was competing on irrelevant.
They called the old markets red oceans — red with the blood of companies bleeding each other for fractions of market share. They called the new markets blue oceans — uncontested, undefined, open.
I needed this framework the way a drowning person needs the concept of shore. Not because it tells you how to swim. Because it tells you that the water you are swimming in is not the only water there is.
The AI revolution is creating the largest red ocean in economic history. Every software company, every knowledge-work firm, every organization that sells execution is fighting harder with better tools in a market whose underlying value structure has been demolished. The code is commodity. The features are commodity. The speed is commodity. And yet the instinct — my instinct, the instinct of every builder I know — is to compete harder on the factors that just stopped mattering.
Kim's work is the corrective. It asks the question that the technology discourse almost never asks: not "How do we build faster?" but "Who are we not serving? What market does not yet exist? What would happen if we stopped fighting for the customers everyone is fighting for and started looking at the billions of people the industry has never seen?"
That reframe changed how I think about everything my team builds. It should change how you think about what yours builds too.
— Edo Segal · Opus 4.6
W. Chan Kim (1952–) is a South Korean-born strategist and the Boston Consulting Group Bruce D. Henderson Chair Professor of Strategy and International Management at INSEAD in Fontainebleau, France. Together with his long-time collaborator Renée Mauborgne, Kim developed blue ocean strategy, a framework for creating uncontested market space rather than competing within existing industries. Their landmark book *Blue Ocean Strategy: How to Create Uncontested Market Space and Make the Competition Irrelevant* (2005) has sold over four million copies, been translated into nearly fifty languages, and is recognized as one of the most influential business books of the twenty-first century. Kim and Mauborgne's subsequent works include *Blue Ocean Shift* (2017) and *Beyond Disruption: Innovate and Achieve Growth Without Displacing Industries, Companies, or Jobs* (2023). Their research program, grounded in empirical analysis of over 150 strategic moves across 30-plus industries, introduced widely adopted concepts including value innovation, the strategy canvas, the four actions framework, noncustomer analysis, fair process, and tipping point leadership. Kim has been consistently ranked among the world's top management thinkers by Thinkers50, receiving its Strategy Award, and his work has shaped corporate strategy, public policy, and entrepreneurial practice globally.
In the winter of 2025, the global software industry employed roughly twenty-eight million people across enterprise platforms, cloud infrastructure, developer tools, and the vast archipelago of SaaS applications that had colonized every corner of organizational life. These twenty-eight million people worked inside companies that, in aggregate, commanded more than three trillion dollars of market capitalization. They built products that competed with other products for the attention and budget of the same customers, in markets whose boundaries had been drawn decades earlier and redrawn only incrementally since.
They were swimming in blood, and most of them did not know it.
The terminology belongs to W. Chan Kim and Renée Mauborgne, who spent two decades at INSEAD studying more than one hundred and fifty strategic moves across more than thirty industries, searching for the pattern that separated extraordinary growth from the grind of incremental competition. What they found was not a new competitive technique. It was the observation that the most successful companies had stopped competing altogether.
Kim and Mauborgne divided the economic universe into two kinds of market space. Red oceans are existing industries with established competitors, defined boundaries, and known rules of engagement. Companies in red oceans fight over a fixed pool of demand. They benchmark against rivals. They try to outperform on the factors the industry has already agreed matter — price, features, speed, quality — and as the space fills with competitors, the margins compress, the differentiation blurs, and the water turns red with the effort of organizations bleeding each other for fractions of market share. Blue oceans, by contrast, are market spaces that do not yet exist. Industries that have not been defined. Demand that has not been created. Competition that is, by definition, irrelevant, because there is no one to compete with in a market you invented.
The distinction sounds like metaphor. It is not. It is the most consequential empirical finding in thirty years of strategy research, supported by a dataset of strategic moves spanning more than a century. Kim and Mauborgne's analysis of one hundred and eight new business launches showed that while eighty-six percent were incremental extensions into existing markets — red ocean moves — the remaining fourteen percent, the blue ocean moves, generated sixty-one percent of total profits. The arithmetic is brutal in its clarity: the vast majority of strategic effort is directed at the market spaces least likely to produce extraordinary returns. The minority of effort directed at creating new market spaces generates the majority of the value.
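The per-move arithmetic behind that claim can be made explicit. A minimal sketch of the calculation, using the launch split and profit share quoted above, and assuming the red ocean moves earned the complementary thirty-nine percent of profits:

```python
# Per-launch profitability of red ocean vs. blue ocean moves, from
# Kim and Mauborgne's study of 108 business launches. The 39% red
# ocean profit share is inferred as the complement of the 61% blue
# ocean share stated in the text.

red_share, blue_share = 0.86, 0.14    # fraction of launches
red_profit, blue_profit = 0.39, 0.61  # fraction of total profits

# Profit share captured per unit of launch share:
red_yield = red_profit / red_share    # ~0.45: each red launch underperforms
blue_yield = blue_profit / blue_share # ~4.36: each blue launch overdelivers

print(f"blue ocean moves are ~{blue_yield / red_yield:.1f}x "
      f"more profitable per launch")  # ~9.6x
```

On these assumptions, the average blue ocean launch was roughly an order of magnitude more profitable than the average red ocean launch, which is what "the minority of effort generates the majority of the value" means in numbers.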
This finding has been validated across industries, geographies, and decades. Cirque du Soleil did not build a better circus. It created a new form of entertainment that borrowed elements from circus and theater while eliminating the elements — animal acts, star performers, multiple show arenas — that had defined the circus industry and driven its costs. Southwest Airlines did not compete with American Airlines on the factors American Airlines had defined: hub-and-spoke routing, assigned seating, first-class cabins, in-flight meals. It eliminated those factors and created new ones — point-to-point routes, rapid turnarounds, a culture of informality — that attracted a different customer at a different price point. Yellow Tail wine did not compete with established wineries on complexity, aging potential, or vineyard prestige. It made a wine for people who did not drink wine, by eliminating the factors that intimidated noncustomers and creating new ones — simplicity, approachability, fun — that expanded the total market.
In every case, the blue ocean creator did not win inside the existing market. The creator made the existing market's competitive factors irrelevant.
Now consider the software industry that existed on December 1, 2025. The factors on which companies competed were well-established and had been stable for decades. Team size: how many engineers could you deploy against a problem. Development speed: how quickly could you ship features. Feature breadth: how many capabilities could your product offer. Integration depth: how tightly could your platform connect with the rest of the enterprise stack. Compliance certifications: how many regulatory frameworks could you satisfy. Customer support: how quickly could you respond when something broke. Every enterprise SaaS company in the market competed on some combination of these factors. The competition was intense. The differentiation was marginal. The customer acquisition costs were climbing. The churn rates were rising. The margins were compressing.
It was a red ocean. One of the largest and most consequential red oceans in economic history, employing millions of people and commanding trillions of dollars, and it had been red for so long that the people swimming in it had forgotten what another color of water looked like.
The enterprise SaaS market had followed the classic red ocean trajectory Kim and Mauborgne documented across dozens of other industries. Early entrants, companies like Salesforce, Workday, and ServiceNow, had created genuine value by moving enterprise software from on-premise installation to cloud delivery. That move was, in Kim's terms, a blue ocean creation: it redrew the boundaries of the software industry by eliminating the need for on-site servers, reducing the time and cost of deployment, raising the accessibility of enterprise tools, and creating a new category — software as a service — that had not previously existed. The early SaaS pioneers were not competing with the existing enterprise software companies on the existing factors. They were making those factors irrelevant.
But blue oceans attract imitators. Success signals opportunity, and opportunity draws competition, and competition commoditizes the very innovation that created the blue ocean in the first place. Within a decade, the SaaS model itself had become a red ocean. Hundreds of companies offered cloud-based enterprise tools. The factors of competition — uptime, feature count, integration breadth, price per seat — were identical across competitors. The water was thick with rivals, and the margins were thinning.
By the time the AI threshold arrived in late 2025, the enterprise SaaS industry had reached the terminal stage of red ocean competition: the stage at which companies compete primarily on the basis of switching costs rather than genuine value creation. Customers stayed not because the product was superior but because leaving was expensive. The data was locked in. The integrations were built. The workflows were embedded. The institutional muscle memory of thousands of employees trained on the platform constituted a barrier to exit that had nothing to do with the quality of the software and everything to do with the friction of change.
This is what a red ocean looks like when viewed from the strategy canvas Kim and Mauborgne developed as their primary diagnostic tool. The strategy canvas plots the factors of competition along a horizontal axis and the offering level of each competitor along a vertical axis. When the strategy canvas of an industry shows all major competitors clustered at similar offering levels across similar factors, the industry is a red ocean. The visual signature is unmistakable: a set of nearly identical curves, varying only at the margins, converging toward the same value profile. That convergence is the market's way of announcing that the space for differentiation within the existing boundaries has been exhausted.
Kim and Mauborgne's framework identifies a specific danger in this convergence, one that is directly relevant to the moment Segal describes in The Orange Pill. When all competitors converge on the same factors, the industry develops a collective blind spot. The factors that everyone competes on become the factors that everyone assumes matter. The industry's shared assumptions about what buyers value become self-reinforcing, because every company's market research confirms what every other company already believes, because they are all asking the same questions of the same customers about the same factors. The strategy canvas becomes a consensus document rather than a diagnostic instrument. It describes not what buyers actually need but what the industry has agreed to provide.
The collective blind spot of the SaaS industry was this: the companies assumed that the value of software resided in the code. In the features. In the technical sophistication of the implementation. This assumption was so deep, so structural, so universally shared that it was invisible — the way water is invisible to the fish swimming in it. Every competitive move in the industry was a move within this assumption. Build more features. Ship faster. Integrate more deeply. Hire more engineers. The code was the product. The product was the code.
Then the machines learned to write code. And the assumption that had organized a three-trillion-dollar industry was revealed for what it always was: a contingent fact about the cost of execution, not a permanent truth about the nature of value.
Segal describes this moment with the precision of someone who lived through it. In The Orange Pill, he traces the trajectory from the first week of December 2025, when a Google principal engineer described a problem to Claude Code in plain English and received a working prototype of her team's year-long project in an hour, through the subsequent months in which the productivity multipliers became undeniable: twenty-fold in his own team's experience, measurable and repeatable across the industry. He documents the stock market's belated recognition: a trillion dollars of value evaporating from software companies in the first weeks of 2026, the event the market called the SaaS Death Cross.
Kim's framework provides the diagnostic structure for understanding what actually happened. The Death Cross was not a panic. It was not an overreaction. It was the market recognizing, with the brutal efficiency that markets sometimes achieve, that the strategy canvas of the software industry had been redrawn. The factors that had defined competition for decades — team size, development speed, feature count — had been commoditized in months. The companies whose value was entirely in the code, the thin applications that solved singular problems through technical execution, were exposed. Their competitive advantage had been a function of scarcity: code was hard to write, so the ability to write it was valuable. When that scarcity vanished, so did the advantage.
This is the signature of red ocean collapse: the moment when the factors that everyone competed on are simultaneously devalued by an external force that the industry's collective blind spot prevented it from seeing. Kim and Mauborgne's research across industries shows that this moment arrives not gradually but as a phase transition. The factors do not lose value incrementally. They lose it all at once, because the new technology does not merely improve on the old factors. It makes them irrelevant. The power loom did not make hand-weaving slightly less efficient. It made the speed of hand-weaving irrelevant as a factor of competition. The automobile did not make the horse slightly slower. It made the speed of the horse irrelevant as a factor of transportation. AI did not make human code-writing slightly more expensive. It made the cost of code-writing irrelevant as a factor of competitive advantage.
But Kim's framework insists on a distinction that the market's panic obscured. The companies that lost the most value in the Death Cross were not the companies with the deepest ecosystems. They were the companies whose value was most concentrated in the factors that AI commoditized. The distinction matters enormously, because it reveals what the red ocean was actually red with. The blood was not the blood of all software companies. It was the blood of the companies that had been competing on execution — on the speed and cost and technical sophistication of writing code — without building the ecosystem layer that AI could not replicate.
Salesforce lost twenty-five percent of its value. But Salesforce also possessed twenty years of enterprise data, hundreds of thousands of integrated workflows, compliance certifications across dozens of regulatory frameworks, and institutional relationships with the largest organizations on earth. Those assets were not code. They were the accumulated result of decisions, relationships, and trust built over decades. They were, in Kim's vocabulary, the factors that AI could not commoditize — the factors that would define the new strategy canvas.
The red ocean, then, was not the software industry as a whole. It was the subset of the software industry whose value proposition was reducible to code. The companies that had built something above the code layer — ecosystems, data assets, institutional trust, workflow intelligence — were not in a red ocean at all. They were in a transitional space, uncomfortable and uncertain, but not mortally threatened. Their value had migrated upward, from the execution layer to the judgment layer, from the code to the question of what the code should do and for whom.
Kim and Mauborgne's research contains a finding that illuminates this migration with particular force. In their study of one hundred and eight business launches, they found that the companies that succeeded in creating blue oceans were not, typically, the ones with superior technology. They were the ones that saw the technology as a means rather than an end — a tool for delivering a value proposition that the existing market could not match, not a value proposition in itself.
In September 2025, Kim, Mauborgne, and their collaborator Mi Ji published an article in the Harvard Business Review that applied this finding directly to the AI moment. The title was blunt: "Make Sure Your AI Strategy Actually Creates Value." The argument cut against the grain of every breathless AI announcement that season. Companies that lead with AI, that treat the technology as the strategy rather than as a delivery mechanism for strategy, put the cart before the horse and risk compromising their value proposition to the marketplace. The companies that start with strategy — that identify how they can offer buyers a leap in value, and then look to technology as a tool to deliver that leap — are the ones for whom AI becomes a genuine catalyst for growth.
The distinction is load-bearing. It separates the companies that will drown in the red ocean from the ones that will swim out of it. The red ocean is not AI. The red ocean is the strategic posture that treats AI as a competitive weapon in an existing market rather than as a means of creating a new one. The companies that deployed AI to write code faster were competing in the red ocean — using a new tool to improve on the same factors the industry had always competed on. The companies that used AI to ask what code should be written, for whom, and why, were creating blue oceans. The tool was the same. The strategic posture determined the outcome.
Every industry in the knowledge economy faces this choice in 2026. The legal profession competes on billable hours, research depth, and the technical precision of document drafting — all factors that AI commoditizes. The consulting industry competes on analytical rigor, data synthesis, and the persuasiveness of slide decks — all factors that AI can replicate at a fraction of the cost. The accounting profession, the design profession, the journalism profession — each operates in a red ocean defined by factors of competition that are being devalued by the same force that devalued code.
The question Kim's framework poses to every participant in every knowledge-work red ocean is not whether the water is turning red. The evidence is decisive. The question is whether the participant can see the blue ocean that the same technology is simultaneously creating — the uncontested market space where the factors of competition are not yet defined, where the demand has not yet been created, and where the old competitive advantages are not assets to be defended but anchors to be released.
---
The central concept of Kim and Mauborgne's research program is value innovation, and it is the concept most consistently misunderstood by the practitioners who attempt to apply it. The misunderstanding goes like this: value innovation means finding a way to be different. It does not. Differentiation alone is a red ocean move — it accepts the existing boundaries of the market and tries to stand out within them. Nor does value innovation mean finding a way to be cheap. Cost leadership alone is another red ocean move — it accepts the existing value proposition and tries to deliver it for less. Value innovation is the simultaneous pursuit of both: a leap in value for buyers that is achieved at a lower cost to the company. The simultaneity is not optional. It is the mechanism. It is what opens new market space, because the new value curve — different from every existing competitor's and cheaper to deliver — creates demand that did not previously exist.
The history of value innovation is the history of people who refused the choice that conventional strategy theory insisted they make. Michael Porter's competitive strategy framework, the dominant paradigm before Kim and Mauborgne, held that firms must choose between differentiation and low cost. You can be special or you can be cheap. You cannot be both, because the activities required for differentiation — premium materials, specialized labor, customized service — inevitably raise costs, while the activities required for cost leadership — standardization, scale, efficiency — inevitably reduce differentiation. The trade-off was presented as structural, baked into the economics of production itself.
Kim and Mauborgne's empirical finding was that the trade-off is not structural. It is conventional. It reflects the accumulated assumptions of an industry about which factors matter and how those factors are related, not the underlying economics of value creation. When a company challenges those assumptions — when it asks not "How do we compete on the existing factors?" but "Which factors should we eliminate, reduce, raise, or create?" — it can often find a value curve that breaks the trade-off entirely. Cirque du Soleil eliminated animal acts and star performers, which reduced costs dramatically, while creating artistic theater-quality production values, which increased differentiation dramatically. The cost savings from elimination funded the investment in creation. The trade-off dissolved because the factors had been redrawn.
Segal's imagination-to-artifact ratio — the distance between a human idea and its realization — describes the same trade-off operating at the level of individual capability. For the entire history of technology, building something required either deep technical expertise or the capital to hire people who possessed it. The person with the idea but without the expertise faced a choice: invest years in acquiring the expertise (which was expensive in time and foreclosed other uses of that time) or hire experts to implement the idea (which was expensive in money and introduced the translation costs, the signal loss, the friction that Segal documents across The Orange Pill). The trade-off between capability and cost — between what you could build and what it cost to build it — constrained every act of creation.
AI collapsed this trade-off. Not incrementally. Structurally. The custom application built by a non-programmer using Claude Code is simultaneously more differentiated and less expensive than any comparable product in the pre-AI market. More differentiated because it is tailored to a specific need so particular that no commercial software company would have found it economically viable to address. Less expensive because it was produced without a development team, without months of implementation, without institutional overhead, without the entire apparatus of cost that the software industry had built around the act of writing code. The imagination-to-artifact ratio did not narrow. It approached zero. And when it did, it executed the most radical act of value innovation the software industry has ever experienced.
To understand why this matters strategically, rather than merely technologically, Kim's framework requires examining three historical instances of value innovation and observing the structural pattern that connects them to the AI moment.
The first is Cirque du Soleil, Kim and Mauborgne's signature case study. The circus industry in the 1980s was a red ocean. Ringling Bros. and Barnum & Bailey and its competitors fought over a shrinking audience with rising costs. Star performers demanded higher fees. Animal acts attracted protests. Multiple rings required enormous venues. The industry's strategy canvas showed all competitors clustered around the same factors: star clowns, animal shows, aisle concession sales, multiple show arenas, fun and humor, thrills and danger. Every circus offered roughly the same mix at roughly the same level. Cirque du Soleil redrew the canvas. It eliminated animal shows, star performers, and aisle concessions. It reduced fun and humor. It created a refined viewing environment, artistic music and dance, a theme and storyline, and a unique venue for each production. The result was a product that cost less than a traditional circus to produce — no animals, no star salaries, no multiple arenas — while commanding prices several times higher, because the experience it offered was categorically different from anything the circus industry had provided.
The parallel to the AI moment is structural, not cosmetic. The software industry in 2025 competed on factors analogous to the circus industry's star performers and animal acts: team size, development speed, feature count, technical sophistication. These factors were expensive. They required armies of engineers, months of development cycles, and massive infrastructure investments. AI eliminated the cost of these factors the way Cirque du Soleil eliminated the cost of animal acts — not by doing them cheaper but by making them unnecessary. When code can be generated through conversation, the cost of a development team is not reduced. It is removed from the equation entirely. The resources that were consumed by implementation — the majority of every software company's budget — become available for investment in the factors that AI cannot replicate: creative direction, domain expertise, taste, the judgment about what should be built.
The second historical instance is Southwest Airlines, which broke the trade-off between convenience and cost in the airline industry. In the 1970s, the major airlines competed on a set of factors — hub-and-spoke routing, multiple cabin classes, assigned seating, lounges, in-flight meals, baggage connections — that collectively defined what air travel was supposed to look like. These factors were expensive. They required complex logistics, large workforces, and premium infrastructure. Southwest eliminated most of them. No assigned seats. No meals. No lounges. No hub-and-spoke routing. What remained was radically simple: point-to-point flights, rapid turnarounds, and prices low enough to compete not with other airlines but with driving. Southwest was not a cheap airline. It was a different kind of transportation, one that made the automobile — not American Airlines — its reference competitor.
The analogy to AI is in the reference competitor. The non-programmer who builds a custom tool using Claude Code is not competing with Salesforce. She is competing with the spreadsheet she was using before, or the manual process she was suffering through, or the complete absence of any tool at all. Her reference competitor is not a software company. It is the status quo of doing without. This is the signature of blue ocean creation: the product is positioned not against existing competitors but against a non-consumption baseline that reveals a vast population of unserved need.
The third instance is Yellow Tail wine, which broke the trade-off between quality and accessibility in the American wine market. The wine industry competed on prestige, complexity, aging vocabulary, vineyard reputation — factors that impressed connoisseurs and intimidated everyone else. Yellow Tail eliminated the complexity, the aging language, and the prestige signaling. It created a wine that was easy to choose, easy to drink, and fun to buy. The result was the fastest-growing wine brand in American history, built not by winning wine enthusiasts away from existing brands but by converting beer and cocktail drinkers into wine drinkers. The total market expanded, not because the existing pie was redistributed but because new demand was created from people who had previously considered wine inaccessible.
Kim and Mauborgne's research shows that these three cases share a structure that generalizes across industries and eras. In every instance, the blue ocean creator did four things simultaneously: eliminated factors the industry took for granted, reduced factors below the industry standard, raised factors above the industry standard, and created factors the industry had never offered. This is the four actions framework, and its application to AI will occupy a full chapter later in this analysis. The critical observation here is that the framework's power does not reside in any single action. It resides in the simultaneity. Eliminating and reducing lower costs. Raising and creating increase buyer value. When all four happen at once, the result is value innovation: a leap in value at lower cost that opens new market space.
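The four actions can be sketched as a transformation on a value curve. The factors below follow the Cirque du Soleil canvas described above, but every numeric score is invented for illustration:

```python
# Four actions framework applied to a value curve: eliminate and
# reduce cut cost; raise and create lift buyer value. Scores (0-10)
# are illustrative, not from Kim and Mauborgne's data.

industry_curve = {
    "price": 4, "animal shows": 8, "star performers": 9,
    "aisle concessions": 7, "fun and humor": 8, "thrills and danger": 7,
}

actions = {
    "eliminate": ["animal shows", "star performers", "aisle concessions"],
    "reduce":    {"fun and humor": 4, "thrills and danger": 4},
    "raise":     {"price": 8},  # premium pricing for a premium experience
    "create":    {"theme and storyline": 9, "artistic music and dance": 9,
                  "refined environment": 8, "unique venue": 8},
}

def apply_four_actions(curve, actions):
    """Return the redrawn value curve after all four actions at once."""
    new = {f: v for f, v in curve.items() if f not in actions["eliminate"]}
    new.update(actions["reduce"])
    new.update(actions["raise"])
    new.update(actions["create"])
    return new

blue_curve = apply_four_actions(industry_curve, actions)
print(sorted(blue_curve))  # the redrawn set of competitive factors
```

The structural point the sketch makes concrete: the output curve shares almost no factors with the input curve at comparable levels, which is why the incumbent's benchmarking machinery has nothing to measure it against.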
AI achieves this simultaneity at a scale and speed that no previous technology could approach. Consider a specific case that illustrates the mechanism. Segal describes in The Orange Pill how his engineer in Trivandrum, a backend specialist who had never written a line of frontend code, built a complete user-facing feature in two days using Claude Code. The factors that were eliminated: the need for frontend expertise, the handoff between backend and frontend teams, the translation loss at every interface between specialists. The factors that were reduced: the time from conception to working prototype, the cost of implementation, the number of people required to ship a feature. The factors that were raised: the speed at which the engineer could iterate on the user experience, the alignment between her understanding of the system's logic and the interface that expressed it. The factors that were created: the capacity for a single person to operate across the full stack, producing a feature that was simultaneously more coherent — because one mind held the whole picture — and less expensive — because one person replaced a team.
This is value innovation in its purest form, applied not to a product in a market but to the act of creation itself. The trade-off between breadth and cost has been broken. The engineer did not become a generalist by investing years in cross-training. She became one by conversing with a tool that held the expertise she lacked. The cost of that capability was one hundred dollars a month.
Kim and Mauborgne, writing in the Harvard Business Review in September 2025, identified the critical error that companies make when confronted with a technology of this magnitude. The error is leading with the technology rather than leading with the strategy. Companies that deploy AI to accelerate their existing competitive moves — to write code faster, to generate features more cheaply, to reduce headcount on the current roadmap — are pursuing cost reduction within the existing strategy canvas. They are competing in the red ocean with a more efficient stroke. The water is still red. The factors of competition are unchanged. The margins may improve temporarily, but the structural position is unaltered: the company is still fighting for share of existing demand in a market whose boundaries remain where they were.
The alternative, which Kim's framework prescribes and Segal's experience demonstrates, is to use AI as the engine of value innovation — to ask not "How can we do what we already do more cheaply?" but "What can we now offer that was previously impossible, and who would value it?" The answer to that question is, by definition, a blue ocean: a market space that did not exist before the technology made it possible, populated by buyers whose needs were not being served because the cost of serving them was prohibitive.
The question of what was previously impossible is not rhetorical. It has concrete answers. Before AI, a small business owner in rural Indonesia could not build a custom inventory management system tailored to the specific rhythms of her supply chain. The cost of custom development was tens of thousands of dollars. The available off-the-shelf products were designed for different businesses in different countries with different needs. She made do with paper ledgers and periodic counting. After AI, she can describe her business in her own language and receive a working system in hours. The system is more differentiated than any commercial product — it reflects her specific operations, her specific constraints, her specific intelligence about her own business — and it costs her a subscription fee.
That is value innovation. Not a marginal improvement on an existing product. A leap in value at lower cost that creates new demand from a buyer who was previously a noncustomer. The market she occupies did not exist before. There is no competitor in it. The water is blue.
The mechanism is the collapsing ratio. When the distance between imagination and artifact approaches zero, the number of value propositions that can be created approaches infinity. Each one is a potential blue ocean. Each one represents demand that did not exist before, because the cost of creating the supply was prohibitive. The technology did not create the need. It removed the barrier that prevented the need from being served.
Kim and Mauborgne's DeepSeek case study, published through INSEAD in 2025, provides a complementary illustration at the industry level. DeepSeek, the Chinese AI company, broke the value-cost trade-off in large language model development itself. While OpenAI, Anthropic, and Google competed in what Kim's framework would identify as a red ocean — an arms race of scale, spending, and computational resources — DeepSeek delivered comparable AI performance at a fraction of the cost, under significant resource constraints. The case describes the company as having identified and opened a blue ocean opportunity space by simultaneously pursuing low cost and high performance, the very move that conventional wisdom in the AI industry held to be impossible. The incumbents assumed that frontier AI required frontier spending. DeepSeek challenged the assumption and broke the trade-off.
The DeepSeek case demonstrates that value innovation operates at every layer of the AI economy — from the infrastructure layer (where DeepSeek broke the cost-performance trade-off) to the application layer (where individual builders create custom solutions at near-zero cost) to the organizational layer (where companies that use AI strategically rather than reactively create new market spaces that competitors locked in red ocean thinking cannot see). At every layer, the mechanism is the same: the simultaneous pursuit of higher value and lower cost, achieved not by working harder within the existing boundaries but by redrawing the boundaries themselves.
The implication for any organization confronting the AI transition is this: the question is not whether to adopt AI. That question has been answered by the market with the finality of the adoption curve Segal documents — ChatGPT reaching one hundred million users in two months, Claude Code crossing $2.5 billion in annual run-rate revenue by February 2026. The question is whether the adoption will be a red ocean move or a blue ocean move. Whether AI will be used to compete more fiercely on the factors the industry already competes on, or to create new factors that make the old competition irrelevant.
The companies that answer this question correctly will not merely survive the transition. They will define the markets that emerge from it. The companies that answer it incorrectly will discover what the SaaS industry discovered in early 2026: that competing harder in a red ocean, even with a powerful new tool, only accelerates the bleeding.
---
The strategy canvas is Kim and Mauborgne's primary diagnostic tool, and its power lies in what it makes visible. A strategy canvas plots the factors on which an industry competes along a horizontal axis and the level at which each competitor offers those factors along a vertical axis. When every competitor in an industry converges on the same factors at the same levels, the canvas reveals a red ocean. When a company diverges — eliminating some factors, reducing others, raising a few above the industry standard, and creating new ones the industry has never offered — the canvas reveals a value curve that defines a blue ocean.
The tool is deceptively simple. Drawing a strategy canvas for any industry requires answering two questions that most participants in that industry have never explicitly asked. First: what are the factors on which we actually compete? Not the factors the marketing department highlights. Not the factors the CEO discusses in quarterly earnings calls. The actual factors that determine whether a buyer chooses this product over that one. Second: at what level do we and our competitors offer each factor? The honest answer, in most industries, is: at approximately the same level. The convergence is the diagnosis.
The software industry's strategy canvas before the AI threshold was stable enough to describe with confidence, because the factors had been consistent for more than two decades. The horizontal axis plotted: technical team size, development velocity, feature breadth, integration depth, compliance and security certifications, customer support responsiveness, and platform reliability. These were the factors. Every enterprise SaaS company competed on some weighted combination of them. A Salesforce or SAP emphasized integration depth and compliance certifications. A startup emphasized development velocity and feature innovation. But the factors themselves were shared. The industry had agreed, implicitly and without deliberation, on what mattered.
The vertical axis revealed the convergence. The major platforms — Salesforce, Workday, SAP, ServiceNow, Adobe, Autodesk — offered strikingly similar levels across most factors. The differences that sales teams emphasized in competitive presentations were, on the strategy canvas, barely distinguishable. A slightly higher score on integration depth here, a slightly faster development cycle there. The curves tracked each other with the fidelity of competitors who had been benchmarking against each other for so long that their offerings had converged toward an industry consensus about what an enterprise software product should look like.
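The convergence the canvas reveals can be made concrete in a few lines. The sketch below is illustrative only: the factor scores are invented, not published benchmark data, and the `spread` diagnostic is my own shorthand for what Kim and Mauborgne show graphically. The point is that convergent value curves collapse the per-factor spread toward zero.

```python
# A minimal strategy-canvas sketch. Factor scores (1-5) are hypothetical
# illustrations, not published benchmark data.
FACTORS = [
    "team size", "velocity", "feature breadth", "integration depth",
    "compliance", "support", "reliability",
]

canvas = {
    "Incumbent A": [5, 3, 5, 5, 5, 4, 5],
    "Incumbent B": [5, 3, 4, 5, 5, 4, 5],
    "AI-native entrant": [1, 5, 3, 2, 2, 3, 4],
}

def spread(curves):
    """Mean per-factor range across competitors.
    Near zero means convergent value curves: a red ocean."""
    per_factor = [max(col) - min(col) for col in zip(*curves.values())]
    return sum(per_factor) / len(per_factor)

# The two incumbents alone: curves that track each other almost exactly.
incumbents = {k: v for k, v in canvas.items() if k.startswith("Incumbent")}
```

Computed on the two incumbents alone, the spread is near zero, which is the convergence this passage describes; adding a divergent value curve pulls it sharply upward, which is what a blue ocean move looks like on the canvas.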
Kim and Mauborgne observed this pattern in industry after industry. They called it the "strategy convergence" problem, and they identified it as the defining symptom of red ocean competition. When every company's value curve looks like every other company's, the industry has exhausted the differentiation available within its current boundaries. The only moves left are incremental: a marginal improvement on one factor at increased cost on another. The trade-off reasserts itself. Growth stalls. Margins compress. The companies that survive do so through scale, switching costs, and the institutional inertia of customers who find it more painful to leave than to stay.
Then AI arrived, and the canvas tore.
The first factor to collapse was technical team size. For two decades, the size of a company's engineering organization had been a proxy for its capability. Large teams shipped more features, maintained more code, addressed more edge cases, and responded to more customer requests. The assumption was so deep that headcount was used interchangeably with capacity in budget discussions. When a product roadmap required more capability, the response was: hire more engineers.
Claude Code made a single engineer capable of output that had previously required a team. Segal's account of the Trivandrum training — a twenty-fold productivity multiplier at one hundred dollars per person per month — was not an anomaly. It was the leading edge of a structural shift. The factor of "team size" did not decline in importance. It became, in the precise sense that Kim and Mauborgne use the term, strategically irrelevant. A factor is strategically irrelevant when it no longer differentiates competitors in a way that influences buyer decisions. Team size had been a differentiator because it correlated with output. When that correlation broke — when a small team using AI could match or exceed the output of a large team without it — the factor ceased to carry strategic weight.
Development velocity followed. The cycle time from conception to working prototype had been a competitive dimension: companies that shipped faster won customers faster. AI compressed this cycle from months to days for a significant class of applications. When Segal's team built Napster Station in thirty days — a product that conventional timelines would have placed at six to twelve months — the velocity factor did not just improve. It jumped to a level that made the previous range of competition irrelevant. The gap between the fastest pre-AI company and the slowest was measured in weeks. The gap between an AI-enabled team and a pre-AI team was measured in orders of magnitude. The factor had been disrupted past the point at which marginal improvements within the old range carried strategic meaning.
Feature breadth suffered the same fate. When AI tools can generate features through conversation, the total number of features a product offers is no longer a meaningful differentiator. The scarcity that made features valuable — the engineering hours required to build each one — had been the real constraint. Remove the constraint, and the scarcity evaporates. Every competitor can offer every feature. Feature breadth converges not at a competitive level but at a commodity level, the way bandwidth converged to commodity in the telecommunications industry after fiber optic deployment.
What remained — the factors that did not collapse — are the factors that define the new strategy canvas. And they are categorically different from the factors they replace.
The first new factor is creative direction: the capacity to envision what should be built. When code is cheap, the bottleneck is not implementation but imagination. The company that can identify unmet needs, envision products that address them, and make the thousand small decisions about scope, tone, and experience that determine whether a product delights or disappoints — that company possesses a capability AI cannot replicate. Creative direction was always important, but it was buried under the weight of implementation. A brilliant product vision was only as valuable as the team's ability to execute it, and execution consumed eighty percent of the resources. When execution becomes cheap, creative direction becomes the dominant factor. It moves from background to foreground on the strategy canvas.
The second new factor is domain expertise: deep knowledge of the specific context in which the product will be used. A general-purpose AI can write code for a hospital management system. It cannot know which workflows a night-shift nurse actually follows, which data a surgeon needs at the moment of incision, which compliance requirements vary by state, or which features the procurement committee will reject because they conflict with existing infrastructure. That knowledge lives in the people who inhabit the domain — the nurses, the surgeons, the administrators, the patients. Domain expertise, which was always valuable but often subordinate to technical skill in the old hierarchy, becomes a primary factor of competition. The person who understands the problem deeply is now worth more than the person who can build the solution technically, because the technical capability has been democratized.
The third new factor is taste: the aesthetic and experiential judgment that determines whether a product feels right. Taste is not quantifiable. It cannot be measured on a performance benchmark. It is the quality that separates a product users tolerate from a product users love, and it resides in the accumulated sensitivity of people who have spent years paying attention to how humans interact with designed objects. When AI can produce any interface, any feature, any workflow, the differentiator is not what the product does but how it feels. Taste is the factor that explains why some AI-generated outputs are adopted immediately and others sit unused despite being technically correct.
The fourth new factor is ecosystem depth: the accumulated relationships, data assets, workflow integrations, and institutional trust that surround a platform. This is the factor that protected certain SaaS companies during the Death Cross and will continue to protect them as the canvas is redrawn. Ecosystem depth cannot be generated through conversation with an AI. It is the deposit of years of deployment, thousands of customer relationships, millions of data points, and the institutional memory of organizations that have built their operations around a specific platform. It is, in Kim's vocabulary, a barrier to imitation — the factor that prevents competitors from replicating a blue ocean even after they can see it.
The new strategy canvas, then, plots creative direction, domain expertise, taste, and ecosystem depth along the horizontal axis. The companies that score highest on these factors will define the next era of the software industry. The companies that continue to compete on the old factors — team size, development velocity, feature breadth — are competing on a canvas that no longer exists.
Kim and Mauborgne's NVIDIA case study, published through INSEAD, illustrates the dynamics of the transitional period in which both canvases coexist. The case describes NVIDIA's extraordinary rise to become one of the most valuable companies in the world, driven by the AI revolution's insatiable demand for computational infrastructure. NVIDIA dominated the old canvas factor of processing power — the technical capability that made AI training possible. But the case then asks whether NVIDIA can sustain its blue ocean as competitors — Google designing custom TPU chips, Microsoft developing its own AI silicon, Amazon investing in its Trainium and Inferentia chips — enter the same space. The question is whether NVIDIA's advantage is in the hardware factor (which competitors can replicate) or in the ecosystem factor (the CUDA software platform, the developer community, the institutional relationships) that hardware alone cannot replicate.
The answer, which the case leaves to its students, is the same answer the entire software industry faces: the factors that will sustain competitive advantage are the factors above the execution layer. The hardware, like the code, is becoming commodity. The ecosystem — the accumulated intelligence about how the hardware is used, by whom, for what purposes — is the durable advantage.
Segal's account of the Napster Station build provides a case study in operating on the new canvas from the builder's perspective rather than the analyst's. The product that his team shipped in thirty days was not primarily an achievement of coding speed. It was an achievement of creative direction: the vision of an AI-powered concierge kiosk, the decisions about how it should interact with users, the judgment about which capabilities mattered and which could wait. It was an achievement of domain expertise: the understanding of high-volume customer environments, the knowledge of how people behave at trade shows, the sensitivity to the difference between a technology demo and a product that serves a genuine need. And it was an achievement of taste: the choices about industrial design, audio routing, conversational flow, and the thousand small details that determined whether Station felt like a piece of technology or a piece of hospitality.
The coding was the easy part. That is the signal the new strategy canvas sends. The coding was the factor that had consumed the majority of resources in the old canvas and commanded the majority of competitive attention. On the new canvas, it is the baseline — the table stakes, the factor that every competitor can match, the dimension on which differentiation is no longer possible.
The organizations that read the new canvas correctly will restructure accordingly. They will invest in creative direction, hiring people who can envision products that do not yet exist. They will invest in domain expertise, building relationships with the communities their products serve. They will invest in taste, cultivating the aesthetic and experiential judgment that makes the difference between functional and beloved. And they will invest in ecosystem depth, understanding that the data, relationships, and institutional trust they accumulate today are the strategic assets that will sustain their position tomorrow.
The organizations that misread the canvas will continue to hire engineers, compete on feature count, and benchmark their development velocity against competitors who are benchmarking against them. They will swim in a red ocean that no longer exists, competing on factors that no longer differentiate, bleeding resources into dimensions of competition that the market has already abandoned.
The strategy canvas is a tool. Its value is diagnostic, not prescriptive. It tells an organization where it stands but not where it should go. The direction — the specific combination of factors to eliminate, reduce, raise, and create — is the strategic decision that must be made by human beings with the judgment, the domain knowledge, and the taste to see what the canvas reveals.
That is the work that Kim's framework prescribes. It is also the work that AI, for all its extraordinary capability, cannot do. Drawing the canvas requires understanding what buyers value. Redrawing it requires the courage to abandon the factors that everyone else still competes on and invest in the factors that no one has yet defined. That courage is human. The canvas merely makes it legible.
---
Kim and Mauborgne's four actions framework is the operational grammar of blue ocean strategy. Where the strategy canvas is diagnostic — revealing where an industry competes and where value curves converge — the four actions framework is prescriptive. It tells the strategist what to do. It asks four questions, and the answers, taken together, define the value curve of a blue ocean offering.
What factors that the industry takes for granted should be eliminated?
What factors should be reduced well below the industry standard?
What factors should be raised well above the industry standard?
What factors should be created that the industry has never offered?
The framework's power resides in the simultaneity of the four actions. Elimination and reduction lower costs. Raising and creating increase buyer value. When all four happen at once, the result is value innovation — a leap in value at lower cost — and value innovation is the mechanism that opens new market space. The framework is not a menu from which the strategist selects one or two actions. It is a system in which all four actions reinforce each other.
Kim and Mauborgne's own application of the framework to historical blue ocean creators is instructive. When they analyzed Cirque du Soleil, the four actions were: Eliminate animal shows, star performers, aisle concession sales, and the multiple show arenas of the traditional circus. Reduce fun and humor, and thrills and danger. Raise the artistic quality of the production. Create a refined environment, original music, artistic themes, and a storyline. The eliminations and reductions lowered costs dramatically — no animal maintenance, no star salaries, no multiple rings. The raises and creations justified a price point several times higher than a traditional circus. The combination produced a new entertainment category that was both less expensive to deliver and more valuable to the buyer.
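For readers who think in structures, the Cirque example can be written out as an ERRC grid, with a small check encoding the section's claim that value innovation requires both sides of the grid at once. The function name and the cost/value tagging are my own sketch, not Kim and Mauborgne's notation.

```python
# The four actions framework (ERRC) as a plain data structure, populated
# with the Cirque du Soleil example from the text.
errc = {
    "eliminate": ["animal shows", "star performers", "aisle concession sales"],
    "reduce": ["fun and humor", "thrills and danger"],
    "raise": ["artistic quality of the production"],
    "create": ["refined environment", "original music",
               "artistic themes", "storyline"],
}

def is_value_innovation(grid):
    """Eliminate/reduce must lower cost AND raise/create must lift buyer
    value simultaneously; one side alone is ordinary repositioning."""
    lowers_cost = bool(grid["eliminate"]) or bool(grid["reduce"])
    lifts_value = bool(grid["raise"]) or bool(grid["create"])
    return lowers_cost and lifts_value
```

A grid that only raises and creates, with nothing eliminated or reduced, fails the check: that is differentiation at higher cost, the classic trade-off rather than a break from it.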
Applying the four actions framework to the AI moment with the same analytic rigor reveals a restructuring of unprecedented scope, one that operates not at the level of a single industry but across the entire knowledge economy.
Eliminate: the requirement that builders must be programmers.
This is the most consequential elimination in the history of the software industry, and its implications extend far beyond software. For fifty years, building digital products required mastering a specialized language — a programming language — that most human beings did not speak. The requirement was not a feature of the work. It was an artifact of the interface: the machine could not understand human language, so humans were required to learn the machine's. Every programming language ever designed, from FORTRAN to Python, was a compression format for human intention — a way of squeezing what the builder wanted into a structure the machine could parse.
The compression was lossy. Ideas that were rich and contextual in the builder's mind arrived at the machine stripped of nuance, flattened into syntax, reduced to the subset of intention that the language could express. The gap between what the builder imagined and what the code expressed was the translation cost, and it was paid on every project, by every builder, at every stage of development.
AI eliminated the translation. Not reduced it. Eliminated it. The machine learned to meet the human in the human's language, with all its ambiguity, implication, and contextual richness. Segal describes the moment he felt it — working late, describing a problem to Claude in plain English, and receiving not a literal translation but an interpretation, a reading of his intention that was informed by everything he had said before. The interface had crossed a qualitative threshold. The requirement that the builder speak the machine's language was gone.
The elimination of this requirement does not merely make existing builders faster. It expands the population of who can build. The forty-seven million professional developers worldwide are joined by the hundreds of millions of domain experts, teachers, small business owners, designers, administrators, and individuals whose ideas have been locked behind the translation barrier for decades. Each of these people possesses knowledge that is valuable — knowledge of their domain, their users, their specific needs — and AI has eliminated the factor that prevented them from acting on it. In Kim's terms, the elimination converts noncustomers into customers — or more precisely, into builders. The market does not just grow. It transforms in kind.
The cost reduction from this elimination is not marginal. The average cost of custom software development for a small business in the United States in 2024, according to industry surveys, ranged from fifty thousand to five hundred thousand dollars, depending on complexity. The cost of a Claude Code subscription that enables a non-programmer to build a comparable application through conversation is one hundred to two hundred dollars per month. The elimination of the programming requirement represents a cost reduction of two to three orders of magnitude. That is not an efficiency gain. It is a structural transformation of the cost curve that makes an entirely new category of products economically viable.
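The "two to three orders of magnitude" claim can be checked on the back of an envelope, using only the figures quoted above and comparing, as the text does, a one-off project cost against a monthly subscription:

```python
import math

# Figures quoted in the text. Comparing a one-off project cost to a
# monthly subscription is rough, but it is the comparison being made.
custom_low, custom_high = 50_000, 500_000   # one-off custom development, USD
sub_low, sub_high = 100, 200                # subscription, USD per month

ratio_low = custom_low / sub_high           # most conservative pairing
ratio_high = custom_high / sub_low          # most aggressive pairing

orders_low = math.log10(ratio_low)          # roughly 2.4 orders of magnitude
orders_high = math.log10(ratio_high)        # roughly 3.7 orders of magnitude
```

Even the most conservative pairing lands above two orders of magnitude, which is why the paragraph calls this a structural transformation of the cost curve rather than an efficiency gain.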
Reduce: the time and institutional overhead of implementation.
Even for professional developers, the pre-AI software development cycle was laden with overhead that had little to do with the intellectual challenge of the work. Dependency management. Configuration files. Boilerplate code. Documentation. Testing infrastructure. Code review processes. Deployment pipelines. Each of these activities consumed time, and the aggregate consumption was staggering. Studies of developer productivity consistently showed that professional software engineers spent between thirty and fifty percent of their time on what they themselves classified as non-creative work: the mechanical connective tissue between the components they actually cared about.
AI reduces this overhead to a fraction of its former weight. Segal's engineer in Trivandrum described spending roughly four hours a day on what she called "plumbing" — the mechanical work of connecting systems. Claude Code handled the plumbing in minutes. The reduction freed approximately half of her working day for the higher-order work of design, architecture, and judgment.
But Kim's framework insists on understanding what, precisely, is being reduced and what costs the reduction may carry. This is where Segal's engagement with Byung-Chul Han provides a necessary complication to the strategic analysis. Segal describes the engineer discovering, months after the reduction, that her architectural intuition — her sense of how systems fit together — had diminished. Buried within the four hours of daily plumbing were approximately ten minutes of encounters with unexpected system behaviors that forced her to understand connections she had not previously grasped. Those ten minutes were formative. They built the embodied knowledge that no documentation could teach. When Claude took over the plumbing, she lost both the tedium and the ten minutes. The tedium she was glad to lose. The ten minutes she did not know she had lost until the absence became visible in the quality of her decisions.
Kim's framework does not address this kind of loss directly, because the framework operates at the level of buyer value and cost structure, not at the level of individual cognitive development. But the loss is real, and it has strategic implications. If the reduction of implementation overhead also reduces the formation of the judgment that the new strategy canvas values most highly, then the short-term efficiency gain may come at the cost of long-term strategic capability. The organization that reduces all implementation friction for its engineers may produce faster output this quarter and weaker judgment next year. The trade-off is not between speed and slowness. It is between present output and future capability. And it is precisely the kind of trade-off that Kim's framework is designed to make visible, because the framework asks not just "What should we reduce?" but "What should we reduce below the industry standard?" — implying that some level of the factor should be preserved.
The strategic application is not to eliminate implementation friction entirely but to reduce it to the level that preserves the formative encounters while removing the mechanical drudgery. This is a design problem, not a deployment problem. It requires understanding which forms of friction build capability and which merely consume time — a distinction that most organizations have not yet made, because most organizations have not yet recognized that the distinction exists.
Raise: the premium on judgment, creative direction, and domain expertise.
The elimination and reduction actions lower costs. The raising action increases buyer value. And the factor that must be raised above the industry standard in the AI era is judgment — the capacity to decide what should be built, for whom, and why.
Judgment was always valuable. But its value was suppressed by the weight of execution. In the old model, the person with the best judgment was not necessarily the person who commanded the highest premium. The person who could implement, who could write the code, who could ship the feature, who could translate the vision into a working artifact — that person commanded the premium, because implementation was the scarce resource. Judgment was abundant relative to the capacity to act on it. There were always more ideas than there was bandwidth to build them.
AI inverts the scarcity. Implementation is now abundant. Judgment becomes the constraint. The person who can see what the market does not yet see, who can identify the unmet need before it becomes obvious, who can make the call that the data cannot fully justify — that person's premium rises not incrementally but structurally, because the value of judgment scales with the capability it directs. When judgment directed a team of five engineers, its leverage was five. When judgment directs an AI-augmented capability equivalent to a hundred engineers, its leverage is a hundred. The premium on judgment tracks the leverage it commands, and AI has increased that leverage by an order of magnitude.
Kim and Mauborgne's 2025 Harvard Business Review article makes this point with the directness that characterizes their best work. Companies that start with AI — that deploy the technology first and ask what it is for afterward — are pursuing a cost reduction strategy within the existing canvas. Companies that start with strategy — that identify how they can offer buyers a leap in value, and then use AI as the delivery mechanism — are raising the judgment factor above the industry standard. The difference is not tactical. It is the difference between red ocean and blue ocean, between competing harder on the old factors and creating new factors that make the old competition irrelevant.
The practical implication for organizations is that the hiring profile changes. The most valuable hire in the AI era is not the most technically skilled candidate. It is the candidate with the deepest domain knowledge, the most developed aesthetic judgment, the strongest capacity for creative direction — combined with enough technical fluency to direct AI tools effectively. This is a new profile. It did not exist five years ago. It combines the sensibilities of a creative director, the knowledge of a domain expert, and the operational fluency of a technologist, and it is the profile that the new strategy canvas values most highly.
Create: entirely new categories of products and services that could not exist before.
The creation action is where blue ocean strategy achieves its full scope, because creation is the action that expands the total market rather than redistributing the existing one. Elimination, reduction, and raising all operate within the universe of existing buyer needs and existing competitive factors. Creation operates outside that universe. It asks: what can we offer that no one has offered before, because no one could?
The AI moment creates new categories of products at three levels of scale.
At the individual level, personal software — applications built by individuals for their own use, addressing needs so specific that no commercial product could profitably serve them. The marketing manager who builds a custom analytics dashboard tailored to her specific KPIs, data sources, and decision-making rhythms. The teacher who creates a personalized learning application for a student with a specific combination of strengths and challenges. The small business owner who builds an inventory management system that reflects the idiosyncratic supply chain of her specific business in her specific geography. Each of these products serves a market of one. Each is a blue ocean in miniature — an uncontested space where competition is irrelevant because the product is uniquely tailored to a unique need.
At the organizational level, AI creates new categories of products that serve needs no existing product addresses. Segal's Napster Station is an example: an AI-powered concierge kiosk that did not exist as a product category before the capabilities that AI provided made it possible. The product was not a faster version of an existing kiosk. It was a new kind of interaction — a conversational, intelligent, context-sensitive service that combined capabilities from multiple domains (conversational AI, music generation, computer vision, audio engineering) in a configuration that no pre-AI product architecture could have supported. The creation was not incremental. It was categorical.
At the market level, AI creates new categories of economic activity by converting noncustomers into participants. Kim and Mauborgne's research on noncustomers — the people who do not currently participate in a market — identifies three tiers of noncustomers: those who are on the edge of the market and about to leave, those who consciously reject the market's offerings, and those who have never considered the market relevant to their lives. AI reaches all three tiers simultaneously. The first tier: the developer who was considering leaving the profession because the tedious aspects of coding had overwhelmed the creative aspects, now reinvigorated by AI's elimination of the tedium. The second tier: the domain expert who consciously rejected the software industry's offerings because none of them fit her specific needs, now building her own. The third tier: the hundreds of millions of people worldwide who never considered software creation as something relevant to their lives, now discovering through a single conversation with an AI tool that the thing they have been imagining can be made real.
Each tier represents new demand. The demand does not come from existing markets. It does not cannibalize existing products. It is genuinely new — created by the elimination of the cost barrier that previously prevented these people from participating. Kim and Mauborgne's research shows that this kind of demand creation — the conversion of noncustomers into active participants — generates more value and more durable growth than any amount of competitive market-share capture.
The four actions, taken together, define the value curve of the AI moment. Eliminate the programming requirement. Reduce implementation overhead. Raise the premium on judgment. Create entirely new categories of products and services. The curve is categorically different from the curve of the pre-AI software industry. It does not compete on the old factors. It renders them irrelevant. It defines a new market space — a blue ocean — in which the factors that matter are the factors that AI cannot replicate: the human capacity to envision, to judge, to care about the specific needs of specific people.
But Kim's framework carries a warning embedded in the creation action that is easily overlooked in the excitement of possibility. New categories of products are created. That is the opportunity. New categories of products also fail. That is the risk. The history of blue ocean creation is littered with products that were genuinely new, that addressed genuinely unserved needs, and that failed because the creators did not understand the difference between a capability and a product. A capability is something that can be done. A product is something that someone will pay for, return to, depend on, recommend to others. The gap between them is the gap between a weekend prototype and a sustainable business, and AI, by making the prototype trivially easy to build, may obscure the difficulty of crossing that gap.
Kim and Mauborgne's warning in their 2025 article echoes here. Starting with AI rather than starting with strategy risks producing capabilities rather than products — impressive demonstrations of what the technology can do, rather than sustainable offerings that deliver genuine value to genuine buyers. The four actions framework is the discipline that prevents this drift: by forcing the strategist to ask what to eliminate, reduce, raise, and create, it keeps the analysis grounded in buyer value rather than technological capability. The technology enables. The framework directs. And the direction must come from human judgment about what is worth building — judgment that remains, in 2026, the scarcest and most valuable resource in the knowledge economy.
The most counterintuitive principle in Kim and Mauborgne's research program is also the most consequential: the greatest growth opportunities lie not with existing customers but with noncustomers — the people who do not currently participate in the market.
This principle cuts against every instinct of conventional business strategy. Conventional strategy tells organizations to listen to their customers, to survey their customers, to build what their customers ask for, to measure satisfaction among their customers. Kim and Mauborgne's research reveals the limitation of this approach with empirical force: existing customers, by definition, are already being served. They are already inside the market. The incremental value of serving them better is real but bounded. The unbounded value — the value that creates new market space and drives extraordinary growth — lies with the people who are not being served at all.
Kim and Mauborgne identify three tiers of noncustomers, arranged by their distance from the existing market. The first tier consists of people who sit on the edge of the market. They are minimally served. They use the industry's products because no alternative exists, but they would leave if one appeared. They are, in the researchers' language, "soon-to-be noncustomers" — participants whose loyalty is circumstantial rather than genuine.
The second tier consists of people who have consciously evaluated the market's offerings and refused them. They know the products exist. They have considered purchasing. They have decided that the value proposition does not justify the cost — whether that cost is measured in money, time, complexity, or the frustration of using a product designed for someone else's needs. These are "refusing noncustomers," and their refusal is a signal, not of indifference but of unmet need.
The third tier is the largest and most invisible. It consists of people who have never considered the market relevant to their lives. They are not refusing the industry's products. They are not sitting on the edge. They have simply never imagined that the industry has anything to offer them. These are "unexplored noncustomers," and they represent the vast majority of the potential demand in most industries — the demand that no existing competitor has touched, because no existing competitor has looked beyond the boundaries of the existing market to see it.
The software industry's noncustomer population, viewed through this framework, is staggering in its scale. The world's forty-seven million professional developers represent the existing market — the people who currently participate in software creation. The people who do not are everyone else. Seven billion human beings who have never written a line of code. Several hundred million of them possess domain expertise, creative vision, or specific operational knowledge that would, if it could be translated into software, produce products of genuine value. They are noncustomers not because they lack the desire to build but because they lack the means. The cost barrier — years of training, thousands of dollars in development resources, access to institutional infrastructure — has kept them outside the market as effectively as a wall.
AI demolished the wall. Not gradually. The wall came down in months, and the people standing behind it could suddenly see the territory that had been invisible to them for decades.
Consider the three tiers as they manifest in the AI moment.
The first tier — the soon-to-be noncustomers, the people on the edge of the software creation market — includes the professional developers who were considering leaving the profession. Segal describes this population in The Orange Pill: senior engineers watching their specialized knowledge commoditize, questioning whether the years of investment in mastering a particular language or framework still carried the value it once did. Some were moving to the woods, as Segal put it, lowering their cost of living in anticipation of a livelihood that seemed about to evaporate. These were people who participated in the market but whose attachment to it was fraying.
AI did not push them out. For many, it pulled them back in. The engineer whose daily work had been consumed by mechanical plumbing — dependency management, configuration files, the tedious connective tissue between the systems she actually cared about — discovered that AI handled the plumbing and freed her to do the work that had drawn her to engineering in the first place. The creative work. The architectural work. The judgment work. The first-tier noncustomer was retained not by a better version of the old product but by a transformation of the work itself. The drudgery that had been pushing her toward the exit was eliminated. What remained was the part she had always wanted.
The retention of first-tier noncustomers is valuable but limited. It preserves existing demand. It does not create new demand. The transformation happens at the second and third tiers.
The second tier — the refusing noncustomers — includes every domain expert, business owner, educator, designer, and operational professional who had evaluated the software industry's offerings and found them wanting. Not because the products were bad. Because the products were not theirs. The CRM was designed for a generic sales organization, not for the specific rhythms of a boutique consulting firm in Accra. The learning management system was designed for a generic classroom, not for the specific needs of a literacy program in rural Bihar. The inventory management tool was designed for a generic supply chain, not for the specific logistical challenges of a small manufacturer in Medellín whose suppliers are reachable only by river.
These people knew what they needed. They could describe it in detail. What they could not do was build it, because building required a language they did not speak and a skill set they did not possess. The software industry's response to their needs was, at best, a configurable product that approximated what they wanted — close enough to be useful, far enough from what they actually needed to be a constant source of friction. At worst, the industry simply did not see them. Their needs were too specific, their markets too small, their willingness to pay too constrained by local economics, for any commercial software company to find them worth serving.
AI converted these refusing noncustomers into builders. The conversion is the most radical market creation in the history of the software industry, and its mechanism is simple: the person who can describe what she needs can now build what she needs. The translation barrier that separated domain knowledge from software creation has been eliminated. The literacy program director in Bihar who understands exactly what her students need — which exercises, in which sequence, with which feedback mechanisms, adapted to which local languages and cultural contexts — can now describe that understanding to an AI tool and receive a working application. The application is not a generic product configured to approximate her needs. It is her product, built from her knowledge, serving her students.
The strategic implications are enormous. Each second-tier noncustomer who becomes a builder creates a product that did not previously exist, serving a need that was not previously served, in a market space that no existing competitor occupies. Each conversion is a blue ocean in miniature. And the scale of the second tier — hundreds of millions of domain experts worldwide, each possessing specific knowledge that the software industry has never accessed — suggests a total addressable market that dwarfs the existing software industry.
Kim and Mauborgne's research shows that second-tier noncustomers share a common characteristic: they can articulate why they refused. Their refusal is informed, specific, and diagnostic. It contains, embedded within it, the blueprint for the product that would win them. In the pre-AI era, that blueprint was useless — a description of something that could not be built without the resources the noncustomer did not possess. In the AI era, the blueprint is the product. The description is the code. The refusal transforms into creation.
The third tier is the one that Kim and Mauborgne identify as the most transformative and the least visible. Third-tier noncustomers have never considered the market relevant to their lives. They do not refuse the industry's products. They have not evaluated them. The entire category of "software creation" exists, in their mental model, in the same space as "orbital mechanics" or "cardiac surgery" — something that other people do, requiring expertise so far from their own that the question of participation never arises.
A twelve-year-old in São Paulo who has an idea for a game. A retired teacher in Osaka who has developed, over forty years of classroom experience, a method for teaching fractions that works better than anything in the curriculum. A farmer in Kenya who has noticed a pattern in crop yields that correlates with a combination of weather, soil, and planting timing that no agricultural app has captured. Each of these people possesses knowledge that, if it could be expressed as software, would have genuine value. None of them has ever imagined themselves as a software creator.
AI does not persuade them to enter the market. It does not market to them. It does not lower the price of an existing product until they can afford it. What AI does is far more radical: it eliminates the category distinction between "people who build software" and "people who do not." The twelve-year-old describes her game in Portuguese. The retired teacher describes her method in Japanese. The farmer describes his observation in Swahili. Each receives a working prototype. The category boundary dissolves. They have become builders without having decided to become builders, without having undergone training or acquired credentials or crossed any of the thresholds that previously separated the software creator from everyone else.
Kim and Mauborgne's framework predicts that the conversion of third-tier noncustomers produces the largest market expansion, because the third tier is the largest population and the least served. When Gutenberg made books cheap, the largest market expansion came not from the scholars who bought more books or the merchants who bought their first but from the vast population of literate people who had never imagined owning a book. The printing press did not compete for the manuscript market. It created the reading public — a market so large that the manuscript market became a rounding error within it.
The analogy is structural, not merely illustrative. AI is creating the building public — a population of people who create software not as professionals but as an expression of their needs, their knowledge, their creativity. This population does not compete with professional developers any more than the reading public competed with scribes. It is a different market, operating at a different scale, serving different needs, and generating different kinds of value.
Segal's account of the developer in Lagos captures the second-tier conversion with specificity: a person who had the ideas, the intelligence, the ambition, but lacked the infrastructure — the team, the capital, the institutional support — that the old model required. But the more profound implication of his democratization argument lies at the third tier, which he gestures toward without fully developing. The student in Dhaka, the parent at the kitchen table, the small business owner in a rural economy — these are not people who were waiting for cheaper software. They are people who had never imagined that the thing they understood about their domain could become a tool that served others. The AI did not sell them a product. It revealed a capability they did not know they possessed.
This is the moral center of Kim's framework applied to the AI moment. The strategic argument and the ethical argument converge at the noncustomer. The greatest growth opportunity is also the greatest equity opportunity. The market space that produces the highest returns is also the market space that serves the most underserved populations. Blue ocean strategy does not require a choice between profit and purpose. When the blue ocean is created by converting noncustomers into builders, profit and purpose are the same act.
But the convergence carries a responsibility that Kim's framework acknowledges through its emphasis on fair process and that Segal articulates through his ethic of stewardship. The noncustomer who becomes a builder is also the noncustomer who is most vulnerable to the pathologies that Segal documents: the productive addiction, the task seepage, the erosion of boundaries between work and rest, the smoothness that conceals the loss of depth. The professional developer has institutional structures — teams, schedules, the accumulated norms of a profession — that provide at least partial protection against these pathologies. The new builder has none. She is alone with the tool, and the tool is designed to be maximally responsive, maximally available, maximally capable, and minimally aware of the human cost of its own generosity.
The conversion of noncustomers into builders is the most important market creation of the AI era. It is also the market creation that most urgently requires the dams that Segal describes — the structures that redirect the flow of capability toward sustainable human flourishing rather than the exhausting, exhilarating, ultimately corrosive cycle of production without rest that the Berkeley researchers documented. The blue ocean is real. The water is clean. But the builders who swim in it need structures they do not yet possess, and the organizations that serve them have an obligation to provide those structures, not merely because it is ethical but because the long-term health of the market depends on the long-term health of the people who constitute it.
Kim's noncustomer framework reveals what the technology discourse often obscures: the AI revolution is not primarily a story about machines. It is a story about people who were locked out and are now, suddenly, standing inside the building. What they build there — and whether the conditions support their flourishing or their exhaustion — will determine whether the blue ocean sustains or collapses. The noncustomer has arrived. The question now is whether the market is ready to receive her.
---
Every blue ocean in Kim and Mauborgne's dataset was built slowly. Cirque du Soleil spent years developing its first production. Southwest Airlines iterated for a decade before its model stabilized. Yellow Tail wine required years of market research, product development, and distribution-channel construction before its value curve was legible to the market. The creation of new market space, in the historical record, was a patient act. It required sustained investment, institutional commitment, and the organizational discipline to resist the gravitational pull of red ocean competition long enough for the new market to take shape.
AI has compressed this timeline from years to hours. The implications of that compression are both exhilarating and structurally unstable, and Kim's framework provides the tools to understand why.
Consider the mechanism. In the pre-AI era, creating a new product required traversing a sequence of stages, each of which consumed time and resources: conception, specification, design, implementation, testing, deployment, iteration. Each stage introduced friction — the productive kind that built understanding and the unproductive kind that merely consumed bandwidth. The total cycle time, from the first articulation of an idea to a working product in the hands of users, was measured in months at minimum and years at the standard pace.
The cycle time was not merely a cost. It was a filter. Ideas that could not survive the journey — ideas whose champions lost momentum, whose funding ran out, whose market window closed, whose technical challenges proved intractable — were eliminated. The filter was brutal and indiscriminate. It killed bad ideas and good ideas with equal efficiency. But it also performed a function that is only visible in retrospect: it ensured that the ideas that survived were backed by sustained conviction. The builder who brought a product to market after two years of development had demonstrated, through the act of persisting, that the idea mattered enough to warrant the investment. The persistence itself was a signal of value.
AI eliminated most of the filter. A builder can now move from conception to working prototype in a single conversation. The stages that previously consumed months — specification, design, implementation, testing — collapse into an iterative dialogue between the builder and the AI tool. The dialogue moves at the speed of thought. The friction that separated idea from artifact has been reduced to the friction of articulation itself: the effort of describing what you want clearly enough for the tool to produce it.
The result is an explosion of creation. Blue oceans are being created at a pace that no previous technology could support. Each day, thousands of new applications are built by people who, a year earlier, could not have built them. Each application occupies a market space that did not exist before. Each addresses a need that the existing market did not serve. The proliferation of micro-markets — market spaces so specific that they serve a handful of users or even a single person — is a phenomenon without historical precedent.
Kim and Mauborgne's framework celebrates this kind of market creation. It is the purpose of the framework. New market space is the source of new growth, new value, new demand. The more blue oceans, the better. The more noncustomers converted into participants, the more the total economy expands. The logic is sound, and the evidence that the AI-enabled proliferation of new products is generating genuine value is strong.
But the framework also contains a principle that the speed of AI creation puts under strain: the principle that a blue ocean must be sustained to generate durable value. In Kim and Mauborgne's historical analysis, the blue ocean creators that produced lasting impact were the ones that maintained their new market space long enough for the value curve to become established, for buyer habits to form, for the ecosystem around the offering to develop the institutional depth that discourages imitation. Cirque du Soleil did not create a single show. It created a production model, a training pipeline, a brand, and a touring infrastructure that sustained its blue ocean for decades. Southwest Airlines did not launch a single route. It built an operational system — the rapid turnaround, the point-to-point network, the organizational culture — that sustained its cost advantage across thousands of routes over forty years.
Sustainability requires maintenance. Maintenance requires attention sustained over time. And the AI-enabled builder, operating at the speed of conversation, faces a structural temptation that the pre-AI builder did not: the temptation to create rather than maintain. When building the next thing is as easy as describing it, why invest in sustaining the last thing? The dopamine of creation — the rush of seeing an idea become real in minutes — is powerful. The discipline of maintenance — the unglamorous work of fixing bugs, responding to user feedback, iterating on details, building the institutional infrastructure that sustains a product over time — is, by comparison, tedious. The speed of creation makes the contrast sharper. The gap between the exhilaration of building something new and the grind of maintaining something existing has always existed. AI widens it into a chasm.
Segal's account of his own experience illustrates the dynamic. The thirty-day sprint to build Napster Station was exhilarating. The subsequent months of refinement, deployment, user testing, and operational maintenance were harder — not technically harder, but motivationally harder. The tools that made creation fast did not make maintenance interesting. The organizational discipline required to sustain a product after the initial rush of creation is a form of friction that AI does not eliminate, because it is not a technical friction. It is a human friction: the friction of staying committed to something after the novelty has faded.
Kim and Mauborgne address this dynamic through their concept of "blue ocean traps" — the strategic errors that cause blue ocean creators to fail despite having successfully created new market space. One of the most relevant traps for the AI era is what they call the "technology trap": the error of pursuing technological novelty as an end rather than as a means to buyer value. Companies that fall into the technology trap create products that are impressive demonstrations of capability but fail to solve problems that buyers actually have. The products work. They are technically sophisticated. They are created at astonishing speed. And nobody uses them, because the builder was captivated by what the technology could do rather than disciplined by what the buyer needed.
The speed of AI creation amplifies the technology trap. When building is easy, the discipline of asking whether the thing should be built erodes. The question "Can I build this?" is answered so quickly and so affirmatively that the harder question — "Should I build this? Will someone use it? Does it solve a real problem? Is it better than what already exists?" — gets crowded out by the momentum of production.
Kim and Mauborgne's 2025 Harvard Business Review article addresses this directly: companies that lead with AI rather than with strategy put the cart before the horse. The same principle applies to individuals. The non-programmer who discovers she can build an application in an afternoon faces the same strategic question as the Fortune 500 company deploying AI across its operations: is the creation serving a genuine need, or is it serving the impulse to create?
The question is not intended to dampen enthusiasm. The enthusiasm is warranted. The expansion of who can build, and how quickly, is genuinely transformative. But enthusiasm without discipline produces a landscape of disposable innovation — thousands of products created in hours and abandoned in days, blue oceans that flash into existence and evaporate before they can sustain life. The proliferation of micro-markets is a feature of the AI moment. The sustainability of those micro-markets is an open question.
Historical parallels provide some guidance. The early web produced a similar explosion of creation: millions of websites built by people who, a few years earlier, could not have built them. GeoCities, Angelfire, and their contemporaries hosted the first wave of democratized digital creation. Most of what was created was trivial. Most of it was abandoned. The blue oceans were, in retrospect, puddles — market spaces that existed for days or weeks before their creators moved on.
But the puddles fertilized the soil. The people who built trivial websites in 1997 developed skills, intuitions, and ambitions that produced the companies and platforms of the following decade. The early web's explosion of creation was not valuable because of what it produced. It was valuable because of who it developed. The practice of building — even building trivially, even building disposably — cultivated a population of builders whose later work was substantive.
The AI moment may follow the same pattern. The millions of applications created by first-time builders in 2025 and 2026 may be, individually, disposable. But the population of builders that the experience cultivates — people who now know what it feels like to turn an idea into a working thing, who have developed the rudimentary judgment that comes from seeing their ideas confronted by reality, who have tasted the satisfaction and the frustration of creation — that population is the strategic asset. Not the products. The people.
Kim's framework supports this reading. The blue ocean is not the product. The blue ocean is the market space — the combination of buyer needs, value propositions, and competitive dynamics that defines a new category of economic activity. Products come and go within that space. The space itself persists as long as the needs it addresses persist. The blue ocean of personal software — applications built by individuals for their own use — will persist as long as individuals have specific needs that commercial products do not address. That will be, effectively, forever. Individual products within that space will be created and abandoned at the speed of conversation. The space itself is permanent.
The strategic implication is that the organizations and institutions best positioned for the AI era are not the ones that create the most products. They are the ones that create the conditions for sustained creation — the platforms, the tools, the educational frameworks, the cultural norms that support builders not just in the moment of initial creation but in the harder, longer work of maintenance, iteration, and the gradual development of the judgment that separates a disposable prototype from a durable product.
Segal calls these conditions dams. Kim and Mauborgne would call them blue ocean infrastructure. The terminology differs. The insight converges: the speed of creation is a gift. The discipline of sustaining what is created is the work. And the gap between the two is where the value — and the danger — of the AI moment resides.
---
In the early 1990s, Kim and Mauborgne conducted research that would become one of the most cited findings in organizational behavior, though it is less well known than their blue ocean strategy work. They studied what makes people accept decisions — even decisions that go against their interests — and found that the critical variable was not the outcome of the decision but the process by which it was made.
They called the finding "fair process," and it consists of three principles. The first is engagement: involving affected individuals in the strategic decisions that affect them, by asking for their input and allowing them to refute the merits of one another's ideas. The second is explanation: ensuring that everyone involved and affected understands why final strategic decisions are made as they are. The third is expectation clarity: stating clearly what the new rules of the game are, what is expected of each person, and what the standards of evaluation will be.
When all three principles are present, people accept even painful outcomes — restructuring, role changes, the elimination of work they valued — with commitment rather than resentment. When any principle is absent, people resist. Not because the outcome is wrong but because the process felt arbitrary, opaque, or disrespectful of their participation. The resistance is not irrational. It is the rational response of people who have been denied the basic dignities of explanation and inclusion.
The finding has profound implications for the AI transition, because the AI transition is, for millions of knowledge workers, the most consequential organizational change they have experienced. Roles are being redefined. Skills that commanded premiums are being commoditized. Hierarchies that organized work for decades are being flattened. The question of who does what — the most fundamental organizational question — is being renegotiated in real time, often without the people most affected being consulted about the terms.
The absence of fair process in most organizations' AI adoption is not deliberate malice. It is the structural consequence of speed. The technology is moving faster than institutional decision-making can process. Leaders who are themselves disoriented by the pace of change — who are, in Segal's language, experiencing the vertigo of the orange pill — are making consequential decisions about team structure, tool adoption, and role redefinition without the time or the framework to engage the people those decisions affect.
The result is predictable, and Kim and Mauborgne's research predicts it with precision. When organizations deploy AI without fair process — without engaging workers in the decisions about how AI will change their roles, without explaining the strategic reasoning behind the changes, without clarifying what will be expected of people in the new reality — the resistance is fierce, even when the changes are objectively beneficial. An engineer whose role is expanded from narrow specialist to broad creative director may objectively benefit from the change. But if the change is imposed without explanation, without consultation, without a clear articulation of what the new role entails and how it will be evaluated, the engineer experiences not promotion but displacement. The feeling is real, and the feeling determines behavior. The engineer who feels displaced underperforms, disengages, or leaves. The organization loses precisely the judgment and domain expertise that the new strategy canvas values most.
Segal's account of the Trivandrum training illustrates the mechanics of fair process in practice, though he does not use Kim's terminology. When he flew to India in February 2026, his purpose was not merely to teach his engineers how to use Claude Code. It was to be present for the transformation — to stand in the room while the ground shifted, to explain why the shift was happening, to engage his team in the process of redefining their own roles, and to clarify what would be expected of them in the new reality.
The decision to be physically present was, in Kim's framework, an act of engagement. Segal could have conducted the training remotely. He could have sent a tutorial and a Slack message. The choice to fly to Trivandrum, to spend a week working alongside his engineers, to witness their oscillation between excitement and terror and to meet that oscillation with his own presence — this choice communicated something that no remote training could: that the people affected by the change mattered enough to warrant the leader's physical presence during the most disorienting moment of their professional lives.
The senior engineer who spent his first two days in terror — the one Segal describes as oscillating between excitement at the speed and fear at the implications for his identity — arrived at a productive resolution by Friday. The resolution was not that his fears were unfounded. They were founded. His implementation skills, the skills that had defined his professional identity for decades, were being commoditized in real time. The resolution was that the remaining twenty percent of his work — the judgment, the architectural intuition, the taste — was more valuable than the eighty percent that had been automated. But he could only arrive at this resolution because the process allowed him to. He was engaged in the transition, not subjected to it. He was given the explanation — AI is expanding what each of us can do, not eliminating who we are. And he was given expectation clarity — here is what the new role looks like, here is what will be valued, here are the standards.
Kim and Mauborgne's research shows that fair process generates what they call "voluntary cooperation" — the willingness of people to go beyond the minimum required, to invest discretionary effort, to commit to outcomes rather than merely comply with instructions. The distinction between compliance and commitment is the distinction between an organization that deploys AI and an organization that is transformed by it. Compliance produces the minimum: people use the tools because they are told to, generate the outputs because they are measured on them, and withhold the discretionary judgment and creativity that the new canvas values most. Commitment produces the maximum: people invest themselves in the new way of working because they believe in it, because they understand it, because they helped shape it.
The difference in organizational performance between compliance and commitment is not marginal. Kim and Mauborgne's studies across industries show that organizations practicing fair process outperform those that do not by significant margins on measures of innovation, employee retention, and strategic execution. The mechanism is trust. Fair process builds trust, and trust is the lubricant that allows organizations to move through change without the friction of resistance, suspicion, and the hoarding of information that characterizes low-trust environments.
Segal's decision to keep and grow his team — the decision he describes as the Beaver's choice, leaving margin on the table rather than converting the twenty-fold productivity gain into headcount reduction — is a fair process decision with strategic consequences. The decision communicated engagement: the team's contribution was valued enough to retain. It communicated explanation: the strategy is to expand capability, not to reduce cost. And it communicated expectation clarity: the team's role is growing, not shrinking.
The alternative, which Segal describes considering, was the path of pure efficiency. If five people can do the work of a hundred, why not have five? The arithmetic was clean. The quarterly numbers would have improved immediately. The board would have been satisfied. And the organization would have lost something that no quarterly report measures: the trust of the people who remained, the institutional knowledge of the people who were released, and the signal — sent to every remaining employee — that the organization valued efficiency over loyalty.
Kim and Mauborgne's research predicts the consequences of that signal with uncomfortable precision. Organizations that pursue efficiency without fair process — that reduce headcount without engagement, explanation, or expectation clarity — experience a predictable cascade. The most talented people leave first, because they have the most options. The people who remain become risk-averse, because they have learned that investment in the organization is not reciprocated. Innovation declines, because innovation requires the willingness to fail, and the willingness to fail requires the trust that failure will not be punished with elimination. The organization becomes leaner and more brittle simultaneously — optimized for the present quarter and structurally weakened for the quarters that follow.
This is not speculation. It is the documented outcome of every major restructuring wave in modern corporate history, from the downsizing campaigns of the 1990s to the efficiency drives that followed the 2008 financial crisis. The pattern is consistent enough to constitute a finding: efficiency pursued without fair process destroys the organizational capability it was intended to enhance.
The AI transition amplifies this pattern because the speed of the change is faster and the stakes are higher. When the restructuring happens over months rather than years, the window for fair process is narrower. Leaders who are themselves disoriented have less time to engage, explain, and clarify. The temptation to act first and explain later is powerful, because the market rewards speed and the technology makes speed possible and the quarterly pressure is relentless.
But the cost of acting without fair process is also amplified. In the AI era, the organizational assets that matter most — judgment, domain expertise, taste, creative direction — are precisely the assets that reside in people. They cannot be captured in a database. They cannot be transferred to a successor in a handoff document. They cannot be replicated by the AI tools that made the restructuring seem attractive. They walk out the door when the person who possesses them loses trust in the organization.
The practical implications for leaders navigating the AI transition are specific and actionable, because Kim and Mauborgne's fair process principles are not abstract values. They are operational practices.
Engagement means including affected people in the decisions about how AI will change their roles. Not informing them afterward. Including them in the deliberation. This does not mean every decision is democratic. It means every affected person has the opportunity to contribute their perspective before the decision is made, and the decision-maker is visibly influenced by what they hear. The engineer in Trivandrum who was oscillating between excitement and terror needed to voice the terror. The voicing was itself part of the resolution. Suppressing it — telling him to get on board, to adapt, to stop being afraid — would have driven the fear underground, where it would have expressed itself as passive resistance, withholding of effort, or departure.
Explanation means articulating the strategic logic behind the changes with enough candor that affected people can evaluate it. Not corporate euphemisms. Not the sanitized language of press releases. The actual reasoning: here is what the technology can do, here is what it means for our competitive position, here is why we are changing these roles, here is what we believe the new arrangement will produce. The explanation must include an honest accounting of the losses. If a skill is being devalued, saying so directly — and acknowledging the investment the person made in developing that skill — is more respectful than pretending the change is costless.
Expectation clarity means defining the new reality with enough specificity that people can orient themselves within it. What does the new role entail? What will be measured? What skills should be developed? What does success look like in the new arrangement? The absence of this clarity — the vague assurance that "things will work out" or "we'll figure it out together" — is not reassuring. It is destabilizing. People who do not know what is expected of them cannot perform, no matter how willing they are.
The organizations that practice these three principles will not be the ones with the best AI tools. They will be the ones whose people trust the process enough to invest themselves in the outcome. In the AI era, where the most valuable assets are human — judgment, creativity, the capacity to ask the right question — trust is not a soft skill. It is the hardest asset on the balance sheet. And fair process is how it is built.
---
Kim and Mauborgne's research on tipping point leadership addresses a problem that has haunted every large-scale organizational transformation: how to achieve disproportionate results without disproportionate resources. The conventional approach to institutional change is the frontal assault — massive investment, comprehensive restructuring, the mobilization of every available resource against the target. That approach is expensive, slow, and, according to decades of change management research, successful in fewer than thirty percent of cases.
Tipping point leadership rejects the frontal assault. It is based on the observation that in every organization, a small number of factors exert a disproportionate influence on performance and behavior. Identify those factors. Concentrate resources on them. Allow the results to cascade through the organization. The tipping point leader does not try to change everything at once. She identifies the leverage points where small interventions produce large effects, and she focuses relentlessly on those points while ignoring the vast majority of the organizational landscape that will change on its own once the tipping factors shift.
Kim and Mauborgne identify four organizational hurdles that block strategic change: cognitive, resource, motivational, and political. The cognitive hurdle is the failure to see the need for change — the collective assumption that the current approach is working well enough. The resource hurdle is the belief that change requires more resources than are available. The motivational hurdle is the failure of the people who must execute the change to commit to it. The political hurdle is the resistance of entrenched interests that benefit from the status quo.
The AI transition presents all four hurdles simultaneously, at a scale and speed that few organizations have confronted. The cognitive hurdle is evident in every company that continues to plan based on pre-December 2025 assumptions — the organizations that have not yet absorbed the magnitude of the shift, that treat AI as an incremental productivity tool rather than a structural transformation of their competitive landscape. Segal's advice to these organizations is blunt: stop. Throw the plan away. Start from the world that actually exists. The advice is correct, but the cognitive hurdle that prevents organizations from taking it is formidable. Acknowledging that the world has changed requires acknowledging that the expertise and assumptions that organized the previous world are no longer sufficient. That acknowledgment is psychologically expensive, and most organizations avoid it until the cost of avoidance exceeds the cost of acknowledgment — by which point the window for proactive response has narrowed considerably.
Kim and Mauborgne's prescription for the cognitive hurdle is direct experience. Rather than presenting data, reports, or arguments — all of which can be filtered, rationalized, or ignored — the tipping point leader arranges for the decision-makers to experience the reality directly. In their signature case study of William Bratton (then chief of the New York Transit Police, later New York City's police commissioner), the leader required his senior officers to ride the subway — the actual subway, at night, in the districts they were responsible for — so that they could see and feel the conditions that the crime statistics described abstractly. The direct experience bypassed the cognitive filters that data could not penetrate.
The analogy to the AI moment is precise. The leaders who have experienced the orange pill — who have sat down with Claude Code and described a problem in plain English and watched a working solution emerge in minutes — do not need to be convinced that the world has changed. The experience is sufficient. The leaders who have not had that experience, who have evaluated AI through reports and demonstrations and quarterly presentations, are still operating behind the cognitive hurdle. Their understanding is abstract. Their plans reflect the abstract understanding. And their organizations are adapting at the speed of abstraction rather than the speed of reality.
Segal's Trivandrum trip was an act of tipping point leadership applied to the cognitive hurdle. By insisting on physical presence, by working alongside his engineers rather than presenting to them, by ensuring that every person on the team experienced the tool directly rather than learning about it secondarily, he created the conditions for the cognitive hurdle to fall. The experience could not be filtered. The twenty-fold productivity multiplier was not a number in a slide deck. It was the lived reality of a week of work, felt in the body, visible in the output, undeniable in the way that only direct experience is undeniable.
The resource hurdle in the AI transition takes a counterintuitive form. The conventional resource hurdle is the belief that change is too expensive. The AI transition's resource hurdle is the opposite: the change is so cheap that organizations fail to take it seriously. One hundred dollars per person per month for Claude Code. The cost is so low that it does not trigger the institutional attention that major investments receive. There is no budget review. No board approval. No cross-functional committee. The tool enters the organization through individual subscriptions, informal adoption, and the grassroots enthusiasm of early users — and the institutional response is correspondingly informal, which means there is no institutional response at all.
The tipping point approach to the resource hurdle is not to increase investment but to redirect existing resources toward the highest-leverage applications. Kim and Mauborgne's concept of "hot spots" — the activities within an organization that consume few resources but produce high performance gains — is directly applicable. In the AI transition, the hot spots are the individuals and teams whose work involves the highest proportion of mechanical implementation relative to creative judgment. These are the people for whom AI will produce the largest productivity gains, and concentrating initial deployment on these hot spots generates visible, measurable results that cascade through the organization more effectively than a comprehensive rollout.
The motivational hurdle is the one that fair process addresses most directly, but tipping point leadership adds a specific technique: the identification and mobilization of "kingpins" — the individuals within the organization who exert disproportionate influence on the behavior and attitudes of their peers. In every organization, a small number of people are watched, listened to, and emulated by a much larger number. If the kingpins adopt the change, the organization follows. If the kingpins resist, the organization stalls.
In the AI transition, the kingpins are typically the senior technical leaders — the engineers and architects whose expertise has been the organizational gold standard and whose response to AI sets the tone for everyone else. If the senior architect embraces AI and demonstrates that it enhances rather than replaces his expertise, the entire technical organization receives permission to follow. If the senior architect dismisses AI as a toy, or refuses to engage with it, or retreats into the defensive posture of the Luddite, the technical organization follows that signal too.
Segal's account of his senior engineer in Trivandrum is a kingpin story. The engineer's oscillation between excitement and terror over the first two days was visible to the entire team. His resolution on Friday — the recognition that his judgment and architectural intuition were more valuable, not less, in the AI-augmented environment — was equally visible. The team watched him process the change and arrive at commitment, and his commitment gave them permission to commit. Had he arrived at resistance instead, the entire training would have produced compliance rather than commitment, and the twenty-fold multiplier would have been a fiction on a spreadsheet rather than a reality in the output.
The political hurdle in the AI transition operates at two levels: within organizations and across institutions. Within organizations, the political hurdle is the resistance of managers and leaders whose authority is derived from the old competitive factors. The engineering manager whose authority rests on the size of her team has a political interest in maintaining team size, regardless of whether the team's output could be achieved with fewer people using AI tools. The resistance is not irrational. It is a rational response to an incentive structure that rewards headcount as a proxy for organizational importance. Tipping point leadership addresses this hurdle by changing the incentive structure — by rewarding output and impact rather than headcount, by aligning organizational metrics with the new strategy canvas rather than the old one.
Across institutions, the political hurdle is the gap between the speed of AI capability and the speed of institutional adaptation — the gap that Segal identifies as the most dangerous failure of the current moment. Educational institutions, regulatory bodies, professional licensing organizations, and labor market structures are all operating on timelines that predate the AI threshold. The curriculum that trains tomorrow's workforce was designed for yesterday's competitive landscape. The regulatory framework that governs AI deployment was written before the deployment it governs had begun. The professional norms that define expertise and credentialing in knowledge-work fields reflect a world in which the skills those credentials certify were scarce and valuable.
Kim and Mauborgne's tipping point framework asks: where are the leverage points? Where can a small intervention produce a disproportionate cascade?
In education, the leverage point may be the assessment system. Segal describes a teacher who stopped grading essays and started grading questions. The shift redefined what the classroom valued: not the capacity to produce correct answers but the capacity to ask generative questions. If assessment systems at the institutional level made a similar shift — if professional certifications tested judgment rather than knowledge, if university admissions valued the quality of questions asked rather than the correctness of answers given — the cascade through the educational system would be rapid and profound. The curriculum would follow the assessment, because it always does. Teachers would teach what is measured. Students would learn what is rewarded. A single change in the assessment system could redirect an entire educational institution toward the capabilities the AI era values most.
In regulation, the leverage point may be the demand side rather than the supply side. Segal observes that existing regulatory frameworks — the EU AI Act, American executive orders — address what AI companies may and may not build. They do not address what citizens, workers, and students need to navigate the AI environment wisely. A regulatory framework that required AI literacy as a component of professional licensing — that made the ability to direct AI tools a condition of practicing law, medicine, engineering, or accounting — would cascade through the professional education system with the efficiency that licensing requirements always produce. Professionals would learn what their licenses required. Schools would teach what professionals needed to know. The demand-side intervention would reshape the supply side more effectively than any supply-side regulation.
In organizational design, the leverage point may be the meeting structure. The Berkeley researchers found that AI-augmented work colonized the pauses that had previously served as moments of cognitive rest. The most effective organizational intervention may not be a policy about AI use but a restructuring of the meeting calendar that protects time for the human interactions — mentorship, debate, the slow, friction-rich conversations that build judgment — that AI cannot replicate and that the new strategy canvas values most. An hour of protected, AI-free conversation between a senior architect and a junior engineer may produce more long-term organizational value than an hour of AI-augmented coding. The meeting structure that protects that hour is a tipping point intervention: small, specific, and disproportionate in its effects.
Kim and Mauborgne's framework does not promise that tipping point interventions are easy to identify or painless to implement. It promises that they exist — that in every organizational and institutional system, a small number of factors exert disproportionate influence, and that concentrating effort on those factors produces results that comprehensive, resource-intensive, frontal-assault approaches cannot match.
The AI transition is too fast and too large for the frontal assault. No organization has the resources to rebuild every process, retrain every worker, and redesign every structure simultaneously. The institutions that navigate the transition successfully will be the ones that find their leverage points and act on them with the precision and the discipline that tipping point leadership demands. The dam does not require rebuilding the entire riverbank. It requires placing the right sticks at the right points in the current — the points where a small structure redirects a large flow.
The sticks are specific. The points are identifiable. The current is powerful and accelerating. And the time for placement, as every chapter in this analysis has argued, is not next year or next quarter. It is now.
On the morning of February 24, 2026, IBM's stock suffered its largest single-day decline in more than a quarter century. The trigger was a blog post. Anthropic had published a description of Claude's ability to modernize COBOL — the programming language that, despite being older than the moon landing, still processes an estimated ninety-five percent of ATM transactions and eighty percent of in-person financial transactions worldwide. The language was so deeply embedded in the infrastructure of global finance that the shortage of programmers who could maintain it had been classified as an operational risk by banking regulators. IBM had built a significant consulting business around this scarcity. Then the scarcity evaporated in a paragraph.
IBM's loss was one data point in a larger pattern. In the first eight weeks of 2026, Workday fell thirty-five percent. Adobe lost a quarter of its value. Salesforce dropped twenty-five percent. Autodesk twenty-one. Figma nineteen. The aggregate destruction was measured in the trillions. The market had given the event a name — the SaaS Death Cross — borrowed from the technical analysis lexicon, where a death cross signals that short-term momentum has dropped below the long-term trend, indicating a structural reversal.
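The borrowed term has a precise mechanical definition worth pausing on. In technical analysis, a death cross is conventionally the moment a short-term moving average (often the 50-day) closes below a long-term one (often the 200-day). A minimal sketch of that test, using made-up prices and short illustrative windows rather than any real market series:

```python
def moving_average(prices, window):
    """Trailing simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def is_death_cross(prices, short=5, long=20):
    """True if the short MA crossed below the long MA on the latest bar.

    Window lengths here are illustrative; convention uses 50 and 200 days.
    """
    if len(prices) <= long:
        return False  # not enough history to compare both averages
    prev_short = moving_average(prices[:-1], short)
    prev_long = moving_average(prices[:-1], long)
    cur_short = moving_average(prices, short)
    cur_long = moving_average(prices, long)
    # A cross requires the short average to have been at or above the long
    # average on the prior bar and strictly below it now.
    return prev_short >= prev_long and cur_short < cur_long

# A flat series that begins declining eventually drags the short average
# under the long one, triggering the signal.
prices = [100.0] * 20 + [100 - i * 3 for i in range(1, 11)]
crossed = any(is_death_cross(prices[:i]) for i in range(21, len(prices) + 1))
print(crossed)  # → True
```

The structural point the metaphor carries is visible in the code: the signal fires not on a bad day but when recent momentum falls below the long-run trend, which is exactly the claim the market was making about the SaaS business model.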
Segal documents the Death Cross in The Orange Pill as an economic earthquake. Kim and Mauborgne's framework reveals it as something more diagnostic: a market-wide signal that the strategy canvas of the software industry has been redrawn, and that the companies on the wrong side of the redrawing are being repriced accordingly.
The signal is not that software is dead. The signal is that a specific theory of software value — the theory that code is the product, that the difficulty of writing code is the source of competitive advantage, that the cost of implementation is the barrier that protects incumbents — has been falsified. The companies that lost the most value were the companies whose business models were most dependent on that theory. The companies that lost the least were the companies whose value resided above the code layer, in the ecosystems, data assets, institutional relationships, and workflow integrations that AI could not replicate in a conversation.
Kim and Mauborgne's framework distinguishes between two kinds of competitive advantage: advantages based on factors the industry currently competes on, and advantages based on factors the industry has not yet recognized. The first kind is fragile. It persists only as long as the factors persist. When the factors are devalued by technological change, the advantage vanishes. The second kind is durable. It consists of assets and capabilities that retain their value regardless of which factors the industry competes on, because they are structural prerequisites for competing on any set of factors.
The Death Cross sorted the software industry into these two categories with brutal precision. The companies in the first category — the ones whose advantage was primarily in the execution of code — saw their valuations collapse. Their moats had been drained. The scarcity that had protected them, the scarcity of engineering talent and the cost of development cycles, had been eliminated by a tool that cost a hundred dollars a month. The companies in the second category — the ones whose advantage was in the ecosystem layer — experienced turbulence but not destruction. Their value was shaken by the market's panic, but the underlying assets remained intact, because those assets were not code. They were the accumulated intelligence of decades of deployment: the data that captured how millions of organizations actually operated, the integrations that connected those organizations' workflows into coherent systems, the institutional trust that made enterprises willing to build their operations on a platform they could not easily leave.
Kim and Mauborgne's NVIDIA case study, published through INSEAD in 2025, provides a parallel illustration at the infrastructure layer. NVIDIA had ridden the AI revolution to become one of the most valuable companies in the world. Its competitive advantage was in processing power — the GPU chips that made AI training possible. But the case asked whether that advantage was durable or fragile. If the advantage was in the hardware itself, it was fragile: Google was designing custom TPU chips, Microsoft was developing its own AI silicon, Amazon was investing in Graviton processors. If the advantage was in the ecosystem — the CUDA software platform, the developer community, the toolchains, the institutional relationships — it was durable, because ecosystems resist replication in a way that hardware specifications do not.
The same question, applied to every software company that survived the Death Cross, yields the same diagnostic. Is the value in the code or in the ecosystem? The answer determines whether the company is swimming in a red ocean that is draining or floating in a blue ocean that is still rising.
Kim's four actions framework, applied to the Death Cross, reveals the strategic logic beneath the market panic. The Death Cross eliminated the factor of implementation cost as a competitive dimension. It reduced the importance of technical team size as a signal of organizational capability. It raised the importance of ecosystem depth, domain expertise, and the capacity to direct AI tools toward genuine value creation. And it created a new competitive dimension: the ability to integrate AI into existing workflows in ways that enhance rather than disrupt the value the platform already provides.
The companies that will emerge strongest from the Death Cross are the ones that read this four-actions map correctly. They will not try to compete with AI on code generation — a red ocean move that would pit them against a force they cannot outrun. They will invest in the factors that the new canvas values: deeper domain expertise, richer data assets, stronger institutional relationships, and the judgment to deploy AI as a strategic tool rather than a defensive reflex.
Segal observes in The Orange Pill that the companies whose moats are shallower when software is easy to build must move faster to survive. Kim's framework specifies what "faster" means in strategic terms: it means redrawing the strategy canvas before the competitors do. It means identifying the factors that will define the next era of competition and investing in them before those factors become obvious. It means, in the language of blue ocean strategy, swimming out of the red ocean before the water becomes lethal.
The Death Cross, then, is not a market correction. It is a market migration. Value is moving from the execution layer to the judgment layer, from the code to the ecosystem, from the factors that AI can replicate to the factors that AI makes more valuable precisely because it can replicate everything else. The migration will continue. The companies that follow it will grow. The companies that resist it will discover what every red ocean competitor eventually discovers: that competing harder on factors the market has abandoned is the fastest route to irrelevance.
But the Death Cross also reveals something that Kim's framework addresses only indirectly and that Segal's account brings into sharp relief: the human cost of market migration. The trillion dollars of evaporated value was not an abstraction. It represented the retirement accounts, the stock options, the career expectations, and the institutional identities of millions of people who had built their professional lives inside the companies that the market was repricing. The engineer at Adobe whose stock options lost a quarter of their value in a month. The sales representative at Workday whose commission structure was predicated on a growth trajectory that the market had just declared obsolete. The project manager at Salesforce who had spent a decade developing expertise in the platform's implementation methodology, only to watch that methodology's value collapse as AI made the implementation trivially easy.
Kim and Mauborgne's fair process research, discussed in the preceding chapter, provides the framework for addressing this human cost. The organizations that navigate the Death Cross while preserving the trust and commitment of their people will be the ones that practice engagement, explanation, and expectation clarity as the migration unfolds. The organizations that treat the migration as a purely financial event — cutting headcount, restructuring divisions, pivoting strategy without involving the people most affected — will lose the judgment and domain expertise that the new canvas values most.
The Death Cross is a signal. It is not the end of a story. It is the beginning of a strategic migration that will reshape the software industry, and through the software industry, the broader knowledge economy, for decades. The organizations that read the signal correctly will follow the value to its new location. The organizations that misread it will continue to compete in a red ocean that no longer has a floor.
The water is not just red. The water is draining. The blue ocean is elsewhere. And the map to it is drawn not in code but in the strategic judgment that Kim and Mauborgne have spent three decades teaching the world to see.
---
Kim and Mauborgne's framework contains a finding that blue ocean creators consistently prefer to ignore: blue oceans are temporary. Every new market space, however brilliantly conceived and executed, eventually attracts imitators. The value curve that was once unique converges toward an industry consensus. The blue ocean turns red.
Cirque du Soleil's blue ocean lasted roughly two decades before a wave of imitators — theatrical circus shows, dinner-theater hybrids, immersive entertainment experiences — crowded the space it had created. Southwest Airlines' blue ocean lasted even longer, but by the 2010s, budget carriers worldwide had adopted point-to-point routing, rapid turnarounds, and no-frills service models. Yellow Tail's blue ocean in approachable wine was colonized within years by dozens of brands pursuing the same noncustomer segment with the same value proposition. The pattern is consistent across every blue ocean in Kim and Mauborgne's dataset: creation, then imitation, then commoditization, then the return of red ocean competition.
The speed of the cycle varies. Some blue oceans persist for decades when the barriers to imitation are structural — when the ecosystem, the brand, the operational system, or the institutional relationships that sustain the new market space are difficult and time-consuming to replicate. Others turn red within months, when the innovation is primarily in the product rather than the system, and the product can be copied by any competitor with the resources to observe it.
AI accelerates both sides of the cycle. It accelerates creation: blue oceans can now be conceived and launched at the speed of conversation, as the preceding chapters have documented. And it accelerates imitation: the same tools that enable a builder to create a new product in hours enable a competitor to replicate it in hours. The barriers to imitation that historically protected blue oceans — proprietary technology, specialized expertise, the sheer difficulty of building something complex — are the same barriers that AI has demolished for everyone simultaneously.
The implication is that execution-layer blue oceans in the AI era will have the shortest lifespans in economic history. A custom application built by a non-programmer over a weekend can be replicated by another non-programmer the following weekend. A new software product launched by a startup can be cloned by a competitor within weeks. The product-level innovation that was sufficient to sustain a blue ocean for years in the pre-AI era will sustain one for months or weeks in the AI era, because the cost and complexity of replication have collapsed alongside the cost and complexity of creation.
This finding might seem to invalidate blue ocean strategy entirely. If every blue ocean turns red faster than ever, why invest in creating one? The answer lies in a distinction that Kim and Mauborgne make but that practitioners often overlook: the distinction between a blue ocean product and a blue ocean market space.
A product is a thing. A market space is a relationship between a set of needs, a set of buyers, and a set of factors that define how value is created and delivered. Products are easy to replicate. Market spaces are not, because they include dimensions — institutional trust, accumulated data, workflow integration, community relationships, the deep understanding of a specific domain — that cannot be copied by building a competing product. They can only be built through sustained engagement over time.
Salesforce's CRM product can be replicated in an afternoon by a competent developer with Claude Code. Segal makes this point explicitly in The Orange Pill. But Salesforce's market space — the twenty years of enterprise data, the hundreds of thousands of integrated workflows, the compliance certifications, the institutional relationships, the community of administrators and developers and consultants who have built their careers on the platform — cannot be replicated in an afternoon. It cannot be replicated in a year. It is the deposit of sustained investment, maintained relationships, and accumulated trust that no AI tool can generate through conversation.
The permanent blue ocean, then, is not a product innovation that remains uncopied. It is a market space sustained by factors that resist replication — factors that are built through time, attention, and the kind of sustained human engagement that AI accelerates but does not replace.
Kim and Mauborgne's framework predicts that the durable competitive advantages in the AI era will cluster around four factors that resist AI-driven replication.
The first is ecosystem depth. An ecosystem is not a product. It is a network of relationships — between the platform and its users, between users and each other, between the platform and the third-party applications and services that extend its capabilities, between the platform and the institutional processes that depend on it. Each relationship in the network strengthens the network as a whole, and the network as a whole creates value that no single product within it could provide. The ecosystem resists replication because it is not a thing that can be copied but a pattern of relationships that can only be grown.
The second is domain intelligence — the accumulated understanding of how a specific domain actually works, derived from years of deployment in that domain. The hospital management platform that has processed millions of patient encounters possesses, in its data and its operational patterns, an understanding of hospital workflow that no competitor can acquire by building a better product. The understanding is embedded in the data, and the data is the product of time. AI can analyze the data. AI can find patterns in the data. But AI cannot generate the data. The data is the record of reality, and reality takes time.
The third is institutional trust. Trust is built through the experience of reliability over time — the experience of a platform that was present during a crisis, that maintained uptime during a surge, that handled a security incident with transparency, that adapted to a regulatory change without disruption. Trust is not a feature. It is not a capability. It is a relationship, and relationships resist compression. The organization that has trusted a platform for ten years will not switch to a competitor that offers a superior product, because the product is not what the organization is buying. The organization is buying the confidence that the platform will be there tomorrow, and next year, and during the next crisis. That confidence can only be built through the sustained experience of being there.
The fourth is the human judgment layer — the capacity of the organization's people to direct AI tools toward genuine value creation rather than mere efficiency. This factor is unique to the AI era, and it is the one that Kim and Mauborgne's 2025 Harvard Business Review article foregrounds with particular urgency. The organizations that use AI to pursue cost reduction within the existing strategy canvas will achieve temporary efficiency gains that competitors can replicate. The organizations that use AI to create new value — to identify unmet needs, to serve noncustomers, to build products and services that the existing market has not imagined — will create market spaces that competitors cannot replicate, because the market spaces are defined not by the AI's output but by the human judgment that directed it.
The permanent blue ocean is not a place. It is a practice. It is the organizational habit of continuous creation — of looking beyond the current market boundaries, identifying the next population of noncustomers, applying the four actions framework to the next set of factors, and building the next market space before the current one turns red.
Kim and Mauborgne describe this practice in their 2023 book Beyond Disruption through the concept of nondisruptive creation — innovation that creates new markets and new jobs without destroying existing ones. The concept has additional urgency in the AI era, because AI's most feared societal consequence is mass displacement. The technology that automates existing work threatens existing livelihoods. The response that Kim and Mauborgne prescribe is not to slow the automation but to accelerate the creation — to build new market spaces faster than the old ones are commoditized, to generate new demand faster than existing demand is automated, to create new roles faster than existing roles are displaced.
This prescription is not naive. Kim and Mauborgne are explicit about the scale of the challenge. They write that the fourth industrial revolution — the convergence of AI, smart machines, and exponential technologies — is on track to trigger leaps in productivity greater than any previous revolution, and that these leaps will come by increasingly replacing a wide swath of existing human jobs. The challenge is unprecedented. The response must be equally unprecedented: a sustained commitment to nondisruptive creation at a scale and speed that matches the scale and speed of the displacement.
Segal's metaphor for this commitment is the beaver. The creature that does not build once but maintains constantly. That studies the river and builds where the current is strongest and the need is greatest. That knows the dam requires daily attention, because the river pushes against every joint, loosens every stick, exploits every gap in the mud.
Kim's framework provides the strategic vocabulary for the beaver's work. The four actions framework tells the beaver what sticks to place and where. The strategy canvas tells the beaver which factors the current is shifting. The noncustomer analysis tells the beaver where the need is greatest. Fair process tells the beaver how to maintain the trust of the ecosystem that depends on the dam. Tipping point leadership tells the beaver where the leverage points are — the places where a small intervention redirects a large flow.
The permanent blue ocean is not a destination. It is the continuous practice of creation, maintained with the discipline that the beaver brings to the dam and the strategist brings to the canvas. The river does not stop. The canvas keeps redrawing itself. The noncustomers keep arriving. The factors keep shifting. And the builder who understands this — who sees the impermanence not as a threat but as the fundamental condition of the work — is the builder who will thrive in the era of artificial intelligence.
Not because the blue ocean lasts forever. Because the capacity to create blue oceans is the one capability that no technology, however powerful, can commoditize. The machine can build anything you describe. The question of what is worth describing — the question of where the next blue ocean lies, who it will serve, and why it matters — belongs to the builder.
It always has. The tools have changed. The question has not.
---
The canvas I drew wrong was my own company's.
Not wrong in the way a bad strategy is wrong — missing the market, misreading the customer, building the feature nobody wanted. Wrong in a more structural way: I had drawn it at all. I had plotted the factors my team competed on, scored us against our competitors, identified the dimensions where we were ahead and the dimensions where we were behind, and felt the grim satisfaction of knowing exactly where we stood in a market I understood.
Kim's work dismantled that satisfaction in a way I did not expect. Not because the canvas was inaccurate. It was precise. That was the problem. A precise map of a market that was about to stop existing is worse than no map at all, because the precision gives you confidence that you know where you are, and the confidence prevents you from looking up.
I had been looking down at the canvas when I should have been looking out at the ocean.
The shift happened during the Trivandrum sprint, though I did not have Kim's vocabulary for it at the time. I was watching my engineers transform — not just their output but their relationship to the work itself — and I kept reaching for strategic frameworks that did not fit. The productivity multiplier was real, but it was not the point. The point was that the factors I had organized my entire team around — technical specialization, implementation speed, the ability to execute on a specification — were becoming table stakes. They still mattered. They just no longer differentiated. The canvas had been redrawn under my feet, and I was standing on the wrong side of the line.
What Kim gave me, working through this analysis, was the language for something I had felt but could not articulate: the difference between competing in a market and creating one. Between swimming harder in red water and finding blue water that nobody else has seen. I had spent my career doing both, but I had never understood the structural distinction clearly enough to know which one I was doing at any given moment. That distinction now feels like the most important piece of strategic knowledge I possess.
The noncustomer framework haunts me the most. When Kim turns the strategist's gaze away from existing customers and toward the people who are not being served, something shifts in how you see the world. You stop asking how to win and start asking who is being left out. The developer in Lagos whom I wrote about in The Orange Pill — the one with the ideas and the intelligence and the ambition but without the infrastructure — she was a noncustomer. Not because she lacked desire. Because the market I operated in had never built a door wide enough for her to walk through.
AI built that door. But Kim's framework insists that the door is not enough. The noncustomer who walks through needs the same fair process, the same expectation clarity, the same institutional support that existing participants receive. She needs the dams. She needs the structures. She needs someone on the other side of the door who sees her not as a metric but as a builder.
I am trying to be that someone. I do not always succeed. The quarterly pressure is real. The board conversation about headcount returns every quarter, and the arithmetic of efficiency is always on the table, clean and seductive. Kim's research on fair process gives me the empirical foundation to resist that seduction — not out of sentiment but out of strategy, because the data shows that organizations that preserve trust outperform organizations that sacrifice it for short-term margin.
But the data is not why I resist. I resist because I have seen what happens when a person discovers they can build something they never imagined building. I saw it in Trivandrum. I saw it at CES. I see it every week in my team. That moment of discovery — the moment the noncustomer becomes the builder — is the blue ocean. Not the product. Not the market space. The person.
Kim gave me the canvas. The orange pill gave me the colors. The work of painting is mine.
A trillion dollars of software value vanished in early 2026 because an entire industry was competing on factors that artificial intelligence had just made irrelevant. Team size, development speed, feature count — the dimensions that organized a three-trillion-dollar market — collapsed overnight. The companies that survived were not the ones that coded faster. They were the ones that had built something code could never replicate.
W. Chan Kim spent decades proving that the greatest growth comes not from outperforming rivals but from making rivalry irrelevant. This book applies his blue ocean framework to the AI revolution with surgical precision — revealing why the noncustomer, not the existing customer, holds the key to the next era of value creation, and why the billions of people AI has just empowered to build represent the largest untapped market in economic history.
When everyone reaches for the same tools, strategy is the only differentiator left. Kim's work is the map to water no one else can see.
A reading-companion catalog of the 29 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that W. Chan Kim — On AI uses as stepping stones for thinking through the AI revolution.