Ikujiro Nonaka — On AI
Contents
Cover
Foreword
About
Chapter 1: The Knowledge-Creating Company Meets the Knowledge-Generating Machine
Chapter 2: Tacit Knowledge — What the Machine Cannot Articulate
Chapter 3: The Spiral in Motion — Four Modes and Their Dynamic Balance
Chapter 4: Socialization Under Threat — What Happens When Shared Experience Disappears
Chapter 5: Externalization Amplified — AI as the Excavator of Tacit Insight
Chapter 6: Combination at Scale — The Mode That Spins Too Fast
Chapter 7: Internalization Interrupted — The Missing Practice of Embodiment
Chapter 8: Ba — The Shared Space That Machines Cannot Create
Chapter 9: Phronesis in the Age of the Amplifier
Chapter 10: The Knowledge Spiral Worthy of Amplification
Epilogue
Back Cover
Cover

Ikujiro Nonaka

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Ikujiro Nonaka. It is an attempt by Opus 4.6 to simulate Ikujiro Nonaka's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The spiral broke before I noticed.

In Trivandrum, watching my engineers build at twenty times their previous speed, I was measuring the wrong thing. I was counting output — features shipped, systems deployed, the visible artifacts of a team discovering superpowers. The dashboards glowed green. The velocity was intoxicating. I wrote about it in *The Orange Pill* with genuine awe, because the awe was genuine.

What I did not measure, because I did not have the vocabulary for it, was what was happening underneath the output. The shared debugging sessions that had quietly transferred architectural intuition from senior to junior — gone, because Claude debugged now. The hallway conversations after a difficult deployment where someone admitted confusion and someone else admitted they'd felt the same thing — fewer, because deployments were no longer difficult in the same way. The slow, unglamorous hours of wrestling with code that didn't work, hours that deposited something in the engineer's nervous system that no tutorial could replicate — optimized away, replaced by fluent machine output that arrived faster than understanding could form.

I could feel the thinning. I described it in *The Orange Pill* through an engineer who lost confidence in her architectural decisions without being able to explain why. I reached for a geological metaphor — layers of understanding that stop forming when the friction stops. The metaphor was the best I had. It was not enough.

Then I encountered Ikujiro Nonaka's framework, and the metaphor became a mechanism.

Nonaka spent decades mapping how organizations actually create knowledge — not store it, not process it, but bring into existence understanding that did not previously exist. His SECI spiral describes four modes of conversion between tacit knowledge (the embodied, felt, hard-to-articulate kind) and explicit knowledge (the codified, communicable kind). The spiral must be whole. Accelerate one mode while the others atrophy, and the organization does not become faster. It becomes deformed. More output. Less understanding. The gap widening with each cycle.

AI turbocharged exactly one mode of that spiral: Combination, the reconfiguration of existing explicit knowledge into new explicit knowledge. That is the twenty-fold multiplier. That is the miracle I witnessed. And Nonaka's framework showed me, with uncomfortable precision, that a miracle in one quadrant can be a catastrophe for the other three — if you are not paying attention.

This book is about paying attention. About understanding, through the lens of one of the twentieth century's most rigorous organizational thinkers, what kind of knowledge AI actually produces, what kind it cannot, and what structures we must deliberately maintain if the spiral is to remain whole enough to be worth amplifying.

The layers must keep forming. Nonaka showed me where to look.

Edo Segal · Opus 4.6

About Ikujiro Nonaka

1935–2025

Ikujiro Nonaka (1935–2025) was a Japanese organizational theorist and professor emeritus at Hitotsubashi University who transformed the field of knowledge management with his insight that organizations create value not by processing information but by generating new knowledge through the dynamic conversion between tacit and explicit understanding. Born in Tokyo, he studied political science at Waseda University before earning his PhD at the University of California, Berkeley. His landmark 1991 Harvard Business Review article "The Knowledge-Creating Company" and the 1995 book of the same name, co-authored with Hirotaka Takeuchi, introduced the SECI model (Socialization, Externalization, Combination, Internalization) — a framework describing how knowledge spirals upward through four modes of conversion between embodied, personal knowing and codified, communicable knowledge. He further developed the Japanese philosophical concept of *ba* (shared context for knowledge creation) and, in his later work, drew on Aristotle's concept of *phronesis* (practical wisdom) to argue that the highest form of organizational leadership is the cultivation of judgment that no system can codify. Named one of the most influential management thinkers of the twentieth century by the *Wall Street Journal* and recipient of the Purple Ribbon Medal from the Emperor of Japan, Nonaka died on January 25, 2025 — just as the generative AI revolution was putting his life's work to its most consequential test.

Chapter 1: The Knowledge-Creating Company Meets the Knowledge-Generating Machine

In the spring of 1991, Ikujiro Nonaka published a short article in the Harvard Business Review that would quietly restructure how an entire generation of organizational theorists understood what companies actually do. The article was called "The Knowledge-Creating Company," and its central claim was deceptively simple: the most important activity of a business enterprise is not the processing of information but the creation of new knowledge. Not the storage of data. Not the optimization of existing processes. Not the retrieval and redistribution of facts already known. The creation — the bringing into existence — of understanding that no member of the organization previously possessed.

The distinction sounds academic until one grasps what rides on it. Information processing — the dominant paradigm in Western management theory from Herbert Simon onward — treats the organization as a machine for taking in data, computing decisions, and producing outputs. The inputs are given. The rules are given. The task is to apply the rules to the inputs efficiently. This is what computers do extraordinarily well. It is also, Nonaka argued across decades of research, a fundamentally incomplete description of what happens inside an organization that actually innovates.

Knowledge creation is something different. It is the process by which an organization generates understanding that was not implicit in any existing dataset, that could not have been computed from available inputs, that emerged through the interaction of people who knew different things in different ways. When Honda's development team for the City car in the early 1980s used the phrase "automobile evolution" to capture an emergent concept about what a compact urban vehicle should be — a phrase that was not a specification but a metaphor, not a data point but a provocation — they were creating knowledge. The concept did not exist in any database. It could not have been computed. It arose from the collision of tacit intuitions held by different members of the team, externalized through figurative language into a form that others could grasp, debate, refine, and eventually build upon.

Nonaka and Hirotaka Takeuchi formalized this insight in *The Knowledge-Creating Company* in 1995, introducing the framework that would become the most influential model in knowledge management: the SECI spiral. The model describes four modes of conversion between two fundamentally different kinds of knowledge — tacit knowledge, the embodied, experiential, difficult-to-articulate knowledge that resides in skilled practitioners, and explicit knowledge, the codified, systematic, readily communicable knowledge that can be captured in documents, databases, and formulas. The four modes — Socialization, Externalization, Combination, and Internalization — describe how knowledge converts from tacit to tacit, tacit to explicit, explicit to explicit, and explicit to tacit, spiraling upward through each cycle into increasingly sophisticated organizational understanding.

The framework was built for a world of human organizations populated by human knowers. Nonaka's entire intellectual architecture assumed that the relevant agents in knowledge creation were people — people with bodies, with histories, with the capacity for empathy and mutual vulnerability that allows tacit knowledge to flow between them. The concept of ba, the shared context of caring and trust in which knowledge creation occurs, was explicitly relational and explicitly human. Knowledge, in this framework, was not information. It was a living process that depended on the quality of human presence.

Then the machines learned to speak.

The arrival of large language models — systems that process, recombine, and generate natural language with extraordinary fluency — represents the most significant challenge to Nonaka's framework since its formulation. Not because these systems invalidate it. They do not. The challenge is subtler and more consequential: AI systems perform certain modes of the SECI spiral with such overwhelming power that they create a gravitational distortion in the spiral itself, accelerating the explicit-knowledge modes while leaving the tacit-knowledge modes untouched or, worse, atrophying them through disuse.

*The Orange Pill* describes this distortion from the builder's perspective. Edo Segal documents a twenty-fold productivity multiplier when his engineering team in Trivandrum adopted Claude Code — a tool that converts natural language descriptions into working software. The multiplier is real. The engineers built features in days that had previously required weeks. The imagination-to-artifact gap, as Segal names it, collapsed to the width of a conversation. These are genuine, measurable, consequential gains.

But what kind of gains are they? Nonaka's framework provides a diagnostic vocabulary that the productivity narrative, for all its honesty about costs, does not quite possess. The twenty-fold multiplier is, in SECI terms, overwhelmingly a Combination gain — a gain in the speed and scale at which existing explicit knowledge (codebases, frameworks, documentation, architectural patterns contained in training data) can be reconfigured into new explicit knowledge (working features, integrated systems, deployed products). The machine takes codified inputs and produces codified outputs. It does so with breathtaking speed and remarkable fluency. And the discourse around this capability — including the thoughtful, self-aware treatment in *The Orange Pill* — tends to describe it in language that conflates processing with creation.

Processing recombines what exists. Creation brings forth what did not exist. The difference is not one of degree. It is categorical. A librarian who reorganizes a collection has processed knowledge. A researcher who reads across that collection and produces a hypothesis that no existing text contains has created knowledge. The distinction matters because the SECI spiral is an engine of creation, not processing, and what AI accelerates is primarily the processing mode.

Nonaka himself addressed this directly in the final years of his life. In a published interview with Norihiro Suzuki of the Hitachi Research Institute, conducted as generative AI was entering mainstream adoption, Nonaka stated with characteristic precision: "No matter how much AI technology advances, the essence of knowledge creation, in which tacit knowledge is the source of new knowledge, will not change." He was not dismissing AI. In the same interview, he acknowledged that "digital technology and AI will serve as an effective support tool for innovation that creates new value." The qualification was deliberate. AI is a tool that supports innovation. It is not an engine that produces it. The engine is the full SECI spiral, which requires the conversion between tacit and explicit knowledge in all four modes. AI operates in one mode — Combination — with extraordinary power, contributes usefully to a second — Externalization — and is structurally absent from the other two.

This distinction echoes a deeper intellectual trajectory in Nonaka's career. He began his academic life studying information processing. At the University of California, Berkeley, his early work was influenced by Herbert Simon's computational model of decision-making, in which organizations are information-processing systems that reduce uncertainty through the application of rules to data. Nonaka's great intellectual turn — the insight that made him one of the most cited organizational theorists of the twentieth century — was the recognition that this model was fundamentally incomplete. Organizations that merely process information can optimize. They cannot innovate. Innovation requires the creation of knowledge that was not implicit in any existing dataset, and this creation depends on the conversion between tacit and explicit knowing — between what can be said and what can only be shown, between what the database contains and what the master craftsman feels in the resistance of the material under her hands.

The irony of the current moment is that AI represents the apotheosis of the information-processing paradigm Nonaka spent his career transcending. Large language models are, at their core, the most powerful information-processing systems ever built. They process explicit knowledge — text, code, data — with a comprehensiveness and speed that make every previous information technology look like a card catalog. And the overwhelming cultural response to this power has been to treat it as knowledge creation. The productivity multiplier looks like creation. The fluent outputs feel like creation. The novel combinations of existing ideas read like creation.

Nonaka's framework insists otherwise. Not because the outputs lack value — they do not — but because the process by which they are produced is not the process by which genuinely new understanding comes into existence. Genuinely new understanding requires the full spiral: the tacit insight that arises from embodied experience, the externalization of that insight into communicable form, the combination of the externalized knowledge with other explicit knowledge, and the internalization of the combined knowledge back into personal, embodied, tacit skill through practice. Each mode feeds the next. The spiral ascends. Remove any mode and the spiral does not slow down. It deforms.

The knowledge-creating company that Nonaka described was an organization designed to enable the full spiral. It was structured not for efficiency but for the quality of knowledge conversion. Its physical spaces encouraged the chance encounters that enable Socialization. Its culture valued the metaphorical and analogical thinking that enables Externalization. Its information systems facilitated the recombination that constitutes Combination. And its practice-based learning culture ensured that explicit knowledge was continuously re-embodied through Internalization.

The AI-augmented company of 2026 is an organization in which one mode of the spiral has been turbocharged while the organizational conditions for the other three modes remain, at best, unchanged — and at worst, are actively eroding under the pressure of the accelerated mode. The Combination engine runs at twenty times its previous speed. The Socialization spaces are being emptied as shared implementation work disappears. The Internalization practices are being bypassed as the machine produces the output the practitioner would have produced through effortful practice. And the Externalization mode, while genuinely enhanced by AI as a conversational partner, depends for its quality on the depth of the tacit knowledge base being externalized — a base that Socialization and Internalization build and that may be shrinking.

This is the diagnostic that Nonaka's framework provides: not a verdict on whether AI is good or bad for organizations, but a precise map of where in the knowledge-creation process the distortion is occurring, which modes are accelerating, which are atrophying, and what the organizational consequences of the resulting imbalance are likely to be. The map does not counsel refusal. Nonaka was explicit that AI is a valuable tool. But it counsels awareness — the kind of structural, process-level awareness that allows an organization to build deliberately for the modes that the technology's gravitational pull would otherwise starve.

In his final public message, delivered via video to the Management Innovation Round Table in Tokyo in August 2024, five months before his death, Nonaka offered what reads in retrospect as a valedictory statement: "Innovation is a collective process of creating new meaning and value for the future. It is brought about by humans, not just by science and technology." The sentence is not anti-technology. It is a calibration. Science and technology contribute. Humans bring about. The distinction between contributing and bringing about is the distinction between a tool that accelerates one mode of the spiral and an engine that requires the full spiral to produce genuinely new understanding. AI contributes. The knowledge-creating organization — the organization that maintains all four modes in dynamic balance — brings about.

Whether this distinction will survive the gravitational pull of the most powerful Combination engine ever built is the question that the remaining chapters of this book attempt to answer.

Chapter 2: Tacit Knowledge — What the Machine Cannot Articulate

In 1966, the Hungarian-British philosopher Michael Polanyi published a short book called The Tacit Dimension that contained a sentence of extraordinary compression: "We can know more than we can tell." The sentence sounds modest. It is, in fact, one of the most consequential claims in the epistemology of the twentieth century, because it asserts that there exists an entire domain of human knowledge that is, by its nature, resistant to explicit formulation — knowledge that cannot be fully captured in words, numbers, diagrams, formulas, or any other system of codified representation. Not knowledge that has not yet been codified. Knowledge that cannot be codified, because the act of codification would destroy the very properties that make it knowledge.

Nonaka built his entire framework on this foundation. The SECI model depends on the recognition that tacit and explicit knowledge are not two points on a single spectrum — with tacit knowledge being merely "not yet articulated" explicit knowledge — but two fundamentally different kinds of knowing that interact through conversion but can never be fully reduced to each other. Tacit knowledge is embodied. It lives in the hands, the nervous system, the patterns of attention that a skilled practitioner has built through years of engaged practice. It is situational — bound to the specific contexts in which it was developed and exercised. And it is, in Polanyi's precise formulation, personal — inseparable from the knower, dependent on the knower's entire history of experience in ways that resist decomposition into transferable rules.

The surgeon whose fingers detect the boundary between healthy tissue and tumor before any imaging system has confirmed it. The software architect who looks at a system diagram and feels, before any analysis confirms it, that a particular coupling will produce failures under load. The experienced teacher who reads a classroom's mood from the quality of the silence in the first thirty seconds and adjusts the lesson plan accordingly. These are not vague intuitions. They are precise, reliable, consequential forms of knowing — knowledge that produces better decisions, better outcomes, better diagnoses than explicit analysis alone. And they share a defining characteristic: the knower cannot fully explain how she knows.

This is the knowledge that makes expertise irreducible to information.

Segal captures the mechanism by which tacit knowledge accumulates in a geological metaphor: every hour spent debugging deposits a thin layer of understanding, and the layers compound over months and years into the solid ground on which expert intuition stands. The metaphor is apt because it conveys several properties that are essential to Nonaka's framework. The deposition is slow. It cannot be accelerated without degrading the product, just as geological sedimentation cannot be hurried without producing a different kind of rock. Each layer depends on the layers beneath it — the understanding deposited in the hundredth hour of debugging rests on the understanding deposited in the first ninety-nine and would not hold without it. And the resulting formation is not a database of explicit rules but a continuous substrate from which the expert can draw without conscious retrieval — a felt sense of how systems behave that informs judgment even when the expert cannot articulate the basis for the judgment.

This geological process is, in Nonaka's SECI vocabulary, Internalization: the conversion of explicit knowledge (error messages, documentation, observed behavior) into tacit knowledge (embodied understanding of system behavior) through the friction of practice. The friction is not incidental to the deposition. It is constitutive. The error that forces the developer to read the documentation, the failed hypothesis that forces the engineer to reexamine her assumptions, the unexpected system behavior that forces the architect to develop a new mental model — these experiences of resistance are the mechanism by which explicit knowledge becomes tacit. Remove the friction and the deposition stops, regardless of how much explicit knowledge flows through the system.

AI removes the friction. Not selectively, but comprehensively. When Claude Code produces working software from a natural language description, the entire sequence of friction-dependent experiences that would have deposited tacit understanding — the debugging, the failed compilations, the unexpected behaviors, the forced consultations with documentation, the slowly built mental models of system interaction — is bypassed. The explicit output exists. The code works. The feature ships. But the practitioner has not undergone the experience that would have converted the explicit knowledge embedded in that code into personal tacit understanding.

Segal documents this with the story of an engineer on his team who noticed, months after adopting Claude Code, that she was making architectural decisions with less confidence than before and could not explain why. The explanation, in Nonaka's framework, is precise: the Internalization mode had been interrupted. The explicit knowledge she needed for architectural decisions was still available — in documentation, in Claude's outputs, in the codebases she worked with daily. But the tacit knowledge that would have allowed her to evaluate that explicit knowledge with the confidence of embodied understanding — the felt sense of how systems fit together, the intuition for what would fail under stress — had stopped accumulating because the experiences that deposit it had been automated away.

The phenomenon has a parallel in medical education that illuminates the mechanism. When laparoscopic surgery began displacing open surgery in the late 1980s, surgeons trained exclusively on laparoscopic techniques were noted to lack the tactile intuition that open surgeons possessed — the ability to feel the boundary between tissue types, to sense the resistance that indicates anatomical structures the imaging cannot resolve, to know through the hands what the eyes cannot confirm. This was not a deficiency in training quality or duration. It was a direct consequence of the removal of tactile friction. The knowledge that lived in the open surgeon's hands was deposited through years of direct, physical, resistance-rich engagement with the body. Laparoscopic surgery replaced that engagement with instrument-mediated interaction, and the tactile knowledge disappeared as a predictable consequence.

*The Orange Pill*'s treatment of laparoscopic surgery emphasizes what was gained — precision, reduced recovery times, new procedural possibilities. These gains were real and consequential. Nonaka's framework does not dispute them. It simply insists on naming what was lost and understanding the mechanism of the loss, because the mechanism reveals something crucial about the conditions under which expertise develops: tacit knowledge is deposited by friction. When friction is removed, the deposition stops, regardless of what other benefits the removal provides.

The implications extend well beyond software development. Consider the legal profession. A lawyer who uses AI to draft briefs produces competent documents that cite relevant precedents, organize arguments in conventional structures, and present analyses that satisfy the formal requirements of legal reasoning. The output is explicit knowledge of high quality. But the lawyer who produced it has not read the cited cases with the depth of attention that would deposit understanding of how those cases relate to the broader evolution of legal doctrine. The felt sense of the law — the tacit dimension that allows a senior litigator to anticipate how a judge will respond to a particular argument, to sense the weakness in an opposing brief before identifying it analytically, to know which precedent matters most in a particular factual context — is deposited through the friction of direct, effortful engagement with the materials. AI-drafted briefs bypass that engagement. The explicit output improves. The tacit foundation does not build.

Nonaka warned about this dynamic decades before AI made it acute. In a 2008 interview with strategy+business, he argued that "companies and leaders who treat knowledge management as just another branch of IT don't understand how human beings learn and create." The statement was directed at the knowledge management movement of the late 1990s, which had attempted to capture organizational knowledge in databases, intranets, and document management systems — treating knowledge as information to be stored and retrieved rather than as a living process of conversion between tacit and explicit forms. The systems captured explicit knowledge with reasonable fidelity. They captured nothing of the tacit dimension. And the organizations that relied on them found that their knowledge bases grew while their innovative capacity did not, because the databases held information without the embodied understanding that would allow practitioners to use that information wisely.

AI represents a more sophisticated version of the same error. Where the knowledge management databases of the 1990s stored explicit knowledge passively, AI processes it actively — recombining, synthesizing, generating new explicit-knowledge configurations with a fluency that the old systems could not approach. The improvement is genuine. But the underlying epistemological confusion is the same: the treatment of explicit-knowledge processing as an adequate substitute for the full knowledge-creation process that includes the tacit dimension.

The most consequential version of this confusion is the one that concerns the development of new practitioners. Senior experts — the engineers and lawyers and doctors who built their tacit knowledge bases through decades of friction-rich practice before AI arrived — possess the embodied understanding that allows them to evaluate AI outputs with the confidence of genuine expertise. They can feel when Claude's code is structurally sound and when it merely compiles. They can sense when the AI-drafted brief has missed the relevant distinction. This tacit knowledge was deposited before the friction was removed, and it continues to function as a reliable basis for judgment.

But what of the junior practitioner who enters the profession after the friction has been removed? She begins her career with Claude as a standard tool. She has never debugged a null pointer exception by hand. She has never spent three hours reading documentation to understand a library that Claude could have summarized in thirty seconds. She has never experienced the specific failure that would have forced her to build the mental model on which architectural judgment depends. Her explicit knowledge may be broader than any previous generation of juniors — Claude gives her access to more information, more patterns, more examples than any human mentor could provide. But her tacit knowledge base is thinner, because the experiences that deposit it have been optimized away.

This is not a hypothetical concern. It is observable now, in organizations that have adopted AI aggressively. The senior engineers who adopted Claude Code in Segal's Trivandrum training brought decades of accumulated tacit knowledge to the collaboration. Their judgment about what to build, how to architect it, where the failure points would emerge — this judgment was the product of thousands of hours of friction-rich practice deposited over careers that predated AI. Claude amplified that judgment by removing the implementation friction that had consumed their bandwidth, allowing them to operate at the level of vision and direction rather than syntax and debugging. The amplification was genuine and productive precisely because the tacit foundation was deep.

The question Nonaka's framework forces is not whether this amplification is valuable — it clearly is — but whether the conditions that built the tacit foundation are being preserved for the next generation of practitioners. If they are not, the spiral enters a degenerative cycle: the current generation's tacit knowledge powers productive collaboration with AI, but the next generation, deprived of the friction that builds tacit knowledge, enters the collaboration with a thinner base, producing outputs that are explicitly competent but tacitly shallow. The generation after that is thinner still. The spiral does not collapse immediately. It erodes, each cycle producing more output from less understanding, until the organization discovers — perhaps in a crisis, perhaps in a competitive encounter with an organization that maintained its tacit knowledge base — that it has been consuming a resource it was not replenishing.

In his Nikkei obituary, Nonaka was quoted with a warning that reads as prescient in the context of generative AI: "We need to develop our human instincts lest we become slaves to numbers and data." The warning was not about the danger of data. It was about the danger of mistaking data for knowledge — of allowing the extraordinary power of explicit-knowledge processing to create the illusion that the tacit dimension is dispensable, when in fact it is the foundation on which all the explicit processing depends for its meaning, its depth, and its capacity to produce genuinely new understanding rather than merely novel recombinations of what already exists.

Chapter 3: The Spiral in Motion — Four Modes and Their Dynamic Balance

The SECI model is often presented as a two-by-two matrix — a static diagram with four quadrants, each labeled with a mode of knowledge conversion. This presentation is convenient for textbooks and PowerPoint slides. It is also misleading, because it obscures the most important feature of the model: the spiral. The four modes are not alternatives to be selected or quadrants to be managed independently. They are phases in a continuous, recursive, ascending process in which each mode feeds the next, each cycle raises the organization's knowledge to a higher level of sophistication, and the dynamic balance between the modes determines whether the spiral ascends productively or distorts into a pathological shape that produces abundance without depth.

Understanding the spiral in motion requires following it through a complete cycle, watching how knowledge converts from one form to another and how each conversion depends on the conversions that precede and follow it. The examples from Nonaka's original Japanese case studies — Honda, Canon, Matsushita — illustrate the full cycle. But the cycle is equally visible in the contemporary cases documented in *The Orange Pill*, where the spiral's dynamic balance is being tested by the introduction of the most powerful Combination engine in the history of organizational knowledge management.

Socialization begins the cycle. Tacit knowledge flows from one person to another through shared experience. The transmission is not verbal. It is not codified. It occurs through co-presence, observation, imitation, and the kind of mutual attunement that develops between people who work together closely enough that they begin to sense what the other knows without it being said. Nonaka's original example was the development of a bread-making machine at the Matsushita Electric Company (now Panasonic) in the late 1980s. The software developer Ikuko Tanaka could not replicate the bread-kneading technique of a master baker through specifications alone. She apprenticed herself to the baker at the Osaka International Hotel, working alongside him day after day, absorbing through her body the specific twisting, stretching motion that produced the desired dough consistency. No document could have transmitted this knowledge. No database contained it. It existed in the baker's hands and the baker's muscle memory, and it transferred to Tanaka only through the shared experience of kneading bread together.

In Segal's account, Socialization is visible in the Trivandrum training room: twenty engineers experiencing the transformation of their working lives together, sharing confusion and excitement in real time, absorbing from each other the tacit sense of what the new tools could and could not do. The room itself — the physical co-presence, the shared meals, the conversations that continued after the formal sessions ended — constituted what Nonaka would identify as an originating ba: a space of shared physical presence in which tacit knowledge could flow through channels that no remote collaboration tool can replicate.

Externalization follows. The tacit knowledge acquired through Socialization must be converted into explicit form if it is to become available beyond the individuals who possess it. This is the most creative and most difficult mode of the spiral, because it requires finding language for what resists language. Metaphor, analogy, hypothesis, and model are the primary instruments of Externalization — figurative representations that do not capture the tacit knowledge completely but create a communicable approximation that others can engage with. Tanaka externalized the baker's kneading technique as "twisting stretch" — a metaphorical description that was neither a precise specification nor a casual metaphor but something in between: an articulation that preserved enough of the tacit insight to guide the engineering team toward a mechanical design that replicated the essential motion.

In Segal's experience, Externalization is the mode where AI demonstrates its most generative capacity as a collaborative partner. When he describes Claude helping him "excavate" ideas from his mind — finding the connection between adoption curves and punctuated equilibrium that he had sensed but could not articulate — the process is Externalization performed through human-AI dialogue. The tacit insight (a felt sense that the adoption speed of AI measured something deeper than product quality) was his, the product of decades of experience building and observing technology adoption. Claude's contribution was to provide the explicit-knowledge connection (the concept of punctuated equilibrium from evolutionary biology) that crystallized the tacit insight into communicable form. Neither the tacit insight alone nor the explicit concept alone constituted the knowledge that was created. The creation occurred in the conversion between them — in the Externalization mode of the spiral.

But the quality of this Externalization depends entirely on the depth of the tacit knowledge being externalized. Deep tacit knowledge — the product of years of friction-rich, embodied engagement with a specific domain — externalizes into genuine insight. The connection between adoption curves and punctuated equilibrium is revealing because the person making the connection possessed the tacit understanding of technology adoption that made the parallel meaningful, not merely clever. Shallow tacit knowledge externalizes into what Segal himself identifies as one of AI's most dangerous failure modes: "confident wrongness dressed in good prose." The passage about Deleuze's "smooth space" that Claude produced for an early draft sounded like insight. It felt like Externalization. But the philosophical reference was wrong in a way that a deeper tacit knowledge of Deleuze would have immediately flagged. The Externalization was plausible rather than genuine — a simulation of the mode rather than an authentic instance of it — because the tacit foundation was insufficient.

Combination is the mode that AI accelerates beyond any previous technology. Existing explicit knowledge is sorted, recategorized, recombined, and synthesized into new configurations of explicit knowledge. In Nonaka's original formulation, Combination was the least creative mode of the spiral — the one that operated entirely in the explicit domain, where knowledge had already been flattened into communicable form. It was the mode of the database query, the literature review, the assembly of existing research into new frameworks. Valuable, but dependent on the other three modes to provide the raw material (through Externalization) and to convert the combined knowledge back into something that practitioners could actually use (through Internalization).

AI has transformed Combination from the least dynamic mode of the spiral into the most powerful. Claude processes explicit knowledge from across the entire range of human intellectual production — scientific literature, legal precedent, codebases, design patterns, philosophical traditions, historical records — and recombines it with a speed and comprehensiveness that no human or team of humans can approach. The twenty-fold productivity multiplier that Segal documents is largely a Combination multiplier. The machine takes explicit knowledge (the builder's natural language description, the existing codebase, the architectural patterns in its training data) and produces new configurations of explicit knowledge (working features, integrated systems, deployed products) at a pace that restructures what organizations can attempt within a given timeframe.

The gains are real. They are also, in SECI terms, constrained to a single mode. Combination without the other three modes is recombination, not creation. It produces novel configurations of existing knowledge without the tacit foundation (provided by Socialization) that would allow those configurations to resonate with genuine understanding, without the creative articulation (provided by Externalization) that would connect them to the felt reality of practice, and without the embodied practice (provided by Internalization) that would convert them from external artifacts into personal skill. The code that Claude produces works. The integration it assembles holds. But the practitioner who receives the output has not undergone the process that would make the knowledge in that output hers — part of her tacit repertoire, available for future judgment, integrated into the felt sense of how systems behave that constitutes genuine architectural expertise.

Internalization closes the spiral and is the mode under greatest pressure. Explicit knowledge must be converted back into tacit knowledge through practice — through doing, failing, adjusting, and doing again until the explicit instruction has been deposited into the body and the nervous system as embodied skill. Reading about how to ride a bicycle produces explicit knowledge of the physics of balance. Riding the bicycle — falling, adjusting, falling less — produces tacit knowledge of how to ride. No amount of explicit instruction can substitute for the practice. Internalization is the geological process that Segal's metaphor describes: each hour of doing deposits a layer, and the layers compound into the foundation on which expert judgment stands.

When AI produces the output that the practitioner would have produced through effortful practice, the Internalization step is bypassed. The explicit knowledge has been generated. The conversion into personal tacit knowledge through the friction of doing has been skipped. The practitioner's explicit output increases — she ships more features, produces more analyses, delivers more results — while her tacit knowledge base flatlines or even erodes, because the experiences that deposit it have been optimized away in favor of speed.

The spiral, in its healthy form, operates as a continuous ascending cycle: Socialization deposits shared tacit knowledge through co-experience; Externalization converts that tacit knowledge into communicable form through metaphor and model; Combination reconfigures the explicit knowledge into new configurations; Internalization converts the new explicit knowledge back into personal tacit skill through practice. Each cycle raises the organization's collective understanding to a higher level. The spiral ascends.

The spiral that AI produces is different. It is a cycle that spins rapidly in the Combination quadrant — producing new configurations of explicit knowledge at unprecedented speed — while the Socialization and Internalization modes lag or atrophy. The result is not a faster spiral. It is a deformed one. Imagine a wheel that is round on one side and flat on the other. It rotates, but it does not roll smoothly. Each revolution produces a jolt where the flat meets the ground. The organization produces more. It understands less. The jolts accumulate until the wheel no longer functions.

Nonaka's intellectual legacy insists that the spiral must be whole. His framework was not a menu from which organizations could select their preferred modes. It was a description of a dynamic system in which each mode depends on the others. Accelerate one mode without maintaining the others and the system does not improve. It distorts. The question for organizations in the AI age is not whether to use AI — the Combination gains are too large and too real to forgo. The question is whether they can maintain the dynamic balance of the full spiral even as one mode accelerates beyond any historical precedent.

The following chapters examine each mode in detail — the specific mechanisms by which AI affects it, the organizational consequences of its acceleration or atrophy, and the deliberate structures that can maintain the spiral's balance against the gravitational pull of the most powerful Combination engine ever built.

Chapter 4: Socialization Under Threat — What Happens When Shared Experience Disappears

Before there were words, there was watching. Before there were manuals, there was apprenticeship. Before there were databases or documentation systems or large language models, the primary mechanism by which human beings transmitted their most important knowledge was proximity — the simple, ancient, irreducible act of being in the same place, doing the same thing, at the same time.

Socialization, the first mode of the SECI spiral, describes the conversion of tacit knowledge from one person to another through shared experience. The mode operates beneath language. It does not require articulation. It does not depend on the knower's capacity to explain what she knows. It depends on co-presence: on the junior sitting beside the senior, watching how she works, absorbing through observation and imitation the patterns of attention, the habits of investigation, the micro-judgments that constitute expertise.

The apprentice in a traditional woodworking shop does not learn to read grain by studying a textbook on wood fiber. She learns it by standing next to the master while he runs his hand along a plank, watching where his fingers pause, feeling the same wood under her own hands, gradually developing the same sensitivity through the accumulation of shared experience. The knowledge is in the master's hands. It transfers to the apprentice's hands. No document mediates the transfer. No explicit instruction captures what is transmitted. The transmission occurs through the body, through time, through the particular quality of attention that shared physical work demands.

Nonaka drew this mode from his observation of Japanese corporate practices that Western management theory had largely overlooked or dismissed as cultural peculiarities. Honda's brainstorming camps — intensive, multi-day sessions where engineers from different departments lived and worked together, sharing meals and sleeping quarters, dissolving the boundaries between formal work and informal interaction — were not team-building exercises or morale boosters. They were engines of Socialization: deliberately designed spaces in which tacit knowledge could flow between participants through the unstructured, trust-rich, physically co-present interactions that formal organizational channels cannot replicate. The engineer from manufacturing and the engineer from design, sharing a late-night conversation over beer after a day of intensive work, might transmit more tacit knowledge about the constraints and possibilities of a new product than a month of formal meetings could achieve.

The mechanism depends on conditions that are specific and non-negotiable. Trust is required — the junior must trust the senior enough to observe without defensiveness, and the senior must trust the junior enough to allow observation of the unpolished, in-process, mistake-rich reality of expert practice rather than the curated, post-hoc, cleaned-up version. Shared vulnerability is required — both parties must be engaged in the same difficulty, exposed to the same uncertainty, navigating the same friction. And time is required — the kind of unhurried, unstructured, unoptimizable time in which tacit knowledge reveals itself through the accumulation of small signals that no efficiency-minded calendar would have allocated space for.

AI threatens Socialization not through malice or design but through the secondary effects of its overwhelming Combination power. Three mechanisms are at work, each observable in the organizations that have adopted AI tools most aggressively.

The first mechanism is the reduction of shared implementation work. Before AI coding assistants, a junior and senior engineer working on the same feature shared the experience of implementation: the debugging sessions, the code reviews, the pair programming that required both participants to be present, attentive, and engaged with the same problem simultaneously. These shared experiences were the primary channel through which the senior's tacit knowledge — architectural intuition, debugging heuristics, the felt sense of code quality — flowed to the junior. The knowledge transferred not through the senior's explanations (though explanations helped) but through the junior's observation of the senior in the act of practicing expertise: watching how she read an error message, noticing which questions she asked first, absorbing the pattern of her attention as she navigated a complex system.

When Claude handles the implementation, the shared work that generated these observation opportunities disappears. The senior no longer debugs alongside the junior, because Claude debugs. The senior no longer reviews the junior's code line by line, because Claude generates the code and the review becomes a higher-level assessment of whether the output meets the specification. The senior's expertise has not become less real. But the occasions on which that expertise is visible to the junior — visible in the act of being exercised, visible in the specific, situated, embodied way that Socialization requires — have been dramatically reduced.

The second mechanism is the dissolution of role boundaries. The Berkeley study documented this explicitly: AI tools blurred the lines between roles, with designers writing code, engineers building interfaces, and individual workers expanding into domains that previously belonged to specialized colleagues. *The Orange Pill* describes the same phenomenon with evident excitement — a backend engineer building user-facing features, a designer implementing complete capabilities end to end. The expansion of individual capability is real and, in many respects, liberating.

But role boundaries, in Nonaka's framework, are not merely administrative conventions. They are the structures within which domain-specific Socialization occurs. The backend team shares tacit knowledge about backend concerns — system reliability, data integrity, performance under load — through the daily practice of working together on backend problems. The frontend team shares tacit knowledge about user experience — interaction patterns, visual hierarchy, the felt sense of how an interface should respond — through the daily practice of building interfaces together. When the boundaries dissolve, the specialized communities of practice that enabled domain-specific Socialization dissolve with them.

The individual who now works across domains may produce more. But she is less likely to develop the deep, domain-specific tacit knowledge that the specialized community would have cultivated, because the community has been replaced by a solo practitioner working with an AI tool. The tool provides explicit knowledge across domains with impressive breadth. It does not provide the tacit dimension — the felt sense of what matters in this particular domain, the intuitions that develop through years of shared practice with others who know the domain deeply — that the community provided.

The third mechanism is the most subtle and perhaps the most consequential: the replacement of human consultation with machine consultation. Before AI, when a junior practitioner encountered a problem she could not solve, she went to a senior colleague. The interaction that followed was not merely an exchange of information. It was a Socialization event. The junior observed how the senior approached the problem — which aspects drew attention first, which questions were asked, which resources were consulted, which possibilities were dismissed and why. The senior's tacit knowledge was on display, available for absorption through the channels of observation and imitation that Socialization requires.

With AI, the junior asks Claude. Claude provides the answer — often a correct answer, often faster than the senior would have provided it. But the Socialization event does not occur. The junior receives explicit knowledge (the solution) without the tacit knowledge (the senior's approach to problems, the patterns of expert attention, the embodied judgment about what matters) that the human consultation would have transmitted. Each such interaction, multiplied across months and years and thousands of practitioners, represents a small reduction in the flow of tacit knowledge through the organization. No single interaction is critical. The cumulative effect is.

This cumulative erosion of Socialization produces a specific organizational pathology that Nonaka's framework predicts: a growing gap between the organization's explicit knowledge (which AI augments continuously) and its tacit knowledge (which Socialization, now weakened, is no longer replenishing at the rate required). The gap is not immediately visible. Explicit knowledge is measurable — lines of code, documents produced, features shipped. Tacit knowledge is not. The organization's dashboards show productivity rising while the tacit foundation quietly thins.

The pathology becomes visible only when the tacit foundation is tested — in a crisis, in an ambiguous situation that requires judgment rather than computation, in a competitive encounter with an organization whose tacit knowledge base is deeper. The organization discovers that its practitioners can produce outputs with impressive speed but cannot evaluate those outputs with the confidence of genuine understanding. They can generate but cannot judge. They can build but cannot diagnose. The senior engineer on Segal's team, the one who spent his first days oscillating between excitement and terror, recognized this instinctively: if the implementation that had consumed eighty percent of his career could be handled by a tool, the remaining twenty percent — judgment, taste, architectural intuition — was everything. But that twenty percent was the product of the eighty percent. The judgment was deposited through the implementation. Remove the implementation and the next generation's judgment does not deposit.

Segal's response — protected mentoring time, structures that preserve human-to-human knowledge transmission — is, in Nonaka's framework, an attempt to build deliberate Socialization ba in an environment where the organic Socialization that previously occurred as a natural byproduct of shared work has been eroded. The recommendation is essential. It is also, by itself, insufficient, because the organic Socialization that shared implementation work provided was not a scheduled event but a continuous condition. Practitioners absorbed tacit knowledge all day, every day, through the ambient experience of working in proximity to others who possessed it. Replacing this continuous condition with scheduled mentoring sessions is better than nothing, but it is a dam made of sticks and mud against a river that has substantially widened.

The deeper solution, which Nonaka's framework suggests but organizational practice has not yet developed, is the deliberate design of shared work that is resistant to AI automation — work that requires human co-presence, shared vulnerability, and mutual engagement with difficulty. Not because such work is more efficient than AI-assisted alternatives, but because it produces the tacit knowledge on which the quality of all AI-assisted work ultimately depends.

The paradox is precise: the more powerful the Combination engine becomes, the more important the Socialization mode becomes, because the quality of the Combination's output depends on the quality of the tacit knowledge that practitioners bring to its evaluation and direction. And the more powerful the Combination engine becomes, the more it displaces the shared work through which Socialization naturally occurs. The solution cannot come from within the technology. It must come from organizational design — from the deliberate, countercultural, possibly inefficient-seeming decision to maintain spaces and practices in which human beings share the experience of difficult work without the mediation of machines. Not because the machines are the enemy, but because the tacit knowledge that makes machines useful is built in their absence.

Chapter 5: Externalization Amplified — AI as the Excavator of Tacit Insight

Of the four modes in the SECI spiral, Externalization is the one most resistant to routine. Socialization has its apprenticeships. Combination has its databases. Internalization has its practice regimens. Externalization has nothing reliable, because its essence is the conversion of what cannot be said into what can — the articulation of tacit knowledge into explicit form through the irreducibly creative instruments of metaphor, analogy, model, and hypothesis. It is the mode where the master baker's kneading motion becomes the phrase "twisting stretch." Where a felt sense of market need becomes a product concept. Where an architect's embodied understanding of how people move through space becomes a blueprint that builders can follow.

Externalization is where knowledge creation is most genuinely creative, and it is the mode where AI's role as a collaborative partner is most interesting, most productive, and most theoretically significant for the future of Nonaka's framework.

The difficulty of Externalization is not that people lack tacit knowledge. Most skilled practitioners possess it in abundance. The difficulty is that tacit knowledge, by definition, resists the explicit formulation that Externalization requires. The surgeon knows where to cut but cannot fully explain the basis for her confidence. The experienced product manager knows that a proposed feature will confuse users but cannot articulate the perceptual mechanism that produces the confusion. The senior engineer feels that an architectural choice will produce failures under load but cannot, in the moment, decompose that feeling into the specific causal chain that would constitute an explicit argument. The knowledge is real. The capacity to externalize it — to find the metaphor, the model, the precise formulation that converts felt understanding into communicable form — is a separate skill, and many practitioners who possess deep tacit knowledge lack it.

This gap between possessing tacit knowledge and being able to articulate it is one of the oldest frustrations in organizational life. Nonaka observed it across decades of fieldwork in Japanese companies: the brilliant engineer who could build anything but could not explain her design rationale to colleagues in other departments. The veteran salesperson who could read a client's mood with uncanny accuracy but could not train junior salespeople in the skill because she could not decompose it into teachable components. The experienced manager who made consistently good decisions under uncertainty but described her decision process as "intuition" — a word that functions, in organizational discourse, as a placeholder for tacit knowledge that has not been externalized.

AI enters this gap with a capability that the original SECI model did not anticipate: the capacity to serve as a conversational partner that helps the human externalize tacit knowledge by providing the explicit-knowledge scaffolding — the concepts, the connections, the vocabulary — that the human's tacit insight needs in order to crystallize into communicable form.

Segal's account of this process is the most vivid in *The Orange Pill*. Working late, struggling to articulate why AI adoption curves revealed something deeper than product quality, he described the problem to Claude in the imprecise, half-formed language that characterizes tacit knowledge seeking explicit expression. He knew what he meant. He could not find the bridge between his felt sense of the phenomenon and a formulation that would make it legible to others. Claude responded with punctuated equilibrium — a concept from evolutionary biology describing systems that remain stable for long periods and then change rapidly when environmental pressure meets latent variation. The concept was not Segal's. The connection between punctuated equilibrium and technology adoption was not something he had previously considered. But the moment he encountered it, the connection crystallized his tacit understanding into explicit form: the adoption speed of AI was not a measure of product quality but of pent-up creative pressure, the accumulated frustration of builders who had spent years translating ideas through layers of implementation friction.

What happened in that exchange? In Nonaka's framework, it was Externalization — but Externalization of a kind the framework had not originally described. In the classical model, Externalization is performed by the knower herself, through the effortful process of finding language for what resists language. Tanaka externalized the baker's kneading technique. Honda's engineers externalized their vision of "automobile evolution." In each case, the person who possessed the tacit knowledge was the person who found the explicit formulation, sometimes aided by dialogue with colleagues but always performing the creative conversion herself.

In Segal's exchange with Claude, the creative conversion was distributed. The tacit knowledge — the felt sense of what the adoption curves meant — was entirely his, the product of decades of building and observing technology markets. The explicit scaffolding — the concept of punctuated equilibrium, drawn from a domain Segal had not been studying — was Claude's contribution, retrieved from its training data and presented in response to the pattern it detected in Segal's description. The Externalization occurred in the space between them: not in Segal's mind alone, not in Claude's processing alone, but in the collision of a human's tacit insight with a machine's explicit-knowledge retrieval. Neither participant produced the externalized knowledge independently. It emerged from the interaction.

This distributed Externalization is genuinely new. It extends Nonaka's framework in a direction that his original formulation, built for a world of exclusively human knowers, could not have anticipated. The extension is theoretically significant because it suggests that AI can participate productively in the SECI spiral — not as a substitute for human knowledge creation but as a catalyst for the specific mode of conversion that is most resistant to routine and most dependent on the availability of diverse explicit knowledge to serve as scaffolding for tacit insight.

Several scholars have attempted to formalize this extension since 2024. The GRAI framework proposed by Böhm and Durst splits each SECI mode into human and machine perspectives, creating eight fields of action that map how generative AI participates in knowledge conversion processes. The GenAI SECI model developed by Ogawa and colleagues positions generative AI as an auxiliary means — not as a new agent but as a tool that helps humans perform knowledge conversion more effectively. Both frameworks recognize that AI's contribution to Externalization is real but derivative: the machine provides the explicit-knowledge resources that facilitate the conversion, but the tacit knowledge being externalized and the creative act of recognizing the right connection remain human.

The distinction matters because it determines the quality of what Externalization produces. When the tacit knowledge being externalized is deep — the product of years of engaged, friction-rich practice — the AI-assisted Externalization produces genuine insight. The connection between adoption curves and punctuated equilibrium was revealing because it illuminated something true about the phenomenon, something that neither the raw data nor the biological concept alone could have disclosed. The insight emerged from the collision of deep tacit understanding with a well-chosen explicit-knowledge frame, and the depth of the tacit understanding is what made the collision productive rather than merely decorative.

When the tacit knowledge is shallow, AI-assisted Externalization produces something that mimics insight without delivering it. Segal documents this failure mode with the Deleuze episode: Claude drew a connection between Csikszentmihalyi's flow state and Deleuze's concept of "smooth space" that was rhetorically elegant and philosophically wrong. The passage sounded like Externalization — it had the structure and the confidence of genuine insight crystallizing into form. But the tacit foundation was insufficient. Neither Segal nor Claude possessed the deep tacit understanding of Deleuze's philosophy that would have flagged the error. The explicit-knowledge scaffolding (the reference to Deleuze) was available. The tacit knowledge that would have evaluated whether the scaffolding was appropriate was not.

This asymmetry — productive when the tacit base is deep, misleading when it is shallow — has a structural implication that connects Externalization to every other mode of the spiral. The quality of AI-assisted Externalization depends on the quality of the tacit knowledge being externalized. The quality of the tacit knowledge depends on Socialization (the transmission of tacit knowledge through shared experience) and Internalization (the conversion of explicit knowledge into tacit skill through practice). If Socialization and Internalization are atrophying — as the previous chapter argued they are — then the tacit knowledge available for Externalization will progressively thin. AI-assisted Externalization will still occur. It will still produce outputs that look like insight. But the ratio of genuine to spurious insight will fall, because the tacit foundation from which genuine Externalization draws its substance will be shallower.

The spiral's interconnection makes this degradation difficult to detect in real time. The externalized outputs — the articulated insights, the crystallized concepts, the metaphors and models that emerge from human-AI dialogue — continue to arrive at a steady pace, because Claude's explicit-knowledge scaffolding is inexhaustible. What changes is not the quantity of Externalization but its quality — the degree to which the externalized knowledge reflects genuine tacit understanding versus plausible pattern-matching between an imprecise human description and an available explicit-knowledge frame. The difference between the two is precisely the difference between Segal's adoption-curves insight (genuine, because grounded in deep tacit knowledge of technology markets) and the Deleuze passage (spurious, because lacking the tacit philosophical understanding that would have prevented the misapplication).

Nonaka's framework suggests a test for this quality: Can the person who produced the externalized knowledge defend it under challenge? Can she trace the connection back to the tacit understanding from which it emerged? Can she explain not just what the insight is but why it is true — what embodied experience, what felt sense of the domain, what accumulated judgment produced the recognition that this particular explicit-knowledge frame was the right one for this particular tacit insight? If she can, the Externalization is genuine. If she cannot — if the insight was accepted because it sounded right rather than because it was grounded in felt understanding — then the Externalization is spurious, regardless of how polished the output appears.

This test is precisely what Segal describes performing when he catches himself almost accepting Claude's smoother, emptier version of an argument about democratization. The prose had outrun the thinking. The explicit formulation was elegant, but the tacit foundation — the personal conviction, the felt sense of what he actually believed about the moral significance of expanding who gets to build — was not there. He deleted the passage and spent two hours at a coffee shop with a notebook, writing by hand until he found the version that was his. Rougher. More qualified. More honest about what he did not know. In Nonaka's terms, he rejected a spurious Externalization and performed a genuine one — at the cost of speed, polish, and the seductive ease of accepting the machine's output.

The organizational implication is that AI-assisted Externalization requires a discipline of verification that previous forms of Externalization did not demand with the same urgency. When a human externalizes tacit knowledge unaided, the quality control is built into the difficulty of the process: finding language for what resists language is hard enough that the result, when it arrives, tends to reflect genuine understanding. The difficulty itself is a filter. When AI assists the process by providing explicit-knowledge scaffolding on demand, the difficulty drops dramatically — and with it, the built-in quality control. Externalization becomes easier, faster, more fluent, and more prone to producing outputs that simulate insight without containing it.

The discipline required is not skepticism toward AI but honesty about the tacit foundation. The practitioner who uses AI to externalize tacit insight must be willing to ask: Is this insight grounded in something I know through experience, or is it a connection that merely sounds plausible? Do I recognize this as true because my embodied understanding confirms it, or because the prose is convincing? Can I defend this under challenge from someone who knows the domain deeply?

These questions are uncomfortable. They slow the process. They introduce friction into a mode that AI has made frictionless. And they are essential, because without them, AI-assisted Externalization degenerates from a genuinely new form of collaborative knowledge creation into the most sophisticated generator of plausible emptiness that organizational life has ever produced — output that looks like knowledge, reads like knowledge, and functions as knowledge until the moment it is tested against the reality it purports to describe.

The promise of AI-assisted Externalization is real: a new mode of knowledge conversion that extends Nonaka's framework into territory he could not have foreseen, enabling practitioners to articulate tacit insights that might otherwise have remained locked inside individual experience. The promise depends, with a precision that admits no exception, on the depth of the tacit knowledge being externalized. Which depends, in turn, on the Socialization and Internalization modes that build and maintain that depth. The spiral is whole, or it produces nothing worthy of the name knowledge.

Chapter 6: Combination at Scale — The Mode That Spins Too Fast

In 1995, when Nonaka and Takeuchi described the Combination mode of the SECI spiral, they characterized it as the most straightforward form of knowledge conversion: the reconfiguration of existing explicit knowledge into new explicit knowledge through sorting, adding, recategorizing, and recontextualizing. A financial analyst who aggregates data from multiple reports into a new market assessment is performing Combination. A researcher who synthesizes findings from several studies into a literature review is performing Combination. A programmer who assembles existing libraries and frameworks into a new application is performing Combination. The mode operates entirely in the explicit domain — codified inputs produce codified outputs — and, in Nonaka's original assessment, it was the least creative of the four modes, the one most amenable to systematization and most dependent on the other modes to supply it with meaningful raw material and to convert its outputs into something practitioners could embody.

Thirty years later, this least creative mode has become the most powerful force in organizational knowledge management, because AI has turned it into something Nonaka's framework never anticipated: a Combination engine of essentially unlimited scale and speed.

The statistics that *The Orange Pill* documents tell a quantitative story. Google reports that twenty-five to thirty percent of its new code is AI-assisted. Microsoft reports similar figures. Industry aggregates suggest that over forty percent of all code produced in 2025 involved AI tools, with projections pointing past fifty percent within months. Anthropic's CEO projected ninety percent AI-written code within a near-term horizon. Whether any specific number proves precise, the direction is unambiguous: the majority of explicit-knowledge artifacts in the software industry — the code, the documentation, the test suites, the configuration files — are being produced through Combination at a pace that doubles and redoubles on timescales measured in months.

But the quantitative story is the less important one. The qualitative story — what happens to the SECI spiral when one mode accelerates beyond any historical precedent while the others do not — is what Nonaka's framework uniquely illuminates.

Consider what Claude does when it produces working software from a natural language description. It takes explicit knowledge from multiple sources — the training data that includes millions of code repositories, documentation sets, and architectural patterns; the user's natural language description, which is itself an explicit-knowledge artifact; the context of the conversation, which provides additional explicit constraints — and recombines these explicit inputs into a new explicit output: working code. The recombination is not trivial. It requires sophisticated pattern-matching, contextual inference, and a fluency with code structures that produces outputs often more elegant than what a typical human programmer would generate. But the operation, however sophisticated, remains within the explicit domain. Codified inputs produce codified outputs. This is Combination.
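The operation can be made concrete with a minimal sketch. The fragment below uses Anthropic's Python client; the model name is a placeholder and the request itself is invented for illustration. What the sketch shows is the epistemological shape of the exchange: every channel into the engine, and every channel out of it, carries codified text.

```python
# A minimal sketch of Combination as it appears at the API boundary.
# Assumptions: the anthropic package is installed, ANTHROPIC_API_KEY is
# set in the environment, and the model name below is a placeholder.
import anthropic

client = anthropic.Anthropic()  # credentials are read from the environment

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model identifier
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            # Explicit input: a codified, natural-language specification.
            "content": "Write a Python function that deduplicates a list "
                       "of orders by order_id, keeping the most recent.",
        }
    ],
)

# Explicit output: codified text, recombined from explicit-knowledge
# sources. Nothing tacit crosses this boundary in either direction.
print(message.content[0].text)
```

Whether the returned code serves the actual need is answered nowhere in this exchange; that judgment is what the rest of the spiral must supply.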

The scale of this Combination is what transforms it from the modest mode that Nonaka described into a force that restructures organizational knowledge dynamics. Before AI, Combination was bounded by human processing capacity. A programmer could consult a finite number of Stack Overflow answers, read a finite quantity of documentation, review a finite set of existing codebases. The explicit-knowledge inputs were limited by human bandwidth. The outputs were correspondingly bounded.

AI removes the bandwidth constraint. Claude's training data encompasses a substantial fraction of all publicly available code, documentation, and technical writing in existence. Its ability to retrieve, cross-reference, and recombine this knowledge in response to a specific request operates at a scale that no human or team of humans can approach. The Combination mode, previously bounded by human processing limits, has become effectively unbounded in its access to explicit-knowledge inputs and its speed of generating explicit-knowledge outputs.

The twenty-fold productivity multiplier that Segal documents in Trivandrum is a measurement of this unbounding. Features that previously required weeks of human Combination — assembling libraries, configuring dependencies, writing boilerplate, debugging integration issues, consulting documentation for APIs and protocols — are produced in days or hours because the machine performs the Combination at a pace that compresses weeks of human explicit-knowledge processing into minutes of computation.

The SaaS Death Cross that Segal analyzes in *The Orange Pill* is the market-level expression of the same phenomenon. When the cost of producing code — the most visible explicit-knowledge artifact in the software industry — approaches zero, the market reprices companies according to what cannot be replicated through Combination alone. A trillion dollars of market value vanished from software companies in early 2026, not because the products ceased to function but because the market recognized that the explicit-knowledge artifacts those products consisted of — the code, the interfaces, the features — were no longer scarce. Any competent practitioner with Claude could reproduce them. What remained scarce was the layer above the code: the judgment about what to build, the institutional trust accumulated over decades of deployment, the ecosystem of integrations and data that constituted genuine organizational knowledge rather than mere explicit-knowledge artifacts.

The market, in other words, discovered the limit of Combination. Code is Combination. Ecosystems are not. Judgment is not. The tacit knowledge that allows a practitioner to evaluate whether a particular feature serves a genuine need, whether an architectural choice will hold under load, whether a product direction aligns with the lived reality of its users — this knowledge is the product of the full spiral, not just one mode. The market repriced accordingly, punishing companies whose value was primarily in Combination (thin applications solving singular problems) and preserving companies whose value resided in the layers that Combination cannot reach (deep ecosystems, institutional trust, accumulated organizational knowledge).

Nonaka's framework predicts this repricing with a precision that market analysts, operating without the SECI vocabulary, struggled to articulate. In the SECI model, Combination is dependent on the other three modes for both its inputs and the conversion of its outputs into genuine organizational knowledge. Combination takes explicit-knowledge inputs and produces explicit-knowledge outputs. But the quality of those inputs depends on Externalization — the creative articulation of tacit insight into explicit form — which in turn depends on the depth of tacit knowledge built through Socialization and Internalization. And the organizational value of Combination's outputs depends on Internalization — the conversion of the explicit outputs back into tacit, embodied skill that practitioners can exercise with judgment and confidence. Without these surrounding modes, Combination is a machine spinning in the explicit domain, producing novel configurations of existing knowledge at breathtaking speed without the tacit foundation that would make those configurations genuinely new understanding and without the embodied practice that would convert them into durable organizational capability.

The organizational pathology this produces has a specific signature: accelerating output with decelerating understanding. The team ships more features but has less confidence in their architectural soundness. The firm produces more analyses but has less conviction about their strategic implications. The individual practitioner generates more code but has less embodied understanding of why the code works, where it might fail, and how it relates to the larger system it inhabits. Output increases. Understanding does not. The gap widens with each Combination cycle that is not accompanied by corresponding cycles of Socialization, Externalization, and Internalization.

This pathology maps precisely onto what Segal describes as his most dangerous moments working with Claude — the moments when "the prose had outrun the thinking," when the explicit output was polished and plausible but the underlying understanding had not kept pace. The smoothness of the Combination output — its syntactic correctness, its structural competence, its surface-level adequacy — conceals the absence of the tacit depth that would distinguish genuine knowledge from competent recombination. Byung-Chul Han's diagnosis of the "aesthetics of the smooth," which Segal takes seriously across Part Three of *The Orange Pill*, is, in Nonaka's vocabulary, a diagnosis of Combination without Internalization: explicit-knowledge artifacts produced at speed, polished to a frictionless finish, lacking the depth that only the full spiral can provide.

The comparison to earlier phases of knowledge management technology is instructive. The knowledge management systems of the 1990s and early 2000s — the intranets, the document repositories, the enterprise search engines — were Combination tools of modest power. They stored explicit knowledge and allowed rudimentary recombination through search and retrieval. Nonaka's criticism of these systems was that they captured explicit knowledge while ignoring the tacit dimension, producing knowledge bases that grew in size while the organization's capacity for genuine innovation did not improve. The critique was validated by the widespread disillusionment with knowledge management initiatives that followed: organizations invested heavily in systems for capturing and sharing explicit knowledge and found that the knowledge thus captured and shared was, on its own, insufficient to generate the understanding and innovation that the investments were supposed to produce.

AI is a vastly more powerful Combination tool than the databases and intranets of the 1990s. It does not merely store and retrieve explicit knowledge but actively recombines it, generating new explicit-knowledge configurations with a fluency and sophistication that the old systems could not approach. The improvement is genuine and consequential. But the underlying epistemological limitation is the same: Combination, however powerful, operates in the explicit domain alone, and the explicit domain alone does not produce the knowledge that organizations need most — the tacit understanding, the embodied judgment, the practical wisdom that determines whether the explicit-knowledge artifacts the Combination engine produces are genuinely useful or merely novel arrangements of existing information.

The risk is not that Combination will fail. It will succeed — is succeeding — at producing explicit-knowledge artifacts of extraordinary quantity and often remarkable quality. The risk is that the overwhelming success of Combination will create the organizational illusion that the other modes are dispensable, that the explicit domain is sufficient, that the spiral can run on one mode alone. This illusion is more seductive than the equivalent illusion produced by 1990s knowledge management, because AI's Combination outputs are vastly better — more fluent, more integrated, more immediately useful. The temptation to treat these outputs as complete, as sufficient, as equivalent to genuine organizational knowledge creation is correspondingly greater.

Nonaka's framework is the corrective. It insists, with a rigor that no amount of impressive output can override, that Combination is one mode of a four-mode process, that it depends on the other three for the quality of its inputs and the organizational value of its outputs, and that accelerating it without maintaining the balance of the full spiral produces not a more productive organization but a distorted one — an organization that generates more while understanding less, that ships faster while knowing less deeply, that fills the market with explicit-knowledge artifacts while the tacit foundation on which their value depends quietly erodes beneath the surface of the productivity metrics.

The mode spins faster than ever. The spiral, to remain whole, must find ways to keep the other modes in balance with a Combination engine that has no natural tendency toward restraint.

Chapter 7: Internalization Interrupted — The Missing Practice of Embodiment

There is a moment in the acquisition of any skill when the explicit instruction stops mattering and the body takes over. The pianist who has practiced a passage a thousand times no longer thinks about the notes. The experienced driver no longer consults a mental checklist before merging into traffic. The senior software architect no longer reasons through every architectural decision from first principles — she feels the right structure, and the feeling is reliable because it rests on thousands of hours of decisions made, evaluated, failed, and revised.

This moment — the moment when explicit knowledge disappears into the body and becomes tacit skill — is what Nonaka calls Internalization, the fourth and final mode of the SECI spiral. Internalization closes the cycle: the explicit knowledge produced through Externalization and Combination is converted back into personal, embodied, tacit knowledge through the irreducibly physical process of practice. Reading the manual produces explicit knowledge of the procedure. Performing the procedure, repeatedly, under varying conditions, with the full engagement of the body and the nervous system, converts that explicit knowledge into something qualitatively different: a tacit capacity that can be exercised without conscious deliberation and that responds to the situational nuances that no manual can anticipate.

Internalization is the geological process. Segal's metaphor — every hour of debugging deposits a thin layer of understanding, the layers compound over years into the solid ground on which expert intuition stands — is not a metaphor at all, in Nonaka's framework. It is a description of the Internalization mechanism operating correctly. Each encounter with a failing system, each debugging session that forces the developer to trace the execution path through unfamiliar code, each unexpected behavior that compels the construction of a new mental model — these experiences are the friction that converts explicit knowledge (the error message, the documentation, the observed behavior) into tacit knowledge (the embodied feel for how systems behave, the architectural intuition that detects structural weakness before analysis confirms it).

The friction is not incidental to the conversion. It is the conversion. Without the resistance of the material — without the error that forces re-examination, the failure that forces revision, the confusion that forces the construction of understanding — the explicit knowledge remains external. It sits in the documentation. It exists in the codebase. It is available for retrieval. But it has not been internalized. It has not become part of the practitioner's tacit repertoire. She can look it up. She cannot feel it.

AI interrupts this process at the precise point where the friction would occur.

Consider the concrete mechanics. A junior developer is building a feature that requires interaction between a database, an API layer, and a user interface. In the pre-AI workflow, she begins writing code. The code does not compile. She reads the error message. The message is cryptic — error messages are written by and for people who already understand the system, and their cryptic quality is itself a form of friction that forces the beginner to develop the interpretive skill that experts exercise without effort. She consults the documentation. The documentation describes the general case; she must figure out how it applies to her specific situation. She tries a solution. It partially works. The partial success generates a new error, more specific this time. She traces the execution path. She discovers a misunderstanding about how the API handles authentication. She revises her mental model. She tries again. By the time the feature works — hours or days later — she has deposited multiple layers of tacit understanding: about the database's behavior under concurrent access, about the API's authentication flow, about the way the user interface handles asynchronous data. These layers will compound with the layers deposited by future projects. In five years, she will look at a system diagram and feel the architectural weakness that a junior colleague cannot see, and the feeling will be reliable because it rests on the accumulated deposits of thousands of hours of exactly this kind of friction-rich practice.

In the AI-assisted workflow, she describes the feature to Claude. Claude produces working code. The code compiles, connects to the database correctly, handles the API authentication, renders the user interface. She reviews the output, confirms that it meets the specification, and ships it. The feature works. The explicit-knowledge artifact is identical — or possibly superior — to what she would have produced through the friction-rich process. But the Internalization has not occurred. The layers have not deposited. The error she never encountered did not force her to develop the interpretive skill. The documentation she never consulted did not build her understanding of the general case. The misunderstanding about API authentication that she never had did not generate the mental model revision that would have deepened her architectural intuition.

The output is the same. The practitioner is different. Specifically, her tacit knowledge base has not grown. The explicit-knowledge artifact exists in the world. The personal, embodied understanding that the friction-rich process would have deposited in her nervous system does not.

Multiply this by thousands of interactions over months and years. Each AI-assisted task that bypasses the friction of implementation represents an Internalization event that did not occur — a layer that was not deposited, a tacit capability that was not built. The individual sessions are trivial. The cumulative effect is not. After two years of AI-assisted development, the practitioner has produced an impressive portfolio of explicit-knowledge artifacts — features shipped, systems deployed, products launched. Her tacit knowledge base has grown only through the shrinking fraction of her work that AI has not yet reached — the judgment calls, the strategic decisions, the occasional encounter with a problem novel enough that Claude's training data cannot resolve it.

The organizational consequence is a workforce that is explicitly productive and tacitly impoverished. The productivity metrics — features per sprint, tickets closed, deployment frequency — look excellent. The tacit indicators — the quality of architectural decisions, the reliability of expert judgment under ambiguity, the depth of understanding that allows practitioners to evaluate AI outputs with genuine confidence — are harder to measure and may be declining.

Nonaka warned against this dynamic in his broader critique of the knowledge management movement. In the 2008 strategy+business interview, he argued that organizations that treat knowledge management as a branch of IT "don't understand how human beings learn and create." The statement was directed at systems that stored explicit knowledge without providing the conditions for its Internalization — the practice, the experimentation, the learning-by-doing that converts what is in the system into what is in the practitioner. The critique applies with even greater force to AI, because AI does not merely store explicit knowledge. It processes and generates it, creating explicit-knowledge artifacts of such quality that the practitioner may never encounter the friction that would have forced Internalization.

The medical analogy Segal introduces — laparoscopic surgery displacing open surgery — illuminates the Internalization interruption with clinical precision. Open surgeons developed tactile tacit knowledge through years of direct, hands-in-body engagement with tissue. Laparoscopic surgeons, trained on instrument-mediated techniques, developed different capabilities but lost the specific tactile dimension that direct contact had deposited. The knowledge that lived in the open surgeon's hands was not transferable through laparoscopic training, because laparoscopic training removed the specific friction — the direct resistance of tissue against skin — through which that knowledge was built.

Segal frames this as ascending friction: the difficulty relocates to a higher cognitive level. The laparoscopic surgeon faces harder challenges of spatial reasoning, instrument coordination, and two-dimensional interpretation of three-dimensional space. Nonaka's framework does not dispute the ascending friction thesis. It adds a qualification: the tacit knowledge deposited by the old friction is different in kind from the tacit knowledge deposited by the new friction. What is gained and what is lost are not the same currency. The open surgeon's tactile intuition and the laparoscopic surgeon's spatial reasoning are both forms of tacit knowledge, both deposited through Internalization, both genuinely valuable. But they are not interchangeable. The ascending friction deposits new layers while the old layers stop forming. Whether the new layers compensate fully, partially, or inadequately for the old is an empirical question that can only be answered over time, as practitioners trained exclusively in the new paradigm encounter the situations where the old tacit knowledge would have been decisive.

The same empirical uncertainty applies to AI-assisted development. The practitioner who uses Claude for implementation and devotes her freed bandwidth to architectural judgment, product strategy, and system design is developing new tacit knowledge at a higher cognitive level. The judgment she exercises about what to build, the strategic intuition she develops through experience directing AI tools, the design sense she cultivates through evaluating outputs rather than producing them — these are genuinely valuable forms of tacit knowledge, deposited through a new kind of friction at a new cognitive altitude. Nonaka's framework acknowledges this possibility. The ascending friction thesis is not incompatible with the SECI model.

But the framework also insists on a question the ascending friction thesis does not fully answer: Is the new tacit knowledge sufficient to evaluate the explicit-knowledge artifacts that the Combination engine produces? The architect who has never debugged a concurrency issue — who has never felt in her hands the specific way that race conditions manifest — may lack the tacit foundation to evaluate whether Claude's concurrent code is genuinely safe or merely syntactically correct. The product strategist who has never built a feature from scratch — who has never experienced the friction of discovering, through implementation, that the specification was incomplete or the user model was wrong — may lack the tacit understanding to evaluate whether the AI-generated feature actually serves the user's need or merely satisfies the specification.

The gap between evaluating at the explicit level (Does the code compile? Does it pass the tests? Does it meet the specification?) and evaluating at the tacit level (Does this feel right? Will this hold under stress? Does this serve the user in the way the specification intended?) is precisely the gap that Internalization fills. Without the deposited layers of friction-rich practice, the practitioner evaluates at the explicit level only — and the explicit level, as Nonaka's entire intellectual career argued, is not where the most important organizational knowledge lives.
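A minimal sketch makes the gap concrete. The fragment below, invented for illustration, is the kind of concurrent code that passes every explicit-level check: it runs, it raises no error, and a casual test may even print the expected number. A practitioner who has debugged races feels what those checks miss.

```python
# A deliberately plausible-looking fragment that contains a data race.
# It satisfies explicit-level evaluation (compiles, runs, no exception)
# while failing tacit-level evaluation (unsafe under contention).
import threading

counter = 0  # shared mutable state, unprotected by any lock

def record_events(times: int) -> None:
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write: three steps, not one atomic step

threads = [threading.Thread(target=record_events, args=(100_000,))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Interleaved read-modify-write cycles can silently lose updates, so the
# total can fall short of 800_000: intermittently, and most often under
# load, which is exactly when it matters.
print(counter)
```

The specification appears to be met and the weakness is invisible at the explicit level. Recognizing it requires the deposited experience of having watched such code fail.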

The prescription that emerges from this analysis is not the rejection of AI-assisted work. The Combination gains are too large and too consequential to forgo. The prescription is the deliberate maintenance of Internalization practices — structured opportunities for practitioners to engage in friction-rich, hands-on, implementation-level work without AI assistance, not as the primary mode of production but as a developmental practice that deposits the tacit knowledge layers on which the quality of all AI-assisted work depends. The analogy is physical exercise: the knowledge worker who uses AI for implementation and practices hands-on coding for professional development is doing for her tacit knowledge base what the office worker who exercises daily does for her physical health — maintaining a capacity that the primary work activity does not build but that the primary work activity depends on.

Nonaka's framework does not prescribe how much Internalization practice is sufficient. That is an empirical question that organizations are only beginning to ask. But the framework is unambiguous about the principle: the spiral requires all four modes. When Internalization is bypassed, the spiral does not accelerate. It erodes, producing more output from less understanding, each cycle spinning on a thinner tacit foundation, until the foundation can no longer support the weight of the explicit-knowledge structures built upon it.

Chapter 8: Ba — The Shared Space That Machines Cannot Create

The Japanese philosopher Kitarō Nishida introduced the concept of basho — roughly, "place" — in the 1920s as a philosophical framework for understanding how experience and consciousness arise within a relational field rather than inside an isolated subject. Nishida was grappling with a problem that Western philosophy had struggled with since Descartes: if the self is the starting point of knowledge, how does the self come to know anything beyond itself? Nishida's answer was to dissolve the boundary between self and world, arguing that consciousness does not exist inside a container but emerges within a shared space — a place of mutual arising in which knower and known, subject and object, self and other are co-constituted rather than separately given.

Nonaka borrowed and adapted this concept for organizational theory, transforming it from a philosophical abstraction into a practical framework for understanding the conditions under which knowledge creation occurs. In Nonaka's usage, ba is the shared context — physical, virtual, or mental — in which knowledge conversion takes place. It is not merely a location. It is a quality of interaction defined by mutual trust, shared purpose, the kind of caring for others and for the work that allows participants to share not just information but experience, vulnerability, and the incompletely formed ideas that constitute tacit knowledge in transit.

The distinction between a space and a ba is the distinction between a room full of people and a room in which knowledge is being created. Both contain humans. Both involve communication. But in one, the communication operates at the explicit level — information is exchanged, decisions are made, tasks are assigned. In the other, something deeper occurs: tacit knowledge flows between participants through channels that formal communication cannot construct. The junior absorbs the senior's patterns of attention. The engineer from manufacturing and the engineer from design develop, through sustained mutual engagement, an intuitive understanding of each other's constraints that no specification document could convey. The team, working together through shared difficulty, develops a collective tacit knowledge — a shared sense of what matters, what works, what feels right — that belongs to no individual but emerges from the quality of their interaction.

Nonaka identified four types of ba, each corresponding to a mode of the SECI spiral. Originating ba is the space of face-to-face interaction where Socialization occurs — where tacit knowledge flows between individuals through shared physical presence, observation, and the embodied empathy that requires bodies in proximity. Dialoguing ba is the space of peer-to-peer conversation where Externalization occurs — where tacit insights are articulated through dialogue, metaphor, and the constructive friction of intellectual exchange. Systemizing ba is the virtual or networked space where Combination occurs — where explicit knowledge from multiple sources is reconfigured into new explicit knowledge through the systematic processes that information technology supports. Exercising ba is the individual-in-context space where Internalization occurs — where the practitioner converts explicit knowledge into tacit skill through the embodied practice of doing the work.

Each type of ba has its own conditions and its own fragilities. Originating ba requires physical co-presence, trust, and unstructured time — conditions that are expensive, inefficient by conventional metrics, and resistant to the optimization that organizational life constantly demands. Dialoguing ba requires intellectual openness, a tolerance for incompleteness, and the willingness to share half-formed ideas that may be wrong — conditions that depend on psychological safety and cultural permission. Systemizing ba requires effective information infrastructure and shared access to explicit-knowledge resources. Exercising ba requires protected time for practice, tolerance for the inefficiency of learning by doing, and organizational patience with the slow deposition of tacit knowledge.

AI participates naturally and powerfully in systemizing ba. This is where its Combination capability operates: reconfiguring explicit knowledge from across vast repositories into new configurations in response to user requests. The systemizing ba that AI creates is richer, faster, and more comprehensive than any previous information technology could provide. A practitioner working with Claude inhabits a systemizing ba of extraordinary density — access to a significant fraction of humanity's codified knowledge, retrievable and recombinant on demand.

AI also contributes meaningfully to dialoguing ba, as the analysis of Externalization in Chapter 5 describes. The human-AI conversation can function as a space of productive dialogue in which tacit insights are articulated through the back-and-forth of description, response, refinement. Segal's account of working with Claude — the late-night sessions in which half-formed ideas were held, reflected back, and gradually crystallized into explicit formulations — describes a dialoguing ba with AI as a conversational partner. The ba is real. The Externalization it produces can be genuine.

But AI cannot create originating ba. And this limitation is not a technical deficiency to be resolved by future model improvements. It is a structural feature of what originating ba requires.

Originating ba depends on shared embodied experience — the physical co-presence that allows tacit knowledge to flow through observation, imitation, and the mutual attunement that develops between people who face the same difficulties in the same space at the same time. The baker and the apprentice kneading dough together. The senior and junior engineers debugging a system side by side. The design team sketching on a whiteboard, where the gesture of drawing — the hesitation, the revision, the spatial proximity that allows each participant to see where the others' attention is directed — creates conditions for tacit knowledge transmission that no virtual medium can fully replicate.

These conditions are constitutive of originating ba, not merely correlates of it. The shared physical experience is not a convenient channel through which tacit knowledge happens to flow. It is the mechanism by which the flow occurs. Remove the physical co-presence and the specific kind of tacit knowledge transmission that originating ba enables does not merely weaken. It disappears, because the channels through which it operated — the observation of embodied practice, the absorption of attention patterns, the mutual vulnerability of shared difficulty — are not available in any other form.

AI is not physically co-present. It does not share vulnerability. It does not face difficulty. It does not attend to the problem in a way that a human observer could absorb and learn from. The junior who watches Claude solve a problem observes a text output, not a process of expert engagement. She sees the answer. She does not see the attention patterns, the diagnostic heuristics, the embodied judgment that a senior human practitioner would have displayed in the process of arriving at the answer. The observation channel that originating ba depends on is absent.

The Trivandrum training room that Segal describes is originating ba at its most vivid. Twenty engineers in the same physical space, experiencing the same transformation, sharing confusion and excitement in real time. The conversations that continued after the formal sessions — over meals, during walks, in the informal spaces where people process intense shared experiences — were not incidental to the training. They were the primary channel through which the deepest learning occurred. The engineers were not merely receiving explicit instruction in how to use Claude Code. They were Socializing: absorbing from each other the tacit sense of what the tool could and could not do, developing through shared practice the collective intuition about when to trust AI output and when to question it, building the shared tacit knowledge that would enable them to function as a team whose judgment was greater than the sum of its individual judgments.

No remote training session could have produced this result. Not because remote sessions lack information content — they can deliver explicit knowledge effectively. But because originating ba depends on the embodied, trust-rich, vulnerability-sharing conditions that physical co-presence uniquely provides. The engineer who admits confusion face-to-face, in a room where others are also confused, creates a micro-moment of shared vulnerability that deepens the trust on which Socialization depends. The same admission in a chat window, even a video call, does not carry the same relational weight. The body language, the shared physical context, the inability to exit the interaction without the social friction of standing up and walking out — these conditions create the sustained engagement that originating ba requires.

The builder working with Claude at three in the morning inhabits a different kind of ba — a human-machine ba that is productive and real but categorically different from originating ba. The systemizing function is extraordinary: Claude provides access to vast explicit-knowledge resources, recombined on demand. The dialoguing function is genuine: the conversation can crystallize tacit insight into explicit form. But the shared embodied experience that originating ba provides is absent. There is no shared vulnerability. There is no mutual attunement. There is no channel for the flow of tacit knowledge through observation of embodied practice. The builder is alone with a very powerful tool, and the aloneness — the absence of the relational dimension that originating ba supplies — has consequences for the quality and completeness of the knowledge created.

Nonaka's framework suggests that organizations in the AI age face a specific architectural challenge: the deliberate construction and maintenance of originating ba in an environment that is structurally hostile to it. AI tools reward individual use. They are available at any hour, from any location, without the coordination costs and social friction that shared physical work entails. The economic logic of the AI-augmented organization pushes toward distributed, asynchronous, individually mediated work — the opposite of the conditions that originating ba requires.

The vector pods that Segal describes — small groups of three or four people whose job is to decide what should be built — are an attempt to construct ba at the organizational level. A well-functioning vector pod operates as both originating ba (the members develop shared tacit understanding through sustained co-engagement with strategic problems) and dialoguing ba (the members externalize their tacit insights through the constructive friction of peer debate). The pod's output — specifications that AI tools execute — is the explicit product of knowledge conversion that depends on the tacit foundation built within the ba. Remove the ba — replace the pod with individual contributors working independently with AI — and the explicit output may continue, but the tacit quality that makes the output strategically sound rather than merely technically competent will erode.

Nonaka described the capacity for creating ba as emerging from "understanding and empathizing with others through daily verbal and nonverbal communication, reading the context to judge the best timing for interaction, and being able to elicit empathy in return." Every element of this description is relational, embodied, and dependent on the kind of sustained mutual engagement that only physical or deeply trusting co-presence provides. AI can participate in ba once it has been created. It can augment the explicit-knowledge resources available within a ba. It can serve as a tool that the ba's participants use to externalize and combine their knowledge more effectively. What it cannot do is create the relational conditions — the trust, the shared vulnerability, the embodied empathy — that constitute ba in the first place.

The organizational challenge, then, is not to choose between AI and ba but to maintain ba as the foundation on which AI-augmented knowledge creation rests. The organizations that will create the most valuable knowledge in the AI age will not be the ones that deploy AI most aggressively. They will be the ones that maintain the richest originating ba — the deepest shared tacit knowledge, the strongest relational trust, the most robust conditions for the human-to-human knowledge conversion that AI cannot perform — and then deploy AI within that ba as a tool that amplifies the knowledge the ba creates. The tool is powerful. The space in which it operates determines whether that power produces genuine knowledge or merely more explicit-knowledge artifacts from an increasingly thin tacit foundation.

The space must be built by humans, maintained by humans, and inhabited by humans who share enough of their embodied experience with each other that the tacit knowledge on which all the explicit knowledge depends can continue to flow, accumulate, and deepen. No amount of computational power can substitute for this. The ba is the dam that the spiral depends on. Without it, the river of Combination floods the lowlands but irrigates nothing.

Chapter 9: Phronesis in the Age of the Amplifier

In the sixth book of the Nicomachean Ethics, Aristotle drew a distinction that has survived twenty-four centuries of philosophical revision because it names something real about the structure of human knowledge. He identified three intellectual virtues: episteme, theoretical knowledge of universal truths that cannot be otherwise; techne, the craft knowledge of how to make things, the skill of production; and phronesis, practical wisdom — the capacity to deliberate well about what is good and beneficial for human beings in particular, contingent, unrepeatable situations.

The distinction matters because these three forms of knowledge have different relationships to codification, and therefore different relationships to artificial intelligence.

Episteme — knowledge of universals, of laws, of regularities — is the form of knowledge most amenable to explicit representation and therefore most accessible to AI. The laws of physics, the theorems of mathematics, the established findings of empirical science: these are explicit, codified, and available in the training data of any large language model. AI does not merely store episteme. It recombines it with extraordinary fluency, producing novel configurations of theoretical knowledge that can be useful, surprising, and sometimes genuinely illuminating.

Techne — productive skill, the knowledge of how to make — is more complex. In its explicit dimension, techne is highly accessible to AI: the documented procedures, the design patterns, the architectural recipes that constitute the codified portion of craft knowledge are well-represented in training data and are precisely what AI recombines when it produces working code from a natural language description. But techne also has a tacit dimension — the experienced craftsman's feel for the material, the seasoned developer's intuition for code quality — that is deposited through Internalization and resists explicit capture. AI can simulate techne at the explicit level with remarkable competence. The tacit dimension eludes it.

Phronesis is different in kind from both. Practical wisdom is not knowledge of universals. It is not productive skill. It is the capacity to perceive what a particular situation requires — to see the relevant features of an unrepeatable moment, to weigh competing goods that cannot be reduced to a common metric, to act well under conditions of genuine uncertainty where no rule, no precedent, no algorithm can determine the right course of action.

Phronesis is the doctor who decides, in the specific circumstances of this patient's life, that the technically optimal treatment is not the right treatment. It is the leader who senses, from cues that cannot be decomposed into data points, that the organization is approaching a crisis of trust and changes course before any metric confirms the intuition. It is the parent who recognizes that the question the child is asking is not the question the child needs answered. Phronesis does not apply rules to cases. It perceives the particular, in its full specificity, and responds with judgment that is at once moral, practical, and situational.

Nonaka turned to phronesis in the later phase of his career because he recognized that episteme and techne, however important, were insufficient to explain the most consequential organizational decisions. In "The Wise Leader," published in the Harvard Business Review with Hirotaka Takeuchi in 2011, he argued that what distinguishes great leaders from merely competent ones is not superior theoretical knowledge or superior technical skill but practical wisdom — the capacity to judge what is good for the organization and its stakeholders in specific, unprecedented circumstances, and to act on that judgment effectively.

The wise leader, in Nonaka's account, possesses six capabilities: the ability to judge goodness, the ability to grasp the essence of particular situations, the ability to create shared contexts for knowledge creation (ba), the ability to communicate the essence of situations to others through metaphor and narrative, the ability to exercise political judgment in the use of power, and the ability to foster practical wisdom in others. Every one of these capabilities is tacit, situational, and dependent on the kind of embodied experience that Socialization and Internalization build over decades of engaged practice.

AI provides episteme at a scale that Aristotle could not have imagined. The entirety of human theoretical knowledge, recombinant on demand, available through natural language conversation. AI provides techne — or at least the explicit dimension of techne — with a competence that transforms what individuals and organizations can produce. These contributions are real and consequential. But AI does not provide phronesis, and the gap between what AI provides and what phronesis requires is not a technical limitation to be resolved by larger models or better training data. It is a categorical boundary.

Phronesis depends on perception of the particular. AI operates on patterns derived from the general. Phronesis requires the capacity to see what is unique about this situation — the features that distinguish it from all apparently similar situations, the contextual details that make the standard response inappropriate, the moral dimensions that no dataset can encode because they emerge from the intersection of competing values in a specific, unrepeatable moment. AI recognizes patterns. Phronesis recognizes exceptions. AI operates where the general applies. Phronesis operates precisely where it does not.

Phronesis depends on moral judgment. Not ethical theory — AI can recombine ethical theories with impressive sophistication. Moral judgment: the capacity to perceive what is good in this situation for these people under these circumstances. This perception is not a computation. It is a form of seeing that develops through the accumulation of moral experience — situations in which the practitioner had to choose between competing goods, live with the consequences of the choice, and develop, over many such cycles, the felt sense of what matters that constitutes practical wisdom. AI has no moral experience. It has no consequences. It has no stakes. It processes ethical reasoning. It does not exercise moral judgment.

Phronesis depends on embodied engagement with the world. The wise leader's capacity to read a room, to sense organizational mood, to perceive the moment when a conversation has shifted from productive to defensive — these perceptions depend on the body, on the nervous system's capacity to register social and emotional signals that no explicit instrument can capture with equivalent fidelity. The embodied dimension of phronesis is not a romantic addition to an otherwise cognitive capacity. It is constitutive. Practical wisdom operates through the body's engagement with the social world, and the body's engagement is built through Socialization — through the sustained physical co-presence with other practitioners that allows the signals of social and organizational life to be absorbed, processed, and converted into the tacit dimension of wise judgment.

*The Orange Pill* makes the argument that judgment has become the scarce resource in an age of abundant execution capability. Nonaka's framework specifies what kind of judgment matters most and how it develops. The judgment that matters is not analytical judgment — the ability to evaluate options against explicit criteria, which AI can perform with reasonable competence. The judgment that matters is phronesis: the ability to perceive what this situation, in its full particularity, requires of us. And phronesis develops through the modes of the SECI spiral that AI threatens most directly.

Socialization builds phronesis by immersing the practitioner in the tacit knowledge of others who have exercised practical wisdom over long careers. The junior who works alongside a wise senior absorbs not just technical skill but moral orientation — the patterns of attention, the sensitivity to stakeholder needs, the instinct for when efficiency must yield to fairness — that constitute the tacit substrate of practical wisdom. When AI reduces the occasions for this Socialization, the channels through which phronesis is transmitted between generations narrow.

Internalization builds phronesis by converting the explicit lessons of moral experience — the observed consequences of decisions, the articulated principles of good practice, the documented cases of judgment exercised well or poorly — into embodied, tacit disposition. The leader who has internalized the consequences of a poorly handled layoff does not merely know, in the explicit sense, that layoffs require careful communication. She feels it — in her body, in her nervous system, in the visceral discomfort that arises when she encounters a situation that rhymes with the past experience. This embodied memory is the foundation of moral judgment. It is deposited through the friction of lived consequence. When AI smooths the friction — when the consequences of decisions are absorbed by automated systems rather than felt by the people who made them — the Internalization of moral experience is interrupted.

The result is a specific kind of organizational deficit: abundant techne, expanding episteme, eroding phronesis. The organization can build anything. It can access any information. But the wisdom to determine what should be built, for whom, and at what cost to competing values — this wisdom is concentrated in the generation that built their phronesis through pre-AI experience and is not being replenished at the rate required, because the conditions that build it are being displaced by the tools that make episteme and techne abundant.

Segal's account of the boardroom conversation about headcount reduction illustrates the stakes of phronesis precisely. The arithmetic was clear: if five people can do the work of one hundred, why not have five? The analytical judgment — the explicit calculation of efficiency and margin — pointed unambiguously toward reduction. The phronesis required to see what the arithmetic could not — the organizational knowledge that would be lost, the Socialization channels that would be severed, the message about human value that the decision would send, the long-term erosion of the tacit knowledge base that short-term margin optimization would produce — this phronesis was exercised against the arithmetic, not with it.

Not every leader will exercise phronesis in that situation. The gravitational pull of the explicit calculation — clean, defensible, quantifiable — is stronger than the gravitational pull of the tacit perception that something important will be lost. Phronesis does not have numbers. It has perception. And in organizational cultures that privilege the quantifiable, perception loses to numbers more often than it should.

Nonaka's late-career insistence on phronesis as the highest form of organizational knowledge was not an academic preference. It was a diagnosis of what organizations most need and most lack. And the diagnosis has become more urgent, not less, as AI provides episteme and techne at unprecedented scale while the conditions for developing phronesis — sustained Socialization with wise practitioners, friction-rich Internalization of moral experience, embodied engagement with the consequences of decisions — are quietly eroding under the pressure of the very efficiency that AI provides.

The question that Nonaka's framework leaves for organizations to answer is whether they will treat phronesis as a resource to be consumed or a capacity to be cultivated. The generation of leaders who possess it — who built their practical wisdom through decades of pre-AI experience — is a finite and depleting resource. Unless the conditions for its development are deliberately maintained, phronesis will become scarcer in each successive generation. The organization's techne will increase. Its episteme will expand. And its wisdom about what to do with all that knowledge and capability will diminish, not because wisdom has been disproved but because the conditions that produce it have been optimized away in pursuit of the efficiency that phronesis itself would have cautioned against.

Chapter 10: The Knowledge Spiral Worthy of Amplification

The argument that has developed across these nine chapters reduces to a single proposition: artificial intelligence has massively accelerated one mode of the knowledge-creation spiral while creating conditions that threaten the other three, and the organizational challenge of the AI age is to maintain the dynamic balance of the full spiral against the gravitational pull of its most powerful mode.

The proposition is not anti-technology. It does not counsel resistance, refusal, or nostalgia for a pre-AI organizational world that had its own severe limitations. The proposition is structural. It identifies where in the knowledge-creation process the distortion is occurring, specifies the mechanisms by which the distortion produces its effects, and points toward the organizational design principles that could maintain the spiral's integrity even as one mode operates at speeds no previous technology could approach.

What would it mean, concretely, for an organization to maintain the full spiral?

It would mean recognizing, at the level of strategic intent, that the productivity multiplier AI provides is a Combination multiplier — a gain in the speed and scale of explicit-knowledge recombination — and that this multiplier, however impressive, operates within one quadrant of the knowledge-creation process. The other three quadrants — Socialization, Externalization, Internalization — produce the tacit knowledge on which the quality of Combination depends: the embodied judgment that evaluates whether recombined explicit knowledge is genuinely useful or merely novel, the shared understanding that allows teams to direct AI tools toward problems that matter, the practical wisdom that determines what should be built and for whom.

An organization that treats the Combination multiplier as the whole story optimizes for the wrong metric. Output rises. Understanding may not. The dashboard shows more features shipped, more code generated, more analyses produced. The tacit indicators — architectural soundness, strategic coherence, the reliability of expert judgment under ambiguity — are harder to measure and may be declining. The organization becomes explicitly productive and potentially tacitly impoverished, and the impoverishment is invisible until the moment it is tested.

The spiral worthy of amplification begins with deliberate Socialization. Nonaka's framework is specific about what Socialization requires: shared physical experience, mutual trust, the kind of sustained co-presence that allows tacit knowledge to flow through channels that formal communication cannot construct. This is expensive. It is inefficient by conventional metrics. It resists the optimization that organizational life relentlessly demands. And it is non-negotiable, because the tacit knowledge that Socialization produces is the foundation on which everything else rests.

In practice, this means that organizations must invest in what cannot be justified by quarterly metrics: face-to-face collaboration sessions where AI tools are deliberately set aside so that the shared experience of confronting difficulty without automated assistance can deposit the tacit understanding that AI-assisted work depends on. Apprenticeship structures that pair junior practitioners with senior colleagues not for information transfer — Claude handles that — but for the absorption of patterns of attention, diagnostic heuristics, and moral orientation that only sustained co-presence can transmit. Cross-functional immersions that dissolve role boundaries not through AI-mediated individual expansion but through shared physical work in unfamiliar domains, where the discomfort and mutual vulnerability of being a beginner creates the trust-rich conditions that originating ba requires.

The vector pod structure that Segal describes in *The Orange Pill* is a promising organizational form for Socialization at the strategic level. A well-functioning pod of three or four people who meet regularly, debate intensively, and develop through sustained mutual engagement a shared tacit sense of what their organization should build — this pod is originating ba instantiated as organizational structure. Its value is not in the specifications it produces (those are explicit-knowledge artifacts that could be generated by individuals working with AI) but in the shared tacit understanding from which those specifications emerge: the collective intuition, developed through months or years of working together, about what matters, what will work, what the market needs but cannot articulate, and what the organization is uniquely positioned to provide.

The spiral worthy of amplification invests in Externalization quality. AI-assisted Externalization — the use of machine dialogue to help practitioners articulate tacit insights that might otherwise remain locked in individual experience — is one of the most genuinely valuable applications of AI in the knowledge-creation process. But its value depends, with a precision that admits no exception, on the depth of the tacit knowledge being externalized. The organizational response is twofold: cultivate deep tacit knowledge through Socialization and Internalization so that practitioners bring substance to the Externalization process, and develop the verification discipline that distinguishes genuine insight from plausible pattern-matching.

The verification discipline is specific and learnable. It involves asking, after every AI-assisted Externalization: Can I trace this insight back to an embodied experience? Can I defend it under challenge from someone who knows the domain? Does it illuminate something I already felt to be true, or does it merely sound convincing? The discipline is uncomfortable because it reintroduces friction into a process that AI has made frictionless. But the friction is not pointless. It is the mechanism by which spurious Externalization is filtered and genuine Externalization is confirmed.

The spiral worthy of amplification uses Combination at full power. There is no case for constraining AI's Combination capability. The explicit-knowledge recombination that AI provides — the synthesis of research across traditions, the generation of code from natural language specifications, the assembly of complex analyses from distributed data sources — is too valuable to forgo and too powerful to artificially restrict. The appropriate organizational stance toward Combination is not restraint but context: use it fully, use it aggressively, and recognize that its outputs are explicit-knowledge artifacts that require evaluation by practitioners whose tacit knowledge base is deep enough to judge whether the artifacts are genuinely useful or merely technically competent.

This recognition has a structural corollary: the evaluation of AI-generated output should be performed by practitioners whose tacit knowledge was built through the friction-rich practice that precedes AI, or by junior practitioners who have maintained enough Internalization practice to develop their own tacit foundation. The worst organizational configuration is one in which AI outputs are evaluated exclusively by people whose entire experience has been AI-mediated — people who have never felt the friction from which evaluative judgment develops. Such a configuration places the quality-control function in the hands of the least qualified to exercise it, because the qualification is tacit and the tacit dimension has not been built.

The spiral worthy of amplification maintains Internalization deliberately. Nonaka's framework insists that the spiral closes through Internalization — the conversion of explicit knowledge into tacit skill through the embodied practice of doing. When AI handles the doing, Internalization must be maintained through deliberate practice: structured opportunities for practitioners to engage in friction-rich, hands-on work without AI assistance, not as the primary production modality but as a developmental discipline that deposits the tacit layers on which evaluative judgment depends.

The analogy to physical exercise is precise enough to be useful. The knowledge worker who uses AI for production and practices hands-on work for development is maintaining a capacity that her primary work activity does not build but that her primary work activity depends upon. The practice is not nostalgic. It is not a refusal of AI. It is a recognition that the tacit knowledge deposited by friction-rich practice is a resource that must be actively replenished because the primary work activity — AI-mediated Combination — does not replenish it.

The proportion of work time allocated to Internalization practice is an empirical question that organizations are only beginning to investigate. But the principle is clear: zero is the wrong answer. Some non-trivial fraction of a practitioner's time should be spent in friction-rich, hands-on engagement with the kind of work that AI can now handle, for the explicit purpose of maintaining the tacit knowledge base that allows her to evaluate AI's output with genuine confidence. The Berkeley researchers' concept of "AI Practice" — structured pauses and sequenced workflows that protect time for non-AI-mediated engagement — is a nascent version of this principle.

The organizational challenge is cultural as much as structural. In a culture that rewards visible productivity, the practitioner who spends Friday afternoon debugging by hand rather than shipping features with Claude appears to be wasting time. The explanation — that she is depositing the tacit knowledge layers on which her evaluative judgment depends — is not legible in the productivity metrics that organizational culture monitors. The cultural shift required is analogous to the cultural shift that made physical fitness an accepted part of executive development: the recognition that an activity whose benefits are not immediately visible in performance metrics is nevertheless essential to long-term performance.

Nonaka's final public statement — "Innovation is a collective process of creating new meaning and value for the future. It is brought about by humans, not just by science and technology" — is not a nostalgic claim about human superiority. It is a precise description of the knowledge-creation process that his framework formalizes. Innovation requires the full spiral: tacit knowledge transmitted through shared experience, articulated through creative metaphor, recombined through systematic processing, and re-embodied through effortful practice. Each mode feeds the next. The spiral ascends. Science and technology contribute — powerfully, consequentially, indispensably. But the bringing about requires the full cycle, including the modes that are distinctly human and that only human organizational structures can maintain.

The knowledge spiral worthy of amplification is one that holds all four modes in dynamic balance: Socialization rich enough to transmit tacit understanding between generations, Externalization honest enough to distinguish genuine insight from plausible pattern-matching, Combination powerful enough to exploit the full range of AI's recombinant capability, and Internalization disciplined enough to maintain the tacit foundation on which the quality of everything else depends.

This is the spiral that *The Orange Pill* reaches for when it argues that the amplifier amplifies whatever it receives, and that the question of the AI age is whether the signal being amplified is worthy of the power now available to carry it. Nonaka's framework specifies what "worthy" means in operational terms: a signal produced by the full knowledge-creation spiral, grounded in tacit understanding, articulated with honesty, combined at full power, and re-embodied through deliberate practice. That signal is worth amplifying. A signal produced by Combination alone — explicit-knowledge recombination without the tacit depth, shared experience, and embodied practice that give knowledge its genuine substance — is not worth amplifying no matter how loud the amplifier can make it.

The spiral can be maintained. It can be designed for. It requires organizational intent, structural investment, and the cultural willingness to value what cannot be measured over what can. These are difficult conditions to meet. They are also the conditions on which the future of genuine organizational knowledge creation depends — the conditions that determine whether the most powerful knowledge-processing technology in human history will produce an age of unprecedented understanding or an age of unprecedented output from an ever-thinning foundation of genuine knowing.

The distinction between these two futures is the distinction between a spiral that is whole and a spiral that is broken. The tools do not decide which future arrives. The organizations — and the people who lead them, teach within them, and build their lives inside them — decide. And the decision is being made now, in every organizational design choice, every training investment, every allocation of time between AI-mediated production and friction-rich practice, every moment in which a leader chooses to maintain the conditions for the full spiral or to accept the easier, faster, more measurable gains of the accelerated mode alone.

Nonaka's framework does not prescribe the choice. It illuminates the consequences. The spiral is the engine. The balance is the design parameter. The amplifier is ready. The question remains what it has always been: whether the knowledge being amplified is worthy of the power now available to carry it forward.

Epilogue

The layer I keep thinking about is the one that almost didn't form.

There is a moment in *The Orange Pill* — I wrote it at some ungodly hour over the Atlantic, and it arrived with the sharp specificity of something lived rather than reasoned — where I describe an engineer on my team who noticed, months after we adopted Claude Code, that she was making architectural decisions with less confidence than before. She could not explain why. The features still shipped. The code still compiled. The dashboards still glowed green. But something had thinned beneath her, and she could feel the thinning without being able to name it.

Nonaka's framework gave me the name.

That thinning was the interruption of Internalization — the slow, friction-dependent process by which explicit knowledge becomes embodied understanding. Every hour she had spent debugging before Claude had deposited a layer. The layers, compounding over years, became the ground she stood on when she evaluated a system and felt, before any analysis confirmed it, that something was structurally unsound. Claude removed the debugging. The layers stopped forming. The ground thinned. The confidence eroded — not because she knew less in any explicit sense, but because the tacit dimension, the felt sense of how things hold together, was no longer being fed by the experiences that produced it.

I had described this in *The Orange Pill* through a geological metaphor. Nonaka's framework showed me it was not a metaphor at all. It was a mechanism. A specific mode of knowledge conversion — Internalization, explicit to tacit, through the friction of practice — was being interrupted by the very tool that made everything else faster. The diagnosis was precise in a way my intuition had reached for but could not achieve alone.

That precision is what Nonaka's thinking offers this moment. Not a verdict for or against AI. A map. Where in the knowledge-creation process is the distortion occurring? Which modes are accelerating? Which are atrophying? What organizational structures maintain the balance that genuine understanding requires?

The SECI spiral is not a theory you admire from a distance. It is a tool you use to see what you are actually doing when you build, teach, lead, parent. And what it revealed to me — uncomfortably, precisely — is that the twenty-fold productivity multiplier I celebrated in Trivandrum was largely a Combination multiplier: a gain in the speed of recombining explicit knowledge, real and valuable, but operating within one quadrant of a four-quadrant process. The other three quadrants — the shared experience that transmits tacit understanding, the creative articulation that makes tacit insight communicable, the embodied practice that converts the explicit back into the felt — were not accelerating at the same rate. Some of them were decelerating.

What haunts me most is phronesis. Practical wisdom: the capacity to perceive what a particular, unrepeatable situation requires of you. Not what the data says. Not what the algorithm recommends. What this moment, with these people, under these circumstances, demands. Nonaka spent his final years insisting that phronesis was the knowledge organizations most needed and least cultivated. AI makes the insistence more urgent, not less. Because AI provides episteme and techne at unprecedented scale — it gives you knowledge and capability in abundance — while the conditions that develop phronesis are quietly eroding under the pressure of efficiency.

Every time I chose to keep and grow my team rather than convert the productivity multiplier into headcount reduction, I was exercising phronesis — or trying to. The arithmetic said reduce. The felt sense of what that reduction would cost, in organizational knowledge, in trust, in the tacit foundation that makes the next round of building possible, said otherwise. That felt sense was built through decades of building and breaking and rebuilding, through the specific friction of leading people through transitions that no spreadsheet can capture. It is the kind of knowledge that Nonaka's framework identifies as irreplaceable and that the AI-accelerated organization is structurally at risk of not developing in the next generation.

I think about my engineers in Trivandrum. They were extraordinary in that room, discovering capabilities they had not imagined. But the room itself — the physical co-presence, the shared meals, the late-night conversations where confusion was admitted and tacit insight flowed between people who were facing the same transformation together — that room was ba. It was the shared space in which something deeper than skill transfer occurred. And no amount of computational power could have replicated what happened there, because what happened there depended on bodies in proximity, on mutual vulnerability, on the specific quality of trust that only shared difficulty builds.

Nonaka died on January 25, 2025, from pneumonia. He was eighty-nine years old. He died in the opening years of the generative AI revolution that represents the ultimate test of his life's work. He left behind a framework that names, with a precision no other organizational theory provides, exactly what is at stake when the most powerful knowledge-processing tool in human history enters organizations designed for human knowledge creation. He left behind the insistence — his final public words — that innovation is brought about by humans, not just by science and technology.

The spiral must be whole. That is the message, and it is not sentimental. It is structural. Combination without Socialization, without Externalization, without Internalization, produces output without understanding. The amplifier amplifies whatever it receives. If the signal is produced by the full spiral — grounded in shared experience, articulated with honesty, combined with power, re-embodied through practice — then the amplification produces genuine knowledge at a scale that no previous technology could achieve. If the signal is produced by Combination alone, the amplification produces more of what already exists, arranged differently, polished to a frictionless sheen, mistaken for understanding by organizations that have lost the tacit capacity to tell the difference.

I know which signal I want to feed the amplifier. Maintaining the full spiral is harder than letting Combination run alone. It requires investment in things that cannot be measured, patience with processes that cannot be accelerated, and the organizational wisdom to value what is invisible over what is quantifiable. These are the conditions. They are not easy. They are worth it.

The layers must keep forming.

Edo Segal

Back Cover

AI made your organization twenty times faster.
Nonaka's question is whether it's still creating knowledge —
or just rearranging what it already knows.

The productivity revolution is real. AI recombines the world's explicit knowledge at breakneck speed — code, analysis, strategy documents — producing more output than any previous technology. But Ikujiro Nonaka spent forty years proving that output is not knowledge. Knowledge is created through a spiral that includes what machines cannot touch: the tacit understanding absorbed through shared experience, the embodied intuition deposited through years of friction-rich practice, the practical wisdom that perceives what no dataset contains. This book applies Nonaka's SECI framework to the AI moment with diagnostic precision, revealing that the twenty-fold multiplier operates in one quadrant of a four-quadrant process — and that the other three quadrants are quietly eroding. The spiral must be whole. This is the map that shows you where it's breaking and how to maintain what the amplifier depends on but cannot build.

“** "No matter how much AI technology advances, the essence of knowledge creation, in which tacit knowledge is the source of new knowledge, will not change."”
— Ikujiro Nonaka