Externalization is the mode in which tacit knowledge seeks communicable form. The canonical example is Honda's use of 'automobile evolution' to crystallize an emergent concept of what a compact urban vehicle should be: a metaphorical phrase that was not a specification but a provocation, preserving enough of the tacit insight to guide engineering toward a design that no dataset contained. The mode resists routine because its essence is the creative act of finding language for what resists language. AI enters this mode with a capability the original framework did not anticipate: serving as a conversational partner that supplies explicit-knowledge scaffolding (concepts, connections, vocabulary) which a human's tacit insight needs in order to crystallize into communicable form. The quality of AI-assisted Externalization depends critically on the depth of the tacit knowledge being externalized: deep tacit knowledge externalizes into genuine insight, while shallow tacit knowledge externalizes into what Segal identifies as 'confident wrongness dressed in good prose.'
The difficulty of Externalization is not that people lack tacit knowledge — most skilled practitioners possess it in abundance. The difficulty is that tacit knowledge, by definition, resists explicit formulation. The surgeon knows where to cut but cannot fully explain her confidence. The senior engineer feels that an architectural choice will fail under load but cannot, in the moment, decompose the feeling into an explicit argument. The capacity to externalize — to find the metaphor, the model, the precise formulation that converts felt understanding into communicable form — is a separate skill, and many practitioners who possess deep tacit knowledge lack it.
AI provides what had never existed before: on-demand explicit-knowledge scaffolding drawn from essentially the entire codified output of human civilization. Segal's account in The Orange Pill of working late with Claude to articulate why AI adoption curves revealed something deeper than product quality illustrates the mechanism. Segal possessed the tacit insight (the felt sense of what the adoption patterns meant); Claude provided the explicit scaffolding (the concept of punctuated equilibrium from evolutionary biology). Neither alone would have produced the insight. The Externalization occurred in the collision of a human's tacit knowledge with a machine's explicit-knowledge retrieval.
This distributed Externalization is genuinely new. It extends Nonaka's framework into territory his original formulation, built for exclusively human knowers, could not anticipate. Contemporary scholars — Böhm and Durst's GRAI framework, Ogawa's GenAI SECI model — have attempted to formalize the extension, recognizing AI as an auxiliary means rather than an independent agent of knowledge conversion.
The asymmetry — productive with deep tacit foundations, misleading with shallow ones — has a structural implication. AI-assisted Externalization requires verification discipline: asking, after every crystallization, whether the insight traces back to embodied experience, whether it can be defended under challenge, whether it illuminates something already felt to be true or merely sounds convincing. Segal describes performing this test himself, rejecting Claude's polished passage about Deleuze's smooth space and spending two hours at a coffee shop rewriting by hand until he found the version that was his. Without such discipline, AI-assisted Externalization degenerates into the most sophisticated generator of plausible emptiness organizational life has ever produced.
The mode was articulated in The Knowledge-Creating Company (1995) as one of four modes in the SECI spiral. Nonaka drew on the Japanese tradition of metaphorical management language (Honda's 'Tall Boy' concept for the City car, Canon's beer-can analogy for the mini-copier's disposable drum) to demonstrate that figurative articulation often externalized tacit knowledge more effectively than literal specification could, because metaphor preserved the ambiguity that tacit knowledge requires to travel between minds.
Metaphor is the primary instrument. Figurative representation preserves enough of the tacit insight to guide further development without collapsing into false precision.
Tacit possession differs from externalization capacity. Many practitioners know more than they can say — and the saying is a separate skill from the knowing.
AI-assisted Externalization is genuinely new. The collision of human tacit insight with machine explicit-knowledge retrieval creates a mode of conversion that neither participant could perform alone.
Quality depends on tacit depth. Deep foundations produce genuine insight; shallow foundations produce plausible emptiness.
Verification discipline is non-negotiable. When AI makes Externalization frictionless, the filtering that difficulty once provided must be reintroduced deliberately.