The fallacy operates at two levels. At the individual level, it treats the production of expert-like output as evidence that the machine has replicated expert knowledge — ignoring the fact that expert knowledge includes capacities the machine does not possess: the ability to recognize when rules fail, the judgment to weigh incommensurable values, the stake in outcomes that grounds responsibility. At the institutional level, the fallacy treats the automation of expert output as equivalent to the automation of the function expertise serves — ignoring the fact that the function depends on the developmental process, not just the output.
The ten-minute case Segal describes — the engineer whose routine plumbing contained the rare moments of formative struggle — illustrates the operational consequence. When AI automates the plumbing, it removes the tedium and the formative moments together, because from the system's perspective the entire four hours is plumbing, defined functionally rather than developmentally. The substitution appears successful because the output is maintained. The phronetic erosion it produces is invisible to any measurement that evaluates substitution by output rather than by the developmental trajectory of the person producing it.
The fallacy has an institutional analog in the commoditization of code that produced the 2026 Software Death Cross. Organizations whose value propositions rested on the scarcity of technical capability discovered that the capability had been commoditized and that their valuations had to be repriced. But the substitution was real only at the techne layer. The phronetic layer — institutional knowledge, customer relationships, workflow patterns, regulatory expertise, cultural understanding — remained. The companies that mistook the commoditization of techne for the substitution of their entire value proposition learned the lesson painfully; those that recognized the distinction survived.
The corrective is methodological vigilance. Every claim that AI has replaced a category of human work must be examined for the substitution fallacy: has the output been replicated, or has the capacity been replaced? These are different questions, and conflating them produces the systematic overestimation of AI substitution that now pervades both corporate strategy and public discourse. The honest answer in most cases is that the output has been replicated while the capacity has been partially eroded and partially preserved in ways that vary by context and that only phronetic analysis can detect.
The concept is implicit throughout Flyvbjerg's phronesis work and is made explicit in his 2025 AI writings. Related arguments appear in Hubert Dreyfus's long engagement with AI and in Shannon Vallor's work on moral deskilling.
Output versus process. Functional equivalence of output does not entail substantive equivalence of capability — the developmental process is a separate dimension.
Invisible erosion. When substitution replaces the developmental process, the phronetic erosion is invisible to measurement that evaluates only output.
Institutional consequence. Organizations that mistake output commoditization for capability substitution misprice their own value and their competitors'.
Generational compounding. Each generation of practitioners trained through substituted processes loses the capacity the process developed, producing cumulative erosion invisible within any single generation.
Corrective vigilance. Every substitution claim must be examined separately for output replacement versus capacity replacement, because the two diverge systematically.