The AI transition is precisely such an environment. Capabilities expand faster than organizations can plan, and the only way to discover what works is to try things that might not. The organization that does not experiment is not playing it safe. It is guaranteeing its own obsolescence. You On AI is, among other things, a document of intelligent failure in practice. The Deleuze error — Claude producing a philosophically incorrect reference that worked rhetorically but broke under scrutiny — meets every criterion. It occurred during genuine experimentation with a new collaborative process. No protocol existed. The failure was detected through review. And the lesson about the danger of eloquent emptiness is enormously valuable for anyone using these tools for serious work.
The framework's power lies in its refusal of two common organizational pathologies. The first is blame-all culture, which treats every failure as blameworthy and thereby suppresses precisely the experimentation that generates learning. The second is blameless culture, which treats every failure as acceptable and thereby forfeits the discipline that distinguishes generative experiments from careless ones. Intelligent failure is the middle path: specific criteria that separate failures worth celebrating from failures worth preventing, applied with the seriousness that makes experimentation a professional practice rather than a permission slip.
Intelligent failures are valuable only when honestly reported, carefully analyzed, and widely shared. The connection to psychological safety is direct. In organizations that punish failure — that treat all failure as evidence of incompetence — intelligent failures are concealed, rationalized, attributed to external factors. The Deleuze error would be quietly corrected. The hollow prose silently replaced. The lessons lost. A safe environment expects, protects, analyzes, and values intelligent failure. An unsafe one treats all failure identically, and in doing so suppresses the exploration the AI transition demands.
The AI transition has a temporal dimension that intensifies demands on this capability. Before these tools, the cycle from experiment to failure to learning to improved experiment ran in weeks or months. AI compresses it dramatically: an experiment that previously took a month can now be completed in a day. This means the rate of intelligent failure rises, with more experiments in less time and more failures to process in less time. Organizations need what Edmondson calls failure fluency: a shared vocabulary for discussing failure, shared practices for processing it, and a collective emotional resilience that absorbs the impact without losing confidence or momentum.
Edmondson developed the three-category failure taxonomy across decades of hospital and industrial research, culminating in Right Kind of Wrong: The Science of Failing Well (2023). The book synthesized work that had appeared in scattered form since her 2011 HBR article 'Strategies for Learning from Failure,' which first popularized the distinction between preventable, complex, and intelligent failures.
Three kinds of failure. Preventable failures need prevention; complex failures need systemic redesign; intelligent failures need celebration.
Genuine experimentation required. An intelligent failure occurs in genuinely uncertain territory with thoughtful design — not in known domains where the answer was already available.
The right to experiment, the obligation to learn. Freedom to try is paired with responsibility to document, analyze, and share what happened.
Undifferentiated treatment kills exploration. Organizations that treat all failure identically suppress intelligent failure along with the preventable kind.
Failure fluency. Fast cycles require collective capacity to process failure quickly and constructively.