A mid-sized software company in Austin adopted AI coding tools across its entire engineering organization in the fall of 2025. Within eight weeks, the metrics were spectacular: lines of code per engineer per week up fourfold, feature velocity doubled, the backlog shrinking for the first time in the company's history. The CTO presented the results at an all-hands meeting with a celebratory slide deck. Six months later, the defect rate had tripled, two senior engineers had quietly resigned, and the backlog was growing again because the rapidly shipped features were generating cascades of bugs that consumed more engineering time to fix than the features had taken to build. The metrics had been accurate. The interpretation had been catastrophically wrong.
The case functions as the anchor empirical example for Schein's three-level framework applied to AI adoption. The artifacts — lines of code, features shipped, backlog reduction — changed rapidly and dramatically. The espoused values aligned with the new reality: engineering excellence, customer value, augmentation. The basic underlying assumptions — that output volume equals engineering quality, that shipping speed is the primary metric, that defects are individual failures rather than systemic consequences — remained unchanged.
The mechanism beneath the deterioration was specific. Pre-AI engineering workflows had produced understanding as a byproduct of struggle. Debugging sessions, failed attempts, iterative refinement — each deposited layers of embodied knowledge that no documentation could convey. The AI tool removed the friction, and with the friction, the understanding. The engineers reviewed code they had not built through struggle. The code compiled and passed tests. The quality failures that would have been caught by embodied judgment passed through unnoticed until they accumulated into customer-visible disasters.
The two senior engineers who resigned were diagnostic. They were the people with the deepest architectural understanding — the tacit knowledge that allowed them to feel something was wrong before they could articulate what. Their departure was not visible in the dashboard metrics. It was invisible until the cascade of consequences surfaced their absence.
The pattern is not unusual. It is characteristic of AI adoption failure, and it is the pattern Schein's framework predicts with uncomfortable precision. The organizations that have avoided the Austin pattern are those that invested in psychological safety, in permission not to know, and in managed cultural evolution before or during deployment, rather than hoping the metrics would substitute for the culture.
The case is presented as a composite representing a widely observed pattern rather than a single identified firm. The specific combination of metrics — quadrupled output, tripled defects, senior resignations, backlog rebound — tracks patterns documented across multiple AI adoption postmortems from 2025 and 2026.
Accurate data, catastrophic interpretation. The numbers were real; the cultural frame that interpreted them was obsolete.
Understanding was never a measurable output. It lived in the relationship between engineer and codebase, and that relationship was what the AI tool eliminated.
The senior resignations were diagnostic. The people with the deepest tacit knowledge left first — a signal visible only in retrospect.
The backlog rebound is the signature. An initial gain followed by accelerating bug-fix consumption is the distinctive pattern of artifact-level adoption failure.
The pattern is predictable. Schein's framework identifies the conditions that produce it with clinical precision — which is why the pattern remains preventable if recognized early.