Change metrics tell you whether the implementation succeeded. Transition metrics tell you whether the people did. William Bridges argued that organizations are sophisticated about measuring change — adoption rates, productivity gains, error reduction, time savings — and nearly blind to transition. The blindness is structural: transition involves interior psychological states that resist quantification. But the difficulty of measurement is not an excuse for ignoring the dimension entirely. Bridges proposed that organizations track qualitative and quantitative signals of transition health: employee engagement surveys that ask about purpose and meaning (not just satisfaction), turnover rates among high-performers (the canary in the coal mine), innovation pipeline activity (genuine new ideas vs. incremental optimization), and the presence of neutral-zone behaviors (experimentation, provisional identity formation, articulated uncertainty). These metrics do not replace productivity tracking. They provide the second eye that prevents the implementation trap — the condition in which quantitative success masks qualitative erosion.
The case for transition monitoring is strongest when stated negatively: without it, organizations operate blind in the dimension that determines long-term health. A company can watch productivity rise quarter after quarter while the workforce's capacity for deep work, genuine innovation, and ethical judgment silently degrades. The degradation is invisible to output metrics because AI tools allow depleted workers to maintain high output — the machine does the execution, the human provides minimal direction, and the result is quantitatively impressive and qualitatively hollow. By the time the hollowness becomes visible (a critical failure, a talent exodus, a reputational crisis), the transition deficit has accumulated to the point where recovery requires years. Transition monitoring creates early warning signals. A sustained drop in engagement scores, even while productivity rises, indicates the implementation trap. A spike in high-performer departures signals that deep expertise is leaving because the transition has not been supported. A decline in genuine experimentation (not busywork but boundary-crossing creative attempts) signals that the neutral zone has been compressed or bypassed, eliminating the phase that produces innovation.
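The divergence patterns just described — productivity rising while engagement falls, high-performer departures spiking, genuine experimentation declining — lend themselves to a simple automated check. The sketch below is illustrative, not anything Bridges prescribed: the metric names, the quarterly granularity, and the thresholds (such as treating a doubling of high-performer exits as a spike) are placeholder assumptions an organization would calibrate against its own baseline.

```python
from dataclasses import dataclass

@dataclass
class QuarterlySnapshot:
    """One quarter of paired change and transition metrics (illustrative fields)."""
    productivity_index: float  # change metric: normalized output per head
    engagement_score: float    # transition metric: survey mean, 0-100
    high_performer_exits: int  # departures among top-rated staff
    experiments_started: int   # boundary-crossing attempts, not busywork

def transition_warnings(history: list[QuarterlySnapshot]) -> list[str]:
    """Flag the divergence patterns described in the text.
    Thresholds are placeholders, to be tuned against local baselines."""
    warnings = []
    if len(history) < 2:
        return warnings
    prev, curr = history[-2], history[-1]

    # Implementation trap: quantitative success masking qualitative erosion.
    if (curr.productivity_index > prev.productivity_index
            and curr.engagement_score < prev.engagement_score):
        warnings.append("implementation trap: productivity up, engagement down")

    # Talent signal: spike in high-performer departures.
    if curr.high_performer_exits > 2 * max(prev.high_performer_exits, 1):
        warnings.append("high-performer exodus: transition unsupported")

    # Neutral-zone signal: genuine experimentation declining.
    if curr.experiments_started < prev.experiments_started:
        warnings.append("experimentation declining: neutral zone compressed")
    return warnings
```

The value of even a crude check like this is timing: it surfaces the divergence while it is a trend, not yet a crisis.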
Implementing transition monitoring requires confronting a tension that makes quantitative-minded leaders uncomfortable: the metrics are 'soft,' subjective, and narrative-dependent. Engagement is a felt state. Identity clarity requires self-report. Purpose is not directly observable. And yet these soft variables predict hard outcomes. Bridges documented this across hundreds of transitions: organizations that tracked and acted on transition metrics retained talent, sustained innovation, and navigated subsequent changes more smoothly than organizations that tracked only implementation success. The correlation was empirical, replicable, and ignored by the vast majority of organizations because the metrics did not fit the dashboard aesthetic of quantified objectivity. The AI moment makes ignoring transition metrics more expensive than ever, because the speed and scale of the transition mean that the lag between when the deficit accumulates and when it becomes visible has shortened from years to months.
Bridges introduced transition monitoring in the 1991 edition of Managing Transitions and refined it through the 2003 and 2009 editions. The concept emerged from his frustration that organizations measured change success obsessively (project milestones, adoption rates, productivity) while the transition dimensions that determined whether the change would be sustained (morale, engagement, identity coherence) went entirely unmeasured. He developed a practical toolkit of surveys, interview protocols, and observational checklists that managers could deploy to assess transition health, and he demonstrated that organizations using these tools caught transition problems early and corrected them before they became crises.
Change metrics are necessary but insufficient. They tell you the tools are being used; they do not tell you whether the people using them are psychologically whole.
Transition metrics are qualitative and quantitative. Engagement surveys, turnover rates among high-performers, innovation pipeline health, frequency of genuine experimentation — all signal transition health.
The metrics must inform action. Tracking transition health is valuable only if the organization is prepared to intervene when the metrics signal trouble — slowing adoption, increasing support, acknowledging unmanaged endings.
Early signals prevent late catastrophes. Transition problems detected early (rising disengagement, declining experimentation) can be addressed; problems detected late (talent exodus, innovation collapse) require years to repair.
The dashboard must include what cannot be fully quantified. Organizations must expand their conception of what counts as organizational performance to include the interior states that determine whether people can sustain the work.
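The principle that metrics must inform action can be made concrete as a signal-to-intervention table. The sketch below is hypothetical: the signal keys are invented labels, while the interventions themselves — slowing adoption, increasing support, acknowledging unmanaged endings — come directly from the text.

```python
# Hypothetical mapping from transition warning signals to responses.
# Signal names are placeholders; interventions paraphrase the text above.
INTERVENTIONS = {
    "engagement_declining": "slow the adoption timetable; reduce concurrent changes",
    "high_performer_exits_rising": "increase transition support; surface retention risks",
    "experimentation_declining": "protect neutral-zone time; acknowledge unmanaged endings",
}

def respond(signals: list[str]) -> list[str]:
    """Return the interventions triggered by active signals.
    Unrecognized signals are escalated rather than silently dropped."""
    return [INTERVENTIONS.get(s, f"escalate unrecognized signal: {s}")
            for s in signals]
```

The design choice worth noting is the fallback: a monitoring system that drops signals it does not recognize recreates the very blindness the dashboard exists to prevent.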