Normalized rationalizations are the stock phrases that make silence feel reasonable in organizational settings. They are not lies; each contains a grain of truth, which is precisely what makes them effective as silencing mechanisms. The unprepared professional encounters them in the moment of ethical challenge and is temporarily disabled by their plausibility. Gentile's methodology treats them as a known and finite set: a taxonomy that can be learned in advance, analyzed in advance, and counter-scripted in advance. The prepared professional recognizes a rationalization when she hears it and deploys a prepared reframing that neither accepts the rationalization nor rejects it crudely, but expands the frame within which it operates. In the AI transition, the set is compact and predictable: "AI is inevitable." "If we don't build it, someone else will." "The productivity gains outweigh the displacement costs." "The technology is neutral." "We'll fix it in the next version."
The rationalizations share a structural feature: they shift the frame from the specific decision at hand to a more abstract level where individual agency appears to dissolve. "AI is inevitable" moves the question from "Should we deploy this particular system in this particular way?" to "Can the technological tide be reversed?", where the answer is obviously no. The rhetorical sleight is the substitution: a question no individual can answer takes the place of a question the individual is about to answer, with the implication that settling the first settles the second.
Counter-scripting works by refusing the substitution. "AI is inevitable" becomes "The technology is inevitable; how we deploy it is a choice, and the choice we make now will determine whether we're the company that deployed responsibly or the company that has to explain to regulators why we didn't." The counter-script does not deny the rationalization's partial truth. It reframes the space within which the truth operates, restoring the specific decision to the agenda the rationalization had removed it from.
The technology industry's rationalization set is distinctive in its moral valence. Where pharmaceutical rationalizations tend to minimize harm ("the side effects are rare"), technology rationalizations tend to invoke progress: "we're building the future." The ethical voice that opposes progress is coded as reactionary, nostalgic, or afraid. Counter-scripting must therefore accomplish something pharmaceutical counter-scripting did not need to: it must reclaim the progress frame for the ethical position. The future we want is the one that works for users, workers, and the institutions that will govern it, not the one that arrives fastest.
Gentile's research has consistently shown that the rationalizations are remarkably stable across industries. Workers at Enron used the same phrases as workers at Purdue Pharma, with adjustments for specific content. The persistence suggests the rationalizations are cultural technologies — inherited scripts of conformity that propagate because they solve a recurring social problem (how to participate in something you privately doubt) rather than a specific industrial problem. This is useful news: the set is finite, known, and therefore trainable against.
The taxonomy emerged from Gentile's post-Harvard interviews with executives and mid-career professionals. Asked what they had said to themselves in moments of ethical compromise, respondents produced a remarkably consistent set of phrases. The consistency surprised Gentile; she had expected idiosyncratic self-justifications. The discovery that rationalizations traveled in a finite set was the foundation of what became the GVV approach to counter-scripting.
Rationalizations are scripts, not reasoning. They operate at the speed of social pressure rather than the pace of deliberation. Recognizing them as scripts is the first step toward answering them as scripts.
The partial truth is the mechanism. Each rationalization contains enough truth to be hard to dismiss outright. Counter-scripting works with the partial truth rather than against it.
The set is small and learnable. AI-transition rationalizations are finite: inevitability, competitive necessity, neutrality, future correction, aggregate benefit. Preparation across the set covers most encounters.
Counter-scripts reframe rather than reject. They expand the frame within which the rationalization operates, making visible what the original framing had obscured.