The discourse that formed around artificial intelligence in the winter of 2025 and early 2026 offers a natural experiment in the dynamics of dissonance at scale. Opinions formed under conditions of high emotional arousal were published immediately through digital infrastructure that made every provisional assessment permanent and globally visible, and they hardened with a speed that was itself diagnostic. Two variables produced the outcome: the magnitude of the identity threat and the publicness of the initial response. The combination compressed opinion formation from months to hours, producing camps that crystallized before most members had spent serious time with the tools they were debating.
The AI transition arrived not as incremental improvement but as qualitative transformation — what observers described as a phase transition, the way water becomes ice. A machine that could hold a conversation, interpret intention, and produce working software from natural-language description was a categorically different instrument from previous tools. Categorical difference produces categorical threat to every cognitive framework that assumed the old categories were stable, which translates directly into dissonance of extraordinary magnitude.
Social media compressed the cycle of opinion formation from months to hours. A person encountering an AI tool could compose and publish a reaction within minutes — visible to thousands, archived indefinitely, retrievable long after the person had developed a more nuanced view. The reaction, once published, became a cognition carrying the weight of public commitment. The compounding of that commitment began before the person had engaged with the technology at a depth that could support considered judgment.
Three camps crystallized: triumphalists who resolved dissonance by embracing change and identifying with it; skeptics who diminished AI's capability to protect expertise investments; and resisters who refused engagement entirely. Each camp's position mapped onto one of Festinger's reduction strategies with mechanical precision. The silent middle — those holding contradictory truths without resolving them — possessed the most accurate assessment and the least social reward.
The calcification resisted correction even as evidence accumulated. The identity investment underlying each position raised the cost of revision to catastrophic levels. A senior engineer's assessment of AI's capability was not a detachable opinion but a load-bearing element in a structure that included professional identity, social standing, self-concept, and the accumulated meaning of decades of effort. Threatening that element threatened the entire edifice.
The pattern was documented contemporaneously by practitioners including Edo Segal, whose The Orange Pill traced the formation of the camps in real time, and by researchers studying AI adoption at UC Berkeley, the University of Pennsylvania, and other institutions. The acceleration was predicted by Festinger's framework even though the framework itself predated the internet by decades.
High threat plus public commitment. The two variables that, acting together, compress the normal timeline of opinion formation.
Camps as reduction strategies. Triumphalist, skeptic, and resister positions each map onto a Festingerian reduction strategy.
Identity investment sustains calcification. Positions become load-bearing elements of professional identity, raising the cost of revision to catastrophic levels.
The silent middle pays the cost. Those who hold contradictory truths without resolving them receive the least social reward despite producing the most accurate assessments.