The most frequently repeated mistake in the analysis of communication revolutions is the assumption that the revolution, once identified, can be bounded — that there is a 'before' and an 'after,' and that the analyst's task is to describe the transition between them. Eisenstein spent her career demonstrating that this assumption is false. The transition from scribal to print culture unfolded over two centuries, and the process was not linear. It moved in lurches and reversals, through periods of rapid change and apparent stability, producing consequences that were sometimes immediate and sometimes delayed by generations. The first generation saw the obvious changes — cheaper books, wider distribution, displacement of scribal labor. The second saw the Reformation, made possible when ordinary laypeople could read Scripture for themselves. The third and fourth saw the Scientific Revolution, built on the institutional infrastructure that the second had created. The change that ultimately mattered most was the change after the change — the consequences of consequences that contemporaries could not have imagined.
This generational unfolding is the most important feature of the print revolution that the AI discourse has not yet absorbed. The contemporaries of a communication revolution see the first-generation effects: the obvious, immediate, predictable consequences of the technology's capabilities. They do not see the second-generation effects: the institutional adaptations, the new social forms, the emergent practices that arise from the interaction between the technology and the society that adopts it. And they cannot see the third- and fourth-generation effects: the consequences of consequences, the developments that are built on the institutional infrastructure that the second generation created, and that bear no visible resemblance to the technology that set the process in motion.
The AI discourse in 2026 is almost entirely first-generation. It is focused on immediate, visible, measurable effects: faster code production, expanded capability, the displacement of certain kinds of labor, the democratization of certain kinds of access. These effects are real — the equivalent of 'cheaper books' in the print revolution. But they are not the revolution. They are its beginning. The revolution is the process that will unfold over decades and generations as the technology interacts with human creativity, institutional adaptation, social contestation, and the accumulation of second-order consequences no one currently foresees.
What can be said with confidence is that the institutions that ultimately manage the AI transition will bear little resemblance to the institutions that exist today. The educational systems that train people for a world of AI-augmented work will not look like current schools and universities with AI modules added. They will be fundamentally different institutions, organized around different principles, teaching different skills, measuring different outcomes. The regulatory frameworks that govern AI deployment will not look like current technology regulations with AI-specific provisions appended. They will be new frameworks, developed in response to problems that current regulations cannot anticipate. The social norms that govern AI use in creative work, professional practice, education, and personal life will not be the norms currently proposed. They will emerge from the accumulated experience of millions of people using these tools over years and decades, discovering through practice what works and what does not.
The change after the change is the one that matters. It is the one that the current generation is building the conditions for, whether it knows it or not. The institutions being built now — the beaver's dams in Segal's language — are first-generation institutions. They are necessary. They are also, almost certainly, inadequate to the second- and third-generation consequences that are coming. But they are what the current generation can build, and the quality of the building determines the trajectory of everything that comes after.
The concept of generational unfolding appears throughout Eisenstein's work but was most fully developed in her analysis of how the print revolution's effects accumulated over time. The framework has been extended by historians of other communication transitions — notably the development of the broadcast era's institutions over the twentieth century and the internet era's institutions still emerging in the twenty-first.
The specific framing here — the change after the change — is borrowed from Eisenstein's analytical approach and applied explicitly to the AI transition. The argument is that Eisenstein's generational framework is the sharpest available tool for understanding where the AI transition actually is in its trajectory and what kind of institutional work is possible at this stage.
Revolutions unfold in generations. Each generation experiences and responds to different effects, building on the institutional adaptations of previous generations.
First-generation effects are visible. Cheaper books, faster code, displaced professionals — the obvious consequences predicted in advance.
Second-generation effects emerge from adaptation. The Reformation, the Scientific Revolution, and their AI analogs arise from the interaction between the technology and institutional responses to first-generation effects.
Third- and fourth-generation effects are the revolution. The consequences of consequences — effects built on institutions that are themselves built in response to earlier effects.
The current generation builds the conditions. Institutions built now will shape what the next generation can build, whether the current builders recognize this or not.
First-generation dams are necessary but inadequate. Current institutional responses will not prevent second-generation consequences; they can only shape the trajectory.
The central question about generational unfolding in the AI case is whether the compressed timescales will compress institutional development proportionally, or whether institutional adaptation will lag so far behind technological change that the generational pattern breaks down. Some argue that AI itself can accelerate institutional adaptation — that new institutions can be built with AI tools faster than their print-era predecessors. Others argue that institutional development requires social negotiation, trust-building, and experimentation that no technology can accelerate. The answer will shape whether the AI transition produces a manageable evolution or a disorienting rupture.