The informating dividend is Zuboff's term for the knowledge-creation potential that accompanies every automating technology. Computerized paper mills automated physical digester operation while simultaneously generating continuous data streams about temperatures, pressures, and chemical compositions—information that enabled understanding at a granularity and precision no hands-on operator could achieve. The dividend is real: new knowledge genuinely becomes available. But possibility and realization diverge. Zuboff's empirical finding across four decades is that the informating potential is systematically unrealized because institutions choose the cheaper path—automation without the investment in human development required to capture new knowledge. AI produces the largest informating dividend in history—revealing patterns, generating hypotheses, enabling integrated cross-domain understanding—yet follows the same institutional trajectory toward extraction without elevation.
Zuboff's framework dissolves the false binary between technology as threat and technology as savior. Every smart machine simultaneously automates and informates—destroys old knowledge forms while creating potential for new ones. The question is never whether to adopt technology but how to deploy it: whether institutions invest in the human capacities required to engage with new knowledge or whether they extract cost savings and move on. The paper mill that computerized could have trained floor workers to interpret production data, giving them analytical tools their embodied knowledge would have enriched. Most did not. The training investment was expensive, the payoff uncertain, the cheaper path obvious: move experienced workers to monitoring roles, hire fewer new workers, capture automation's cost reduction. The informating potential—the deeper process understanding, the diagnostic capacity, the knowledge that could have elevated the work—was left unrealized.
The informating dividend of AI operates at civilizational scale. The developer in Lagos gains access to coding capability previously available only at well-funded firms. The student can explore connections between domains no curriculum bridges. The researcher can formulate hypotheses that would take human teams months. But Edo Segal's account reveals the dividend's asymmetric distribution: it flows disproportionately to those who already possess deep domain knowledge. The senior engineer's twenty years of embodied coding experience, when amplified by Claude Code, produces outputs the junior developer cannot match because AI amplifies proportionally—more expertise in, more capability out. The floor rises but the ceiling rises faster, and the gap between what experienced and inexperienced practitioners can accomplish widens despite universal tool access.
Capturing the dividend requires institutional structures Zuboff's research shows are rarely built. First, practice preservation—maintaining constructive engagement opportunities even after automation makes them productively unnecessary, because evaluative skill depends on experiential foundations that evaluation alone cannot build. Second, extraction governance—establishing clear rights over the cognitive behavioral data generated by AI interaction. Third, sorting transparency—making visible the mechanisms classifying workers by AI-tool competence. Fourth, dividend distribution—public investment in training that develops evaluative intellective skill at scale. None of these structures exists at the scale the AI transition demands, and their absence ensures the dividend follows historical precedent: captured by those positioned to claim it, denied to those who generated it.
The concept originates in In the Age of the Smart Machine (1988), where Zuboff first distinguished information technology's dual capacity. The distinction drew on her observation that computerization created measurement and visibility into production processes that hands-on operation could never provide—but that the visibility was systematically underutilized. The informating function was acknowledged as a secondary benefit while the primary function—cost reduction through automation—drove deployment decisions. Zuboff's innovation was recognizing these were not secondary and primary functions but co-equal potentials whose relative realization depended entirely on institutional choice.
The dual function is simultaneous. Automation and informating are not alternatives—every automating technology simultaneously generates information; the question is whether institutions invest in capturing it or discard it as a byproduct.
Unrealized by default. Zuboff's four-decade empirical record shows that informating potential is systematically squandered because the investment required to realize it (training, organizational redesign, authority redistribution) exceeds the returns visible within quarterly timescales.
Requires practice preservation. The new knowledge demands new skills whose development depends on old practices—eliminating constructive engagement while demanding evaluative judgment produces the surface competence that cannot detect when systems fail.
Distribution follows power. The dividend flows to those who already possess institutional authority and educational background—managers and engineers in paper mills, senior developers in AI transitions—amplifying existing inequality rather than flattening it.
Scale mismatch is structural. Individual organizational choices (one company preserving teams, one researcher proposing pauses) operate at the firm level, while the forces they contend with (market competition, quarterly earnings pressure) operate at the market level—an adequate response requires institutional design at the scale of the forces themselves.