Evaluative intellective skill is the AI-era evolution of Zuboff's intellective skill concept: the capacity to judge whether understanding that has already been built is sound, rather than to build understanding from components. When large language models generate analyses, drafts, and solutions optimized for plausibility, the human's cognitive work shifts from construction to evaluation—from writing code to assessing whether Claude's code is correct, from drafting arguments to determining whether GPT's reasoning holds. This is not simplification. Evaluation is more demanding than construction along critical dimensions because errors wear truth's clothing: the Deleuze fabrication Segal caught was elegant, well-structured, philosophically sophisticated, and wrong in ways only deep domain knowledge could detect. The skill requires independent knowledge built through the very constructive practice that AI's efficiency eliminates—the paradox at the transition's heart.
The demand emerges from AI's characteristic failure mode: confident wrongness dressed in competent prose. A digital display showing an incorrect temperature can be detected as wrong by any worker with basic domain knowledge—the number either matches process reality or it doesn't, a binary easily checked. An AI system producing an incorrect but plausible analysis can be detected as wrong only by someone with expertise deep enough to identify the divergence point concealed beneath linguistic fluency. The paper mill worker who read displays could detect errors because embodied knowledge provided an independent verification system—a felt sense of the process against which screen representations could be tested. As experienced workers retired and were replaced by workers trained exclusively on digital systems, that independent verification disappeared. New workers read displays competently but could not detect when displays were wrong, because they lacked knowledge external to the monitoring system.
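The asymmetry can be made concrete in code. A minimal sketch, with invented values: the display case reduces to a binary check against an independent measurement, an oracle the plausible-analysis case does not have.

```python
def display_error_detectable(displayed, independent_reading, tolerance=0.5):
    """Binary check: the screen either matches process reality or it doesn't.

    `independent_reading` stands in for the worker's embodied sense of the
    process; `tolerance` is a hypothetical acceptable drift in degrees.
    """
    return abs(displayed - independent_reading) > tolerance

# A worker with basic domain knowledge runs this check, in effect, mentally:
obviously_wrong = display_error_detectable(displayed=92.0, independent_reading=74.3)
within_normal = display_error_detectable(displayed=74.1, independent_reading=74.3)
```

There is no equivalent one-line oracle for a fluent but incorrect analysis; that gap is the evaluative demand the rest of this section describes.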
The AI transition compresses this generational erosion from decades to years. The senior developer whose embodied coding knowledge enables catching Claude's architectural fragilities is not retiring over twenty-year cycles—the developer's implementation practice is being eliminated now, in real time, by the very tool whose output requires that practice to evaluate. The erosion is simultaneous with the demand: practice and evaluation are both present in the same moment, with institutional pressure to eliminate practice (because machines are faster) while preserving the evaluative demand (because machines are fallible). This simultaneity is what makes AI transitions structurally different from previous smart machine transitions: the experiential foundation erodes even as the evaluative demand intensifies, rather than sequentially.
Developing evaluative intellective skill at scale requires what Zuboff calls institutional design for the informating dividend: deliberate preservation of constructive practice opportunities that serve no immediate productive function but maintain the experiential substrate evaluation depends on. Medical training provides the template—residents learn on cadavers and simulators before operating on patients; the practice serves no productive purpose (cadavers don't benefit) but builds embodied knowledge residents need when stakes are real. AI-era professional development may require equivalent structures: coding exercises, design challenges, implementation projects maintained alongside evaluative workflows, protected from productivity pressure, valued as formative rather than dismissed as inefficient. No major organization has implemented such programs at scale; the institutional response has been deploying tools and expecting workers to develop evaluative skill through trial and error.
The concept is implicit in Zuboff's 1988 work but becomes explicit only in the AI age, when the reversal from constructive to evaluative work becomes the dominant pattern. The intellectual precursors include Ryle's knowing-how versus knowing-that distinction, Schön's reflective practitioner framework (evaluation as reflection-in-action), and critical pedagogy's emphasis on evaluation as higher-order thinking. Zuboff's synthesis recognizes that evaluation is not merely a cognitive skill but an epistemological stance—a relationship to knowledge characterized by skepticism, independence, and the disciplined refusal to accept plausibility as proof.
Reversal from constructive to evaluative. Pre-AI intellective skill assembled understanding from components; AI-era intellective skill assesses understanding already assembled—a qualitative change in cognitive direction, not merely a quantitative increase in abstraction.
More demanding, not less. Detecting sophisticated errors concealed beneath confident fluency requires deeper domain knowledge than constructing adequate solutions from specifications—the adversary is more capable, the detection threshold higher.
Depends on eliminated practice. The capacity to evaluate code, analysis, or design that one has never attempted to produce is structurally compromised—you cannot reliably judge what you have never built, and AI eliminates the building that makes judgment possible.
Surface versus depth evaluation. Without independent domain knowledge, evaluation degrades to plausibility checking—does output look right, sound professional, fit expected patterns—rather than accuracy verification—is the reasoning sound, are conclusions supported, will implementation hold under stress.
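The surface/depth gap shows up readily in AI-generated code. A hypothetical sketch (the `percentile` function below is invented for illustration, not taken from the source): the code passes every surface check yet fails accuracy verification against an independent oracle.

```python
from statistics import median

def percentile(values, p):
    """Return the p-th percentile (0 <= p <= 1) of values."""
    ordered = sorted(values)
    # Surface-plausible: it sorts, indexes, documents itself, returns a number.
    # Depth problems: no interpolation between ranks, and the index is off by
    # one at the upper edge (p == 1.0 raises IndexError on any input).
    return ordered[int(len(ordered) * p)]

data = [1, 2, 3, 4]
naive = percentile(data, 0.5)   # looks right: returns 3
exact = median(data)            # independent oracle says 2.5
```

Plausibility checking accepts `naive`; only an evaluator who knows how a median is actually computed, or who reaches for an independent implementation, catches the divergence and the edge-case crash.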
Generational erosion compressed. Paper mills had decades before embodied knowledge retired; AI transitions have years or months—the timeline compression prevents the gradual adaptation that made previous transitions navigable for individual workers and institutions.