The Surrender is the central warning of Collins's 2018 book Artifictional Intelligence: Against Humanity's Surrender to Computers. It is not a prediction that machines will defeat us through superior capability. It is a diagnosis of a human failure: the gradual erosion of the evaluative vigilance that would allow us to distinguish mimeomorphic surface from polimorphic substance. The Surrender is the slow, individual-by-individual, interaction-by-interaction process of accepting fluent output as competent output because evaluating the difference requires expertise the evaluator may not possess.
Collins's framing inverts the standard AI-risk discourse. The popular concern is that machines will become too powerful. Collins's concern is that humans will become too deferential — that we will surrender our evaluative capacity not because the machines deserve it but because exercising the capacity is difficult and the machines' outputs are reassuring. The Surrender is a behavioral drift, not a dramatic capitulation. It happens in the hundredth interaction when you stop checking, in the thousandth when you forget you were supposed to check, in the ten-thousandth when the muscle of checking has atrophied entirely.
The mechanism is precise. Each interaction in which the machine's output is confirmed as correct (each time the code compiles, the prose reads well, the citations check out) raises the baseline expectation and lowers vigilance. The cost of checking is immediate and the benefit of catching an error is infrequent, so the rational individual response is to check less over time. But the aggregated effect of many individuals making this rational choice is a collective surrender of the evaluative infrastructure that human institutions depend on.
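The incentive structure can be made concrete with a toy simulation. This is a minimal sketch, not Collins's model (he offers no formal one); the decay factor, error rate, and reset rule below are all illustrative assumptions. Checking probability falls a little with each confirmation, is restored when a check actually catches an error, and errors that arrive unchecked pass as competent output.

```python
import random

# Toy model of vigilance decay (illustrative assumptions throughout;
# Collins gives no formal model). Each confirmed-correct output lowers
# the evaluator's probability of checking the next one; catching an
# error, the rare payoff of checking, restores full vigilance.

DECAY = 0.99        # assumed per-confirmation decay in checking probability
ERROR_RATE = 0.002  # assumed rate of subtly wrong machine output

random.seed(0)
p_check = 1.0       # start fully vigilant
caught = missed = 0

for step in range(1, 10_001):
    erroneous = random.random() < ERROR_RATE
    checked = random.random() < p_check   # the immediate cost is paid here
    if checked and erroneous:
        caught += 1
        p_check = 1.0                     # infrequent benefit: vigilance restored
    elif checked:
        p_check *= DECAY                  # confirmation makes the next check feel less necessary
    elif erroneous:
        missed += 1                       # fluent but wrong output accepted as competent
    if step in (100, 1_000, 10_000):      # the hundredth, thousandth, ten-thousandth interaction
        print(f"interaction {step:>6}: p(check)={p_check:.3f} "
              f"caught={caught} missed={missed}")
```

Because confirmations vastly outnumber caught errors, the decay dominates the occasional reset: under these assumed parameters the checking probability drifts far below its starting point and most errors arrive when no one is looking, the individually rational, collectively corrosive pattern just described.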
Segal's The Orange Pill catches this dynamic in action when he describes almost keeping a passage from Claude that sounded better than it was. The discipline required to reject such output, to recognize that plausibility is not the same as truth, is what Collins's framework identifies as the scarce resource the AI age is depleting. And the depletion is invisible to anyone who lacks the expertise to see through the surface, which means that as machine output proliferates, the proportion of it evaluated by someone capable of detecting the gap decreases.
Collins named the phenomenon in Artifictional Intelligence (Polity, 2018). The framing reflects Collins's long-standing concern with the sociology of expertise and its institutional preservation, a concern that predates the LLM era but has since found its sharpest application in the debate over GPT-scale language models.
Not about machine power. The Surrender is a diagnosis of human behavior, not machine capability.
Asymmetric verification. Fluent output can be generated without expertise, but confirming it requires the same expertise as producing it unaided, creating a structural vulnerability.
Incremental drift. The Surrender happens not through dramatic capitulation but through the gradual erosion of evaluative vigilance.
Invisible until late. The consequences manifest only when the vigilance has already eroded — by which time rebuilding the evaluative infrastructure requires reconstructing social practices that may no longer exist.
Critics argue that the Surrender framing overstates the pathology — that humans remain appropriately skeptical of AI output and that evaluation practices are adapting rather than degrading. Collins's response is that the empirical evidence supports his diagnosis: studies of AI use in professional contexts consistently find reduced checking behavior, and the institutional structures that historically supported evaluation (peer review, editorial oversight, professional accountability) are being strained by output volumes they were not designed to handle.