The AI does not bring an adversarial orientation in any meaningful sense. It generates test suites that cover standard categories of failure: the statistical center of what has been tested in the training corpus. But the adversarial imagination that distinguishes excellent QA from adequate QA lies precisely in envisioning failures that have not yet been widely tested: the edge cases that appear when specific combinations of factors coincide, the security vulnerabilities that emerge from subtle interactions between components, and the usability failures that surface only when users are stressed, confused, or adversarial.
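To make the contrast concrete, here is a minimal sketch in Python using the hypothesis property-based testing library. The `paginate` function, its boundary bug, and the test names are all hypothetical, chosen only to show how a failure can hide in a combination of factors that example-based tests never reach.

```python
from hypothesis import given, strategies as st


def paginate(items, page, page_size):
    """Return one page of items (hypothetical example function)."""
    start = page * page_size
    return items[start:start + page_size]


# The kind of test that sits at the statistical center: one happy path.
# It passes, and it looks like coverage.
def test_first_page():
    assert paginate([1, 2, 3, 4], page=0, page_size=2) == [1, 2]


# The adversarial question: what if a negative page and a small page
# size coincide? Python's negative slicing then returns items from the
# END of the list instead of raising, a wrong result that appears only
# at that specific combination.
@given(
    st.lists(st.integers(), min_size=1, max_size=5),
    st.integers(min_value=-3, max_value=3),
    st.integers(min_value=1, max_value=3),
)
def test_negative_pages_return_nothing(items, page, page_size):
    if page < 0:
        # This assertion fails (hypothesis finds, e.g., a two-element
        # list with page=-2, page_size=1), surfacing the latent edge
        # case the happy-path suite never touches.
        assert paginate(items, page, page_size) == []
```

The point is not the library; it is that the second test encodes a question, "what should this never do?", that has to be asked deliberately, because it does not fall out of the statistical center of existing test suites.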
The team that included a skilled QA specialist built systems that anticipated failure from the outset, because the adversarial orientation shaped architectural choices throughout the development process — not merely at a testing phase added at the end. The AI generates tests but does not embed the adversarial orientation in the architecture's foundation, because its outputs represent what has typically been built rather than what should have been built to anticipate what typically fails.
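What embedding the orientation in the foundation can look like, as a hedged sketch (all names here, `TransferRequest` and `MAX_AMOUNT_CENTS` included, are hypothetical): the invariants are enforced where the value is constructed, so the adversarial question of what this value could be that it should not is answered once, architecturally, rather than re-asked in a late testing phase.

```python
from dataclasses import dataclass

MAX_AMOUNT_CENTS = 100_000_000  # hypothetical hard ceiling on one transfer


@dataclass(frozen=True)
class TransferRequest:
    """A request that cannot exist in an invalid state.

    Every field is checked at construction, so downstream code never
    has to re-discover these failure modes, and a testing phase added
    at the end has nothing foundational left to catch.
    """
    source: str
    dest: str
    amount_cents: int

    def __post_init__(self):
        if not self.source or not self.dest:
            raise ValueError("accounts must be non-empty")
        if self.source == self.dest:
            # the classic missed edge case: self-transfer
            raise ValueError("source and destination must differ")
        if not (0 < self.amount_cents <= MAX_AMOUNT_CENTS):
            # rejects zero, negative, and implausibly large amounts
            raise ValueError("amount out of bounds")
```

The same checks could live in a test suite, but then every caller is trusted in production, which is exactly the trust an adversarial orientation refuses to extend.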
This connects to the judgment bottleneck: the builder must now supply adversarial imagination herself, as one of the many cognitive orientations concentrated in a single person. For builders whose training was primarily constructive rather than adversarial (which is most builders, because the two orientations require different cognitive investments), supplying adversarial imagination is a demand that exceeds their trained capacity. The consequence is systems that are more fragile in their failure modes than team-built systems, a fragility that may not become visible until the systems encounter the conditions that adversarial imagination would have anticipated.
The latent failures embedded in AI-augmented work connect to Charles Perrow's normal accident theory: complex, tightly coupled systems produce catastrophic failures whose specific modes cannot be predicted in advance but whose statistical occurrence is structurally inevitable. Adversarial imagination is the cognitive counterweight to this structural tendency — the deliberate, trained practice of imagining what the system could do that it should not.
The specific term "adversarial imagination" is less formalized in Hutchins's direct writing than in subsequent applications of his framework to security, safety-critical systems, and organizational resilience. What Hutchins's ethnography established was that skilled practitioners in high-reliability domains develop cognitive orientations that differ not just in content but in kind from those their colleagues develop — orientations the system as a whole depends upon even when no individual specialist recognizes the dependency.
Orientation as training. Adversarial imagination is not a personality trait but a trained perception built through years of systematic engagement with failure modes.
Architectural consequence. Teams with strong adversarial orientation build systems that anticipate failure from the outset, not merely at a testing phase.
AI's training-center bias. AI outputs represent the statistical center of training distributions — the typical, not the adversarial margins.
The absorption without replacement. AI absorbs implementation labor without absorbing the distinct orientations that diverse human specialists brought.
Fragility as latent property. Systems built without adversarial imagination may appear robust until they encounter the specific conditions adversarial imagination would have anticipated.