The concept enters the AI discussion through the recognition that large language models produce surfaces by default — syntactic polish, rhetorical fluency, apparent confidence — without guaranteeing the referential accuracy that would make those surfaces fair. The mechanism is optimized for surface coherence because its training objective rewards plausibility-of-continuation. Whether the surfaces it produces honestly represent their depths is not within the mechanism's operational parameters.
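The asymmetry described here can be made concrete. The following is a minimal, illustrative sketch of a next-token cross-entropy objective (hypothetical toy code, not any particular model's training procedure): the penalty at each step depends only on how much probability the model assigned to the continuation that appeared in the training data, and the objective has no term at all for whether that continuation is referentially accurate.

```python
import math

def next_token_loss(probs, target_index):
    """Cross-entropy for one prediction step.

    The loss is determined entirely by the probability assigned to the
    observed continuation (probs[target_index]); nothing in the objective
    asks whether that continuation truthfully represents its sources.
    """
    return -math.log(probs[target_index])

# Toy distribution over three candidate continuations. The indices and
# labels are hypothetical -- the objective itself cannot distinguish a
# plausible fabrication from an accurate citation.
model_probs = [0.7, 0.2, 0.1]

# Low loss when the frequent (plausible) continuation is the target,
# high loss when a rarer (perhaps accurate) continuation is the target:
loss_plausible = next_token_loss(model_probs, 0)
loss_rare = next_token_loss(model_probs, 2)
```

The point of the sketch is structural: minimizing this loss optimizes surface plausibility, and depth correspondence enters only insofar as the training data happens to embody it.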
The result is a systematic bias toward unfair output. Not always. Not inevitably. But structurally. The mechanism will produce, with regularity, passages that achieve surface beauty without depth correspondence. The Deleuze error that Edo Segal catches in Chapter 7 of You On AI — Claude's elegant connection between Csikszentmihalyi's flow state and a Deleuzian concept of 'smooth space' that bore no resemblance to what Deleuze actually wrote — is the paradigmatic instance of unfair output. The surface was polished. The depth was fabricated.
Scarry's framework reveals what is at stake. Unfair surfaces do not merely transmit inaccurate information. They degrade the perceptual ecology in which beauty and justice operate. Each unfair surface that passes unchallenged teaches the perceiver that examination is not rewarded — that the closer look will be betrayed by what lies beneath. Over time, this learned cynicism destroys the willingness to examine at all, collapsing the perceptual foundation on which both beauty and justice depend.
The concept thus supplies the criterion that Byung-Chul Han's critique of smoothness cannot quite reach. Han diagnoses what is wrong with frictionless surfaces but treats all absence of friction as pathological. Scarry's framework draws the crucial distinction: the problem is not smoothness per se but unfairness — the decoupling of surface quality from depth quality. A passage that achieves syntactic elegance while maintaining genuine correspondence to its sources is smooth and beautiful. A passage that achieves smoothness while misrepresenting its sources is unfair. The relevant variable is the honesty of the surface's relationship to what it claims to represent.
Scarry develops the concept most fully in the second essay of On Beauty and Being Just, 'On Beauty and Being Fair,' where the double sense of 'fair' is traced through aesthetic, moral, and legal contexts. The concept's application to AI-generated output is a contemporary extension that draws directly on Scarry's structural framework. Several structural features of the concept follow from that framework:
Scale-invariance. Beautiful things remain beautiful across levels of magnification; their correspondence between surface and depth is genuine at every scale of examination.
Examination rewarded. Fair surfaces invite close attention and yield additional precision when that attention is given; unfair surfaces invite acceptance and collapse under scrutiny.
Trust relationship. Every surface establishes an implicit contract with the perceiver; fair surfaces honor the contract, unfair surfaces exploit the perceiver's willingness to trust the signal of quality.
Systemic consequence. Unfair surfaces degrade the perceptual ecology beyond the individual encounter, teaching perceivers that examination is unrewarded and progressively eroding the capacity for the attention beauty requires.
Not identical with smoothness. The distinction fair/unfair cuts across the distinction smooth/rough; smoothness that honestly represents depth is fair, while roughness that misrepresents is unfair.
Critical discussions of AI-generated content have often focused on factual accuracy as a binary property — content is either correct or incorrect. Scarry's framework suggests the binary is insufficient: what matters is the relationship between surface presentation and depth quality, which admits gradations that simple accuracy checks cannot capture. A technically correct statement wrapped in rhetorical confidence exceeding what the evidence supports is unfair in Scarry's sense even if it is not strictly false. The distinction matters for governance and design of AI systems: fairness requires not only accuracy but calibration of surface confidence to actual epistemic warrant.
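The calibration requirement can be given a rough operational form. The sketch below defines a hypothetical overconfidence measure over a set of (stated confidence, was-correct) pairs; the metric and its name are illustrative assumptions, not part of Scarry's text, but they show how a body of output could be technically accurate on most claims and still be unfair in the sense described, because its surface confidence exceeds its empirical warrant.

```python
def overconfidence(claims):
    """Mean stated confidence minus empirical accuracy.

    `claims` is a list of (confidence, was_correct) pairs. A positive
    result means the surface promises more than the depth delivers --
    unfair in Scarry's sense even when most individual claims are true.
    (Illustrative metric, not drawn from Scarry.)
    """
    mean_confidence = sum(conf for conf, _ in claims) / len(claims)
    accuracy = sum(1 for _, correct in claims if correct) / len(claims)
    return mean_confidence - accuracy

# Ten claims, each asserted at 95% confidence, seven actually correct:
# mostly accurate output, yet a substantial gap (~0.25) between the
# confidence the surface projects and the warrant the depth supplies.
claims = [(0.95, True)] * 7 + [(0.95, False)] * 3
gap = overconfidence(claims)
```

A simple accuracy check on the same data would report 70% correct and stop there; the gap captures what the binary misses, which is the dishonesty of the surface's relationship to its evidence.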