Ersatz is the German word for a substitute that performs the function of the original without being the original: ersatz coffee, brewed from chicory and grain, tastes enough like coffee to stand in for it without being coffee. Crawford borrows the term to name what AI-generated output is: competent simulation of expertise lacking the three characteristics that constitute genuine knowledge in his framework — experiential grounding, testing against reality, and earning through difficulty. The distinction is not a nostalgic preference but a structural diagnosis. Plausibility is a surface property. Understanding is a depth property. And a culture that cannot tell them apart has lost the standard against which its own output can be independently evaluated.
The distinction rests on three characteristics Crawford identifies as constitutive of genuine knowledge. First, grounding in experience — not the thin experience of having encountered information, but the thick experience of having engaged with a subject bodily, materially, through the specific friction of working with things that respond to touch in ways no textbook anticipates. Second, testing against reality — the continuous, unforgiving confirmation or refutation of hypotheses by the behavior of the systems one is working with. Third, earning through difficulty — the specific cognitive deposit that accumulates when a plan fails, the practitioner understands why, and her next attempt reflects the lesson the failure administered.
AI-generated output lacks all three. It is not grounded in experience but in the statistical processing of descriptions of experience. It is not tested against material reality but against functional requirements defined in advance — requirements that may or may not capture the full complexity of the situation. It is not earned through engagement with resistant materials but delivered through an interface designed, with extraordinary sophistication, to eliminate the resistance genuine engagement requires.
Crawford framed this directly in "AI as Self-Erasure" (2024), through the story of a father who prompted ChatGPT to write a wedding toast for his daughter. The output was decent — "maybe better than what he would have written." The father did not use it. To use the AI-generated speech, Crawford observed, "would have been to absent himself from this significant moment in the life of his daughter." The toast would have been a performance with no performer behind it, words with no one's weight of feeling pressed into them. The father's refusal was not a rejection of quality but of self-erasure.
The concept connects to Crawford's broader diagnosis of replacism — the assumption, developed in "Ownership of the Means of Thinking," that "every particular thing can be replaced by its standardized double, and thus made more amenable to the application of machine logic." Ersatz expertise is the cognitive manifestation of this replacist metaphysics: the conviction that understanding is reducible to a pattern that can be extracted, encoded, and reproduced without the specific engagement that originally produced it.
The distinction is implicit throughout Crawford's work but receives its most explicit articulation in his AI essays of 2024–2025. Shop Class as Soulcraft established the positive case for genuine knowledge through the figure of the skilled mechanic. The AI essays complete the framework by identifying its counterfeit — output that satisfies the market definition of expertise while lacking its substance.
Crawford's philosophical debt here is to Aristotle's distinction between techne and phronesis, and to Alasdair MacIntyre's account of practices and their internal goods — traditions Crawford engages explicitly in Shop Class and develops implicitly throughout his subsequent work.
Three constitutive characteristics. Experience-grounded, reality-tested, effort-earned — the triple signature of genuine knowledge that distinguishes it from its plausible double and reveals the specific ways AI output falls short.
Plausibility as corruption. Smooth, confident, well-articulated output is the specific form of corruption in domains where evaluation depends on surface reading rather than material verification — and AI is optimized precisely for this surface quality.
The self-erasure mechanism. Using AI-generated output for moments that require the practitioner's own presence is not merely a productivity choice but an act of withdrawal from the situation that demanded her engagement.
Geological deposition. Understanding accumulates in layers through friction-full engagement with material reality — each hour of struggle depositing substrate that later judgment stands on. AI bypasses the deposition while producing the surface output, thinning the ground beneath the practitioner's feet.
Circular vulnerability. The tool's effective use depends on judgment; judgment depends on engagement; the tool eliminates engagement — so the tool progressively undermines the conditions for its own effective use.
The hardest challenge to the distinction comes from the observation that AI output is sometimes better than what human practitioners produce — that the prototype works, the brief is persuasive, the analysis reads like expertise. Crawford's response is that this observation names precisely the danger his framework identifies. If AI produces output that functions better than human output while lacking the foundation that would allow errors to be caught, the apparent quality masks a structural fragility that is revealed only when the system fails in ways the tests, themselves corruptible by surface plausibility, did not anticipate. The question is not whether AI output works today but whether the practitioners who direct it possess the embodied understanding to catch the failures that will eventually occur — failures the tool, by its nature, cannot warn them about.