In lectures delivered at the University of Pennsylvania and the University of Virginia in 2024 and 2025, Gitelman analyzed a specific failure mode of image-generating AI: systems like DALL-E produce images whose typographic elements are garbled even when the surrounding visual context is coherent. The lectures refused to treat these failures as mere errors to be fixed. Instead, Gitelman approached them as documents that reveal the specific kind of knowledge statistical pattern-matching produces. Her characteristic question, "is there something that DALL-E 3 knows about typography, in short, that we don't?", treated the machine's output not as a success or failure relative to human standards but as evidence of the medium's distinctive operations. The typographical hallucination is a small-scale instance of a large-scale phenomenon: AI systems produce outputs that are plausible within their own statistical framework but fail when measured against the culturally embedded knowledge human practitioners bring to the same domain.
A typographer knows that letterforms obey specific geometric and historical constraints. DALL-E knows that letterforms appear in specific visual contexts — on signs, in books, on screens — but does not know the constraints that make a given letterform correct or incorrect. The result is text that looks like text from a distance but dissolves into incoherence upon close inspection. The format is right. The content fails.
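The asymmetry can be made concrete with a toy model. The sketch below is an illustration of the general point, not anything from Gitelman's lectures: a character bigram table trained on a small lexicon stands in for statistical pattern-matching, and a lexicon lookup stands in for the typographer's constraints. Every adjacent pair of characters in the sampled strings is statistically attested, so the output looks word-like, yet most of it fails the one check the model itself never performs.

```python
# A minimal sketch, assuming nothing about any real model's internals.
# The bigram table "knows what appears with what"; the lexicon lookup
# "knows what is constrained by what."
import random
from collections import defaultdict

LEXICON = {"sign", "book", "text", "type", "font", "page", "print", "ink"}

# Train: count which character follows which across the lexicon.
# "^" marks word start, "$" marks word end.
transitions = defaultdict(list)
for word in LEXICON:
    chars = ["^", *word, "$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def sample_word(rng: random.Random) -> str:
    """Generate a string in which every adjacent character pair is
    attested in training -- locally plausible, like a hallucinated
    letterform, but never checked against the lexicon."""
    out, state = [], "^"
    while True:
        state = rng.choice(transitions[state])
        if state == "$":
            return "".join(out)
        out.append(state)

rng = random.Random(0)
for _ in range(8):
    w = sample_word(rng)
    # The constraint check the generator itself never applies:
    status = "valid" if w in LEXICON else "hallucinated"
    print(f"{w!r}: {status}")
```

Most sampled strings are locally plausible and globally wrong, which is the structure of the typographical hallucination in miniature: the format of a word without the constraint that makes it one.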
The structure is analogous to AI-generated prose that sounds like scholarship but contains errors only domain-specific knowledge can detect. In both cases, the format promises something the content cannot deliver. In both cases, the promise is inherited from a medium in which the relationship between format and content was reliable.
Gitelman's method here is characteristic: she reads the failure as revealing rather than as defective. The hallucinated letterform is not an embarrassment for the system; it is an epistemic object that documents the specific shape of the system's knowledge. What the system knows is statistical regularity across billions of images; what it does not know is the cultural and craft knowledge that constitutes legible typography.
The framework applies directly to text generation. A hallucinated citation, a mischaracterized philosophical concept, a confidently wrong historical claim: each is a typographical hallucination at the semantic level, a document revealing both the statistical knowledge the system has and the cultural knowledge it lacks.
The analysis emerged from Gitelman's 2024–2025 lecture series and her ongoing work at NYU's Digital Theory Lab, extending the framework of Paper Knowledge into the domain of generative AI.
- Failure as document. The hallucinated letterform is not just an error but an artifact that reveals the statistical character of the system's knowledge.
- Format without culture. The system produces the format of typography without the cultural constraints that make typography legible.
- Statistical versus embedded knowledge. The system knows what appears with what; it does not know what is constrained by what.
- Semantic analog. The same structure characterizes AI-generated text: fluent format without the cultural-epistemic constraints that make claims accountable.
- Diagnostic lens. Gitelman's method treats AI outputs as cultural documents to be read rather than as technical products to be assessed for correctness.