Semantic information, in Moles's framework, is the component of a message that survives translation across codes. A scientific claim can be expressed in English, French, mathematical notation, or a labeled diagram; the semantic content is preserved. This distinguishes it sharply from aesthetic information, which does not survive translation. The significance for AI is structural: large language models are extraordinarily good at producing semantic information — correct answers, working code, defensible arguments — and this competence is the most easily measured and most frequently celebrated dimension of their output. It is also the dimension most likely to be mistaken for full authorship.
There is a parallel reading that begins not with information theory but with the material conditions of semantic production. The distinction between semantic and aesthetic information, while analytically useful, obscures the political economy that determines which semantics get produced and validated. AI systems don't merely excel at semantic information—they excel at producing the specific semantic forms that have been deemed valuable enough to digitize, store, and train upon. This is semantic information as defined by those with the resources to build training corpora: academic publishers, technology companies, governments. The "correctness" these systems produce is correctness within a particular regime of truth, one that systematically excludes vernacular knowledge, oral traditions, and the semantic systems of the global majority.
More troubling is how this framing naturalizes a hierarchy between semantic and aesthetic information that mirrors existing power structures. The "reliable" production of semantic information by AI is reliable precisely because it reproduces dominant epistemologies—the peer-reviewed paper format, the Silicon Valley pitch deck, the bureaucratic report. Workers whose knowledge is primarily embodied, communities whose truths are primarily sung, and traditions whose logic is primarily relational find their semantic contributions deemed "aesthetic" and therefore untranslatable, unverifiable, and ultimately without value in the age of AI. The real loss isn't aesthetic information slipping away from individual creators; it's entire semantic systems being rendered invisible because they don't pattern-match to what the training data has encoded as "correct." The receiver's problem isn't distinguishing semantic from aesthetic content—it's recognizing that semantic adequacy itself is a historically specific construction that AI systems are now ossifying at unprecedented scale.
The semantic/aesthetic distinction does a great deal of work in Moles's aesthetics, and it does even more work in an AI context. When a language model produces a correct proof, a functioning program, or a persuasive argument, it is operating in the semantic register where its training distribution has the most traction. The pattern it has learned — what correct proofs look like, what working code looks like, what persuasive arguments look like — is substantially encoded in the statistical regularities of its training corpus.
The problem is that semantic competence can be mistaken for aesthetic competence, and the mistake has consequences. The engineer who accepts AI-generated code without understanding it is relying on semantic correctness that may not transfer across codes — the code works in the context the model predicted, but may fail in the context the engineer actually inhabits. The writer who accepts AI-generated prose is often accepting semantic adequacy (the sentences parse, the arguments follow) at the cost of the aesthetic specificity that would have made the prose hers.
The Orange Pill captures this in its account of productive addiction: the output is real, the semantic content is genuine, and yet something is being lost that the productivity metrics do not capture. Moles would name the missing component precisely. What is lost is aesthetic information — the specific voice, the earned judgment, the trace of a particular person having thought a particular thing.
The practical implication for institutional design is that measuring AI output by semantic criteria alone will systematically over-value the tool's contribution and under-value the human contribution. The receiver's problem in the AI age is, in part, the problem of reading past semantic correctness to evaluate whether a message carries the aesthetic information that distinguishes genuine human communication from competent redundancy.
Moles formalized the distinction in his 1958 treatise, drawing on the Shannon-Weaver model of communication but insisting that Shannon's purely quantitative measure of information was insufficient for cultural analysis. The semantic/aesthetic split was his proposed correction — a way of preserving Shannon's mathematical rigor while addressing the specifically cultural dimensions of meaning that Shannon had explicitly set aside.
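To make concrete the quantitative baseline Moles was correcting, Shannon's measure assigns information content from symbol probabilities alone, with no reference to meaning. A minimal sketch (the distributions here are invented for illustration):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero-probability symbols."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform four-symbol code carries the maximum 2 bits per symbol...
uniform = [0.25, 0.25, 0.25, 0.25]
# ...while a skewed distribution carries less: predictability is redundancy.
skewed = [0.7, 0.1, 0.1, 0.1]

print(shannon_entropy(uniform))  # 2.0
print(shannon_entropy(skewed))   # about 1.36
```

Note what the measure cannot see: a string of nonsense and a proof of equal symbol statistics score identically. That blindness to meaning is exactly the gap the semantic/aesthetic split was built to address.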
Semantic information survives translation: it is definable as the component of a message preserved across any adequate recoding.
It is what benchmarks measure. Every test of AI capability — from code correctness to factual accuracy to argumentative coherence — operates primarily in the semantic register.
AI produces it reliably. This is the most robust and least controversial capability of contemporary language models.
It can be mistaken for more. Semantic adequacy often disguises itself as aesthetic adequacy, especially when the reader lacks the expertise to distinguish them.
Its abundance creates the curation problem. When semantic information is cheap, the scarce resource becomes the judgment that separates signal from redundancy.
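The first property above, invariance under recoding, can be sketched as a toy. The claim schema below is invented for illustration and owes nothing to Moles; the point is only that the same proposition, serialized into two different codes, decodes to the same structure even though the surface forms differ entirely.

```python
import json

# A toy proposition: the "semantic" content as a structured record.
# (Schema invented for illustration.)
claim = {"subject": "water", "relation": "boils_at", "value": 100, "unit": "celsius"}

# Two different codes for the same content.
as_json = json.dumps(claim, sort_keys=True)
as_pairs = ";".join(f"{k}={v}" for k, v in sorted(claim.items()))

def decode_json(msg):
    return json.loads(msg)

def decode_pairs(msg):
    out = {}
    for pair in msg.split(";"):
        key, val = pair.split("=")
        out[key] = int(val) if val.isdigit() else val
    return out

# The semantic component survives translation across both codes...
assert decode_json(as_json) == decode_pairs(as_pairs) == claim
# ...even though the surface carriers differ entirely.
assert as_json != as_pairs
```

Aesthetic information, on this analogy, lives in everything the decoders discard: the ordering, the punctuation, the texture of the carrier.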
The tension between Moles's framework and its materialist critique depends entirely on which question we are asking. If we are analyzing how individual users experience AI outputs, the semantic/aesthetic distinction is nearly exact: it precisely names why generated text feels both correct and empty. The framework elegantly explains the uncanny valley of AI prose: semantically adequate, aesthetically vacant. But shift the question to whose semantics get encoded as "correct," and the materialist critique dominates. The training corpora aren't neutral archives; they're highly curated selections that encode specific cultural assumptions about valid knowledge.
The synthesis emerges when we recognize that both dynamics operate simultaneously. AI systems are remarkably good at producing semantic information within the epistemological boundaries of their training data—this is both their profound capability and their fundamental limitation. The semantic/aesthetic split remains analytically powerful, but it needs supplementing with an account of semantic plurality. Different communities operate with different semantic systems, and AI's apparent universality masks its specificity. The "correctness" it produces is correctness within a particular tradition of formalization, one that privileges written over oral, explicit over tacit, propositional over relational knowledge.
The practical implication cuts both ways. Yes, institutions that measure only semantic adequacy will systematically undervalue human contribution; but they will also systematically reproduce the semantic hierarchies embedded in their measurement systems. The receiver's problem isn't just distinguishing semantic from aesthetic information; it's recognizing that semantic information itself comes in varieties, and AI systems have been trained to produce only certain kinds. The framework needs expansion: beyond semantic and aesthetic, we need to track whose semantics are being preserved in translation.