The chasm of mediocrity is Brian Eno's compressed diagnosis of the fundamental problem with AI-generated creative work. When Eno tested a song generator trained to produce material in his own style, he found the outputs not too bad — competent, recognizable, adequate — but none good enough to release. The failure mode, he explained, was structural: the system was designed to produce the statistically probable, and the statistically probable, for a system trained on the aggregate of human creative output, is by mathematical necessity the average. The first task of any creative practitioner using such tools, Eno concluded, is to stop it going down into the chasm of mediocrity that it will always want to go into, because that's the way it's set up. The phrase captures, with characteristic concision, why AI's competence is the opposite of its creativity.
The phrase emerged from Eno's direct experimentation with generative AI tools trained on his own work. He found that the systems could reliably produce outputs recognizable as Brian Eno music — atmospheric, slowly evolving, texturally sophisticated — but that none of the outputs deserved release. The competence was genuine. The aesthetic was wrong. The system produced what an Eno track typically sounds like, which is by definition not what an interesting Eno track sounds like.
The diagnosis connects to a broader observation about how language models and other generative systems operate. These systems optimize for the most probable next token given their training distribution. The most probable output is, necessarily, the most typical output — the mode of the distribution, the configuration most similar to the largest number of training examples. This is not a flaw in the architecture; it is the architecture. The system cannot help gravitating toward the statistical middle, because the statistical middle is what the optimization target rewards.
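The pull toward the mode can be made concrete with a toy sketch. The distribution below is entirely made up for illustration (no real model assigns these tokens or probabilities); the point is that greedy decoding, which always takes the argmax, returns the most typical continuation every single time:

```python
# Hypothetical next-token distribution (illustrative numbers only).
next_token_probs = {
    "familiar": 0.55,    # the statistically typical continuation
    "adequate": 0.30,
    "surprising": 0.10,
    "strange": 0.05,     # the improbable choice a human artist might make
}

def greedy_pick(probs):
    """Greedy decoding: deterministically return the single most
    probable token -- the mode of the distribution."""
    return max(probs, key=probs.get)

print(greedy_pick(next_token_probs))  # -> 'familiar', every time
```

Under this decoding rule the "strange" token is never emitted, no matter how many times generation runs; deviation from the probable has to be engineered in, which is the structural point the paragraph above makes.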
The creative act, in Eno's framework, is precisely the deviation from the probable. The unexpected note, the unpredicted texture, the error that becomes a signature — these are the features of work that lasts, and they are all, by definition, improbable given the training distribution that preceded them. A system that produces only what the distribution predicts can produce nothing new, in the sense that matters. It can only produce remixes of the average.
The implication is not that AI tools are useless for creative work but that using them well requires active resistance to their default behavior. The practitioner who asks the machine for a song will receive a song — competent, generic, forgettable. The practitioner who asks the machine for something wrong, something broken, something that should not exist, may receive the seed of something that deserves to. The chasm is gravitational; climbing out requires sustained effort against the pull.
The phrase surfaced in interviews Eno gave in 2024 and 2025 about his experimentation with AI music tools. It represents the distillation of his empirical findings after direct engagement with generative systems, and it has become the most quoted single formulation in his AI-related commentary.
Probability is the architecture. Language models optimize for the most probable next token; the statistical average is the structural outcome, not a bug to be fixed.
Competence is the enemy of interest. The chasm is not a failure of capability — the outputs are competent — but a failure of distinctiveness; adequate work is forgettable work.
Active resistance is required. The practitioner must work against the system's gravitational pull toward the average; passive use yields average results.
The aggregate is not the exception. A system trained on the full range of human creativity produces the average of that range; it cannot produce the exceptional, because exceptional means statistically improbable.
Critics of the chasm diagnosis argue that temperature controls and advanced prompting techniques can push models toward less probable outputs, and that Eno's test understates what skilled prompt engineering can produce. The response, consistent with Eno's broader framework: the tools that push models off the average require practitioners who know what off-average looks like, which requires precisely the cultivated taste the tools cannot provide.