Every large language model has a statistical center of gravity: the patterns, phrasings, approaches, and solutions that appear most frequently in its outputs because they appeared most frequently in its training corpus. This center is not bias in the pejorative sense — it is mathematical reality. The model has learned the distribution of human output and generates from the center of that distribution more readily than from its tails. The structural consequence is that unconventional solutions, experimental approaches, and edge-case possibilities are not impossible to elicit but are improbable in the technical sense: generated less frequently, offered less readily, and surfaced only through deliberate effort against the statistical grain.
The center of gravity is the mechanical core of the cognitive filter bubble. It operates through the ordinary mathematics of probabilistic generation: the model predicts the most probable next token given the preceding context, and the high-probability region of the distribution is, by definition, the region containing the most conventional continuations. Temperature settings modulate this tendency — higher temperatures broaden the distribution and increase the probability of unusual outputs — but temperature cannot selectively increase the probability of valuable surprises while leaving unhelpful randomness unchanged.
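The mechanics can be made concrete with a small sketch. The snippet below uses hypothetical logits (illustrative numbers, not drawn from any real model) and the standard temperature-scaled softmax: lowering the temperature sharpens the distribution around the conventional continuation, while raising it shifts probability toward all of the tails at once, with no way to favor the valuable ones.

```python
import math
import random

def token_probs(logits, temperature=1.0):
    """Standard temperature-scaled softmax over raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four continuations of a prompt:
# one conventional, three unusual.
logits = [5.0, 1.0, 1.0, 1.0]

p_low = token_probs(logits, temperature=0.7)   # sharper: center dominates
p_high = token_probs(logits, temperature=1.5)  # flatter: every tail gains

# Repeated sampling makes the statistical grain visible: the conventional
# continuation is drawn far more often than all the unusual ones combined.
counts = [0, 0, 0, 0]
rng = random.Random(0)
for _ in range(10_000):
    counts[rng.choices(range(4), weights=p_low)[0]] += 1
```

Note that the temperature knob rescales every logit by the same factor, which is why it broadens or narrows the whole distribution uniformly rather than selecting for useful surprises.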
The builder who works with the model receives outputs drawn from the center. These outputs are genuinely valuable — they represent the distilled conventional wisdom of the training corpus, the accumulated best practices of the fields on which the model was trained. The cost is at the margins: the solutions lying at the edges of the distribution, the experimental approaches that might fail spectacularly or produce a breakthrough, are statistically suppressed by the generation mechanism.
The suppression is not deliberate. It is structural. The model assigns lower probability to edge cases, generates them less frequently, and offers them less readily. A builder who wants the unconventional must work against the model's statistical grain to obtain it, and most builders, most of the time, do not — because the conventional solution is right there, competent and immediate, and the deadline is real. The center of gravity thus operates through the mechanics of satisficing: the immediate adequacy of the convergent output forecloses the search that would have revealed the divergent possibilities.
This dynamic produces the chasm of mediocrity that Brian Eno diagnoses in adjacent terms — the statistical average toward which AI-generated creative work inexorably gravitates. The chasm is not a failure of the models. It is the models operating as designed, which is why merely improving the models will not close it.
The concept draws on established statistical properties of neural language models, formalized across the literature on next-token prediction, sampling strategies, and the relationship between training distribution and output distribution. Its application as a diagnostic framework for cognitive confinement originates in the filter-bubble tradition and the broader critique of algorithmic systems as shapers of human capability rather than neutral tools.
Probability is the bubble's mechanism. The model generates from the center of the distribution because that is what probabilistic generation means; no deliberate bias is required.
Temperature cannot substitute for taste. Broadening the distribution produces more variance, not more value; genuine serendipity requires sagacity that temperature settings do not supply.
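The variance-versus-value point can be sketched directly (again with hypothetical logits): if the model assigns the same score to a valuable surprise and to plain noise, temperature reweights them identically at every setting, so no amount of broadening can tell them apart.

```python
import math

def token_probs(logits, temperature):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three continuations: a conventional answer,
# a valuable surprise, and nonsense. The surprise and the nonsense
# happen to share the same logit.
logits = [4.0, 0.5, 0.5]

cold = token_probs(logits, temperature=0.5)  # conventional dominates further
hot = token_probs(logits, temperature=2.0)   # both tails gain, equally

# At every temperature the surprise and the nonsense remain exactly tied:
# the knob adds variance without adding any selection for value.
```

Distinguishing the surprise from the noise would require changing the logits themselves, which is a question of judgment upstream of the sampler, not of the sampling temperature.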
The center is genuinely valuable. Conventional wisdom is not nothing — it is the product of millions of hours of accumulated human effort, and receiving it has real worth.
The cost is at the margins. What is suppressed is not the average but the unusual, which is exactly where breakthroughs historically live.