Biological monocultures are efficient and catastrophically fragile. A field of uniform wheat produces predictable yield and collapses completely when a pathogen matched to its single vulnerability arrives. The optimization that produced the efficiency is the same optimization that eliminated the redundancy needed to survive the crisis. David Pye observed a structurally identical pattern in the history of making. Risk workmanship produces diverse objects — not because the maker intends variation but because variation is a structural consequence of the process. Certainty workmanship produces uniform objects; uniformity is the point. When AI becomes the dominant mode of knowledge work, the diversity that risk work produces as a byproduct is replaced by the competent average the model produces by design.
There is a parallel reading that begins not with the outputs of AI systems but with the material conditions of their existence. The competent average is not merely a technical artifact of statistical learning—it is the only output possible given the political economy of AI development. Training large models requires data centers that consume as much electricity as small nations, hardware supply chains controlled by a handful of companies, and capital concentrations that make the Gilded Age look egalitarian. This infrastructure does not accidentally produce uniformity; uniformity is its business model.
The lived experience of those whose work AI replaces tells a different story than the abstract concern about diversity. The junior programmer in Bangalore, the paralegal in Detroit, the copywriter in Manchester—they do not experience the loss of "risk workmanship" as aesthetic deprivation but as economic annihilation. When their employers adopt AI tools, the question is not whether the output retains handmade charm but whether they retain employment. The competent average becomes a ceiling not through gradual cultural shift but through immediate material pressure: firms that don't adopt it fail, workers who can't exceed it starve. The monoculture Pye fears is already here, not as convergence toward a statistical mean but as convergence toward the only forms of production that trillion-dollar infrastructure can sustain. The breakthrough innovations celebrated as emerging from the tails—Xerox PARC, the World Wide Web—themselves required institutional support and material resources that are now being redirected toward AI development. The substrate determines the superstructure, and the substrate of AI is a concentration of computational power that makes diversity not unlikely but impossible.
The competent average is not incompetent. This is what makes it difficult to criticize. It is better than the worst human output, comparable to the median, produced at a fraction of the cost. By every metric that evaluates output as commodity — speed, consistency, adequacy — the competent average is a triumph of engineering. But it is worse than the best human output, and the difference is qualitative. The specifically excellent — work that could only have been produced by this particular mind grappling with this particular problem — lives in the tails of the distribution, not at the center.
The model's architecture moves toward the probable, and the probable is, by mathematical definition, not the exceptional. The greatest innovations in every creative field emerged from the tails: the graphical user interface at Xerox PARC, Tim Berners-Lee's specific frustration that produced the World Wide Web, Alexander Fleming's observation of the contaminated petri dish. No optimization algorithm would have generated these, because they created new categories rather than optimizing within existing ones. Risk workmanship produces these moments as a structural byproduct of its nature. Certainty workmanship does not, because the apparatus is designed to eliminate exactly the kind of variation from which they emerge.
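The pull toward the probable can be made mechanical. The toy distribution below is an illustrative assumption, not data from any real model: greedy decoding always returns the mode, and temperature-sharpened sampling drives an already-thin tail toward zero.

```python
import math
import random

# Toy next-token distribution: a heavy "center" and a thin "tail".
# The strings and probabilities are invented for illustration.
vocab = {
    "the expected phrase": 0.55,
    "a competent variant": 0.30,
    "a serviceable cliche": 0.14,
    "a category-breaking idea": 0.01,  # the tail
}

def greedy(dist):
    """Pick the single most probable continuation (argmax decoding)."""
    return max(dist, key=dist.get)

def sample(dist, temperature=1.0):
    """Sample after temperature scaling; T < 1 sharpens toward the mode."""
    weights = {k: math.exp(math.log(p) / temperature) for k, p in dist.items()}
    total = sum(weights.values())
    r = random.random() * total
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k
    return k

random.seed(0)
print(greedy(vocab))  # the mode, every time
draws = [sample(vocab, temperature=0.7) for _ in range(10_000)]
tail_rate = draws.count("a category-breaking idea") / len(draws)
print(f"tail frequency at T=0.7: {tail_rate:.4f}")  # below the 1% prior
```

The sharpening is the point of the sketch: even a modest temperature of 0.7 cuts the tail's 1% prior mass several-fold, and greedy decoding eliminates it entirely.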
When an entire profession shifts toward certainty workmanship — when most code, briefs, essays, and analyses are generated by the same apparatus trained on the same data — the cultural output converges. Not catastrophically, not immediately, but gradually, the way a river delta silts up: each deposit barely perceptible, the cumulative effect transforming the landscape. The convergence affects not only what is produced but what is conceived. Practitioners accustomed to the model's range internalize it as the horizon of the possible.
The economic logic reinforces the convergence. The practitioner who maintains risk practice — who periodically returns to manual work to preserve the tacit foundation — is, by every visible metric, less productive than the practitioner who cedes everything to the apparatus. The market does not reward the maintenance. The quarterly review cannot see its value. Monoculture wins on short-term economics, which is why every monoculture wins until the pathogen arrives.
Pye developed the diversity-versus-uniformity distinction through his analysis of what he called the aesthetic of diversity — the specifically handmade quality that the factory cannot replicate because the factory's virtue is the elimination of variation. He insisted this was not a sentimental preference for imperfection but a structural observation about what different production modes make possible.
The extension to AI-era knowledge work is direct. The chasm of mediocrity Brian Eno identified in language-model output names the same phenomenon from the artist's perspective: the statistical average toward which generative systems inexorably gravitate, the comfortable professional middle that the architecture is engineered to produce.
Monoculture efficiency, monoculture fragility. Uniformity maximizes output under stable conditions and fails completely under perturbations it was not optimized to resist.
The competent average. AI output clusters around a professional median that is adequate for most purposes and incapable of the specifically excellent.
The tails are where breakthrough lives. Historical innovations emerged from the improbable, which is structurally outside the model's range.
Conception narrows with production. Practitioners calibrated to the model's range find the specifically improbable harder to conceive, not forbidden but foreclosed.
Economic reinforcement. Markets reward the center and cannot price the diversity that risk work produces; short-term incentives accelerate the convergence.
Optimists argue that AI expands the creative horizon by lowering barriers — that more people can now produce work, including work at the tails. The Pye framework grants the point about access and presses the harder question: whether the tools that democratize production structurally favor the center enough to extinguish the tails even as more people reach the center.
The tension between Edo's cultural-evolutionary account and the infrastructure critique resolves differently depending on which timescale and which actors we examine. For immediate economic impacts (the next five years, the workers directly affected), the infrastructure reading dominates, perhaps 80/20: the political economy of AI development does create winner-take-all dynamics that compress possibility spaces faster than any cultural drift. Asked what determines AI's uniformity, the answer weights material conditions over statistical architecture, perhaps 70/30.
Yet when we shift to longer timescales and examine how cognitive practices evolve, Edo's framework gains explanatory power, perhaps 60/40 in his favor. The infrastructure critique explains why uniformity emerges but not how practitioners internalize it as natural. The Pye framework captures something the political economy misses: how tools reshape not just what we produce but what we imagine producing. Both views are right that monoculture is the outcome; they differ in locating its engine.
The synthesis requires holding both the substrate and the statistical together. AI's competent average emerges from a double bind: technically, because models minimize loss functions that favor probable outputs; politically, because only massive capital concentrations can build these models. The cultural convergence Edo tracks and the material concentration the infrastructure critique identifies are not competing explanations but mutually reinforcing processes. Perhaps the right frame is "infrastructural lock-in"—where technical architecture and political economy create a gravity well that pulls both production and conception toward a center that becomes harder to escape not because it is optimal but because it controls the means of computation. The tails still matter, but they matter differently when the infrastructure to reach them is owned by entities whose business model depends on the center.
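The technical half of that double bind admits a small worked sketch. The corpus frequencies and candidate models below are invented for illustration; by Gibbs' inequality, cross-entropy against the data is minimized exactly when the model reproduces the empirical distribution — that is, the center.

```python
import math

# Empirical next-token frequencies in a toy corpus: mass sits at the center.
# The strings and numbers are illustrative assumptions, not real corpus data.
empirical = {"median phrasing": 0.7, "house style": 0.25, "wild idea": 0.05}

def cross_entropy(p_data, q_model):
    """Expected negative log-likelihood of the model under the data."""
    return -sum(p * math.log(q_model[k]) for k, p in p_data.items())

# A model that spreads mass toward the tails...
tail_heavy = {"median phrasing": 0.4, "house style": 0.3, "wild idea": 0.3}
# ...versus one that matches the empirical center exactly.
center_matched = dict(empirical)

loss_tails = cross_entropy(empirical, tail_heavy)
loss_center = cross_entropy(empirical, center_matched)
print(f"tail-heavy loss: {loss_tails:.3f}, center-matched loss: {loss_center:.3f}")
assert loss_center < loss_tails  # training pressure points at the center
```

Under this objective, any deliberate allocation of probability to the tails is penalized; the loss landscape itself enforces the convergence the essay describes, before any question of ownership or capital enters.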