In 1950, Detroit had the highest per-capita income of any American city. Its single industry — automobile manufacturing — produced wages, tax revenues, and civic infrastructure that were the envy of every other metropolitan area. By every metric the economists used, Detroit was thriving. Jacobs looked at Detroit and saw a city dying. What she saw was monoculture — an economy so dominated by a single industry that the diversity from which genuine economic vitality springs had been systematically suppressed. The automakers were so large, so profitable, and so dominant that they absorbed the talent, capital, and civic attention that a diverse economy requires. The small enterprises Jacobs identified as the seedbed of innovation could not compete. When the automobile industry contracted, Detroit had nothing to fall back on.
The monoculture risk in AI is not the risk of a single industry dominating a single city. It is the risk of a single tool — or a small number of functionally similar tools — mediating the creative and productive output of an entire economy. When the marketing manager, the teacher, the architect, the developer, the writer, the designer, the analyst, and the strategist all use the same AI system to produce their work, the diversity of their outputs converges. Not toward the best possible output in each domain, but toward the outputs the AI system is best equipped to produce — which is to say, toward the patterns most heavily represented in the training data.
The convergence operates at multiple reinforcing levels. At the level of style, AI-mediated work develops a recognizable aesthetic: prose in a specific register, visual design with a specific look, code following the conventions most common in the largest training datasets. At the level of approach, the AI suggests solutions weighted by their likelihood under the training distribution — not necessarily the best solutions but the most common ones. The developer who accepts the suggestion without considering alternatives is not lacking capability; she is following the path of least resistance, and the path's direction is determined by the model's weights.
Jacobs observed this dynamic in cities dominated by a single large employer. The employer's approach became the city's approach. Not because alternatives were forbidden, but because the dominant enterprise set the terms of professional culture so thoroughly that alternatives became invisible. The young engineer in Detroit did not consider alternative manufacturing methods because the methods used by Ford were the only methods she encountered. The monoculture was self-reinforcing: dominance suppressed visibility of alternatives, and invisibility reinforced dominance.
AI monoculture operates through the same mechanism, accelerated by feedback. When a tool suggests an approach, the suggestion becomes the default. The default is adopted. The adoption reinforces the default. The data generated enters the training pipeline. The next version produces the default with greater confidence. The cycle tightens. The creative economy converges on the patterns the model knows best, and the patterns the model knows best are the patterns the economy has already converged on. This is the concern Segal raised through his engagement with Byung-Chul Han's critique of smoothness; the Jane Jacobs framework sharpens it by specifying the economic mechanism.
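The tightening is easy to make concrete. The toy simulation below is a minimal sketch, not a measurement of any real system: the number of competing approaches, the adoption rate, and the sharpening exponent are all hypothetical parameters chosen for legibility. Each generation, a tool suggests approaches in proportion to a sharpened version of past usage frequencies, most users accept the suggestion, and the resulting usage becomes the next generation's training data. Diversity, measured as Shannon entropy, typically collapses within a few generations.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(p):
    """Shannon entropy of a distribution over approaches, in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

n_approaches = 20                                   # hypothetical: 20 competing approaches
usage = rng.dirichlet(np.full(n_approaches, 5.0))   # diverse (but uneven) initial usage
adoption_rate = 0.9                                 # hypothetical: 90% accept the suggestion
sharpening = 2.0                                    # model overweights frequent patterns

for generation in range(10):
    # The tool's suggestion distribution: past usage frequencies, sharpened.
    suggestion = usage ** sharpening
    suggestion /= suggestion.sum()
    # Most users adopt the suggestion; the rest choose independently
    # (uniformly at random here). The blend becomes the next generation's
    # training data, closing the feedback loop.
    independent = np.full(n_approaches, 1.0 / n_approaches)
    usage = adoption_rate * suggestion + (1 - adoption_rate) * independent
    print(f"gen {generation}: entropy = {entropy_bits(usage):.2f} bits, "
          f"dominant approach share = {usage.max():.2f}")
```

Note that the minority of independent choosers keeps a floor under the entropy; the cycle does not need to eliminate alternatives, only to marginalize them, which mirrors the absorption mechanism of the Detroit case.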
The fragility of monoculture is structural. An economy in which all builders use the same approaches is an economy in which all builders share the same blind spots, the same vulnerabilities, the same failure modes. When the approach fails — when the pattern the model favors turns out to be wrong, or when conditions change in ways the training data does not represent — the entire economy fails in the same way, at the same time, for the same reason. This is not hypothetical. The Irish potato famine was a monoculture failure. The 2008 financial crisis was a monoculture failure: the same risk models, trained on the same data, deployed by every major institution. Model collapse is the specifically AI-native mechanism by which the same pathology might tighten over successive training generations.
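The model-collapse mechanism admits an equally compact illustration. The sketch below is a stylized toy under loud assumptions, not a claim about any particular system: the Gaussian data, the 2-sigma cutoff, and the sample sizes are all chosen for legibility. Each generation fits a distribution to the previous generation's outputs, samples from the fit, and keeps only the most typical samples, mimicking the documented tendency of generative pipelines to under-represent the tails of their training data. The spread of the data shrinks monotonically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "real" data for generation zero: a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(10):
    mu, sigma = data.mean(), data.std()
    tail_mass = (np.abs(data) > 2.0).mean()  # mass beyond the original 2-sigma marks
    print(f"gen {generation}: std = {sigma:.3f}, original-tail mass = {tail_mass:.4f}")
    # Fit a Gaussian to the current data, sample from the fit, and keep
    # only samples within 2 sigma of the fitted mean: a stylized stand-in
    # for a pipeline that retrains on its own most typical outputs.
    samples = rng.normal(loc=mu, scale=sigma, size=20_000)
    typical = samples[np.abs(samples - mu) < 2.0 * sigma]
    data = typical[:10_000]
```

Each round of truncated refitting multiplies the standard deviation by roughly 0.88 (the standard deviation of a normal truncated at two sigma), so the tails vanish first and the distribution contracts toward its mode: the tightening over successive training generations named above, rendered in miniature.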
Jacobs developed the concept in The Economy of Cities (1969), with Detroit as the paradigmatic case. The framework was extended to urban renewal more broadly in her later writing, where she identified monoculture as one of the reliable signatures of catastrophic failure. The AI-era application builds on Segal's engagement with convergence in The Orange Pill and on the model-collapse literature documenting statistical pathologies in AI training pipelines.
Monoculture looks like success. By every growth metric, Detroit was thriving when it was structurally most fragile.
The mechanism is absorption, not suppression. Alternatives are not forbidden; they are starved of talent, capital, and attention.
Convergence operates at multiple levels. Style, approach, method, and infrastructure all narrow simultaneously, each reinforcing the others.
Feedback is self-reinforcing. Dominance suppresses alternatives, and the invisibility of alternatives reinforces dominance, at a tempo AI training pipelines accelerate.
Fragility is the cost of efficiency. Shared dependencies produce shared failure modes.
Defenders of current AI development argue that convergence concerns are overstated: open-source alternatives exist, skilled practitioners remain capable of independent judgment, and the historical record suggests that dominant technologies produce their own counter-movements. Critics — including many in the AI safety and interpretability communities — respond that the empirical evidence of convergence in AI outputs is already substantial, and that the feedback-amplification mechanism is structurally distinct from previous technology transitions because the tool participates in its own training data. The Jane Jacobs volume's argument does not depend on predicting which trajectory will prevail; it specifies the institutional conditions under which genuine diversity can be maintained regardless.