In evolutionary biology, adaptation is not a metaphor. It is a technical concept with precise meaning: an adaptation is a trait shaped by natural selection to perform a specific function in a specific environment. The polar bear's white fur was selected for camouflage in snow. The hummingbird's long bill was selected for extracting nectar from tubular flowers. Each adaptation is a solution to a specific challenge, and the solution is specific to the challenge. Change the environment, and the adaptation may become a liability. Mayr emphasized this specificity throughout his career, because the tendency to treat adaptation as a general property obscures the most important fact: adaptation is always relational. An organism is not adapted in the abstract. It is adapted to an environment.
There is a parallel reading that begins not with adaptation as metaphor but with the physical requirements of intelligence itself. The biological framework assumes transitions happen between niches that share fundamental properties, where oxygen remains oxygen, physics remains physics, and energy flows obey the same thermodynamics. But the shift to AI represents something categorically different: a transition from carbon-based cognition operating at room temperature on minimal energy to silicon-based computation requiring vast server farms, rare earth minerals, and continental-scale power grids. The adaptation framework treats this as merely another niche transition, but it may be more like expecting gills to function in a vacuum.
The material substrate matters because it determines what kinds of adaptation are even possible. Human expertise evolved within specific energetic constraints — our brains consume 20 watts, operate on glucose, and repair themselves continuously. AI systems consume megawatts, depend on global supply chains, and require constant human maintenance. When we speak of developers "ascending" to higher-level work, we assume the substrate supporting that work will remain stable. But if AI's physical dependencies create systemic fragilities — grid failures, chip shortages, geopolitical disruptions — then the entire framework of niche transition becomes moot. The developers who retained "obsolete" skills in local computation, edge processing, and low-dependency systems may find themselves not maladapted but prescient. The evolutionary metaphor breaks down when one niche requires a functioning global industrial system and the other does not. Adaptation assumes environments change gradually enough for selection to operate. When the environment can collapse overnight due to substrate failure, the organisms perfectly adapted to AI-mediated work may discover they've specialized for conditions that were never sustainable.
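To make the energetic contrast concrete, here is the back-of-envelope arithmetic the paragraph implies. The 20-watt brain figure is the widely cited rough value; the one-megawatt figure for an AI cluster is an illustrative assumption of order of magnitude, not a measurement.

```python
# Back-of-envelope comparison of the two substrates' power budgets.
brain_watts = 20             # widely cited rough figure for the human brain
cluster_watts = 1_000_000    # assumed order of magnitude for an AI cluster (illustrative)

ratio = cluster_watts / brain_watts
print(f"One such cluster draws roughly {ratio:,.0f}x the power of a human brain")
# -> One such cluster draws roughly 50,000x the power of a human brain
```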
This relational character has a direct and uncomfortable application to Segal's ascending friction thesis, the argument that AI removes mechanical difficulty at one level and relocates it to a higher cognitive level. The thesis is well supported by historical analogy: each major abstraction in computing destroyed a form of expertise and created demand for expertise at a higher level. Assembly gave way to compilers, hand-rolled code gave way to frameworks, and server management gave way to cloud infrastructure.
But the ascending friction thesis carries an assumption Mayr's framework makes explicit: that people who excelled at the lower level can ascend to the higher level. The senior developer whose debugging built architectural intuition is supposed to redirect that intuition toward product judgment. Mayr's framework challenges this directly. Adaptation is specific to the niche. A fish's gills are superb for extracting oxygen from water; on land they are useless, and the fish suffocates. The transition from aquatic to terrestrial life required not the transfer of aquatic adaptations but the development of entirely new ones.
Some developers will make the transition. The ones whose existing skills happen to include components useful in the new environment — judgment, breadth, the habit of asking why before asking how — will ascend naturally. The ones whose skills are narrowly adapted to the old environment may find their adaptations, superb in the old niche, irrelevant in the new one. This is not failure. It is a consequence of the specificity of adaptation.
The adaptability paradox follows: perfect adaptation to current conditions produces vulnerability to future conditions. The organism perfectly adapted to a stable environment has no slack, no extraneous capabilities, no traits useless now but useful later. Applied to organizations, the paradox explains why the most successful companies are often the most vulnerable to disruption. The Software Death Cross illustrates this: SaaS companies losing value were often those perfectly adapted to the pre-AI environment, with refined code, specialized teams, optimized processes, and no slack for radical reorientation.
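A toy simulation makes the paradox concrete. It is not from the source, and the Gaussian fitness function, population size, and shift magnitude are all arbitrary choices for illustration: while the environment is stable, the fully optimized population has the higher mean fitness, but after an abrupt shift only the population that kept "inefficient" variation has any individuals near the new optimum.

```python
import math
import random

def fitness(trait: float, optimum: float, width: float = 1.0) -> float:
    # Gaussian fitness: maximal when the trait matches the environmental optimum.
    return math.exp(-((trait - optimum) ** 2) / (2 * width ** 2))

def mean_fitness(population: list[float], optimum: float) -> float:
    return sum(fitness(t, optimum) for t in population) / len(population)

random.seed(0)
N = 1_000
optimized = [0.0] * N                                # perfect fit, zero variation
varied = [random.gauss(0.0, 1.0) for _ in range(N)]  # good fit on average, with slack

for optimum in (0.0, 2.5):                           # stable niche, then abrupt shift
    print(f"optimum={optimum}: optimized={mean_fitness(optimized, optimum):.3f}, "
          f"varied={mean_fitness(varied, optimum):.3f}")

# Selection after the shift can only act on variation that already exists.
survivors = sum(1 for t in varied if fitness(t, 2.5) > 0.5)
print(f"varied population: {survivors} individuals near the new optimum; optimized: 0")
```

Before the shift the optimized population wins (mean fitness 1.0 against roughly 0.7); after it, the varied population is the only one with individuals anywhere near the new optimum, which is the paradox in miniature.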
Mayr's emphasis on the specificity of adaptation ran throughout his career but crystallized in his engagement with the 1970s–1980s debates over adaptationism. The adaptability paradox itself has deep roots in population ecology and was articulated most forcefully by Richard Levins and others in the 1960s.
Adaptation is always to an environment. Traits do not make organisms fit in the abstract; they make organisms fit to specific conditions that may or may not persist.
The adaptability paradox. Optimization for current conditions eliminates the variation needed to adapt to new conditions. Perfect fit today produces vulnerability tomorrow.
Niche transitions are radical. Moving from one niche to another rarely involves transferring existing adaptations; it typically requires entirely new ones, which the organism may or may not have in reserve.
Variation as insurance. Organizations and individuals who maintain diverse, seemingly inefficient capabilities are better positioned to survive environmental change than those who have fully optimized.
Consciousness is an adaptation. Human intelligence evolved to solve specific problems — social coordination, prediction, extended planning — and its continuation depends on conditions that are contingent, not universal.
Strong adaptationists, including Richard Dawkins and Daniel Dennett, have argued that selection typically produces near-optimal solutions and that Mayr's emphasis on niche specificity understates the generality of many traits. Weaker adaptationists, whose position is closer to Mayr's, argue that while selection produces good solutions, those solutions are always constrained by history, developmental possibility, and niche specificity.
The right frame depends entirely on the timescale and scope of analysis. For individual career transitions over 5–10 years, Edo's adaptation framework dominates (80%): people really do face niche transitions where past expertise becomes obsolete, and the biological metaphor captures this personal disruption accurately. The contrarian substrate view matters here mainly as tail risk (20%); most developers won't face sudden AI infrastructure collapse, though some might experience regional or temporary disruptions that advantage those who maintained "primitive" skills.
At the civilization scale over 20–50 years, the weighting inverts. The contrarian's substrate concerns claim 70% validity because AI's material dependencies really do create novel fragilities unlike those of any previous technological transition. The adaptation framework still explains 30%, namely how individuals and organizations respond within whatever infrastructure exists, but it cannot address whether that infrastructure itself will persist. The question "can developers adapt to AI?" assumes AI remains available to adapt to.
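One way to keep the two estimates straight is to treat the weights as an explicit function of the analysis horizon. The sketch below is purely illustrative: the function name and the ten-year cutoff are assumptions, and the weights simply transcribe the rough percentages above.

```python
def framework_weights(horizon_years: float) -> dict[str, float]:
    """Rough validity weights for the two frames at a given horizon (illustrative)."""
    if horizon_years <= 10:
        # Individual career transitions: adaptation dominates, substrate is tail risk.
        return {"adaptation": 0.80, "substrate": 0.20}
    # Civilizational scale (20-50 years): material dependencies dominate.
    return {"adaptation": 0.30, "substrate": 0.70}

print(framework_weights(7))   # {'adaptation': 0.8, 'substrate': 0.2}
print(framework_weights(30))  # {'adaptation': 0.3, 'substrate': 0.7}
```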
The synthesis emerges when we recognize these are different questions operating at different scales. The adaptation framework excels at explaining psychological and economic transitions within stable technological regimes: why some experts thrive and others struggle when tools change. The substrate critique excels at identifying systemic dependencies that could invalidate entire technological regimes: why civilizations sometimes lose capabilities entirely. Both are correct within their scope. The complete picture requires tracking adaptation dynamics at the human scale while monitoring substrate stability at the system scale. The developers who will thrive might be those who adapt to AI while maintaining fallback capabilities, treating the current regime as one possible niche rather than as evolutionary destiny. This isn't hedging; it's recognizing that niche transitions and substrate shifts operate on different timelines and with different degrees of reversibility.