There is a parable about a king and a chessboard: one grain of rice on the first square, two on the second, four on the third, doubling each time. The king agrees, thinking the cost trivial. By the thirty-second square, the debt exceeds four billion grains. By the sixty-fourth, the total outweighs the world's annual rice production. The parable illustrates the power of exponential growth. Moore's career illustrates something more subtle: exponential growth is invisible to the person living through it. Each doubling, experienced in real time, feels like incremental improvement. The cumulative effect is astonishing only in retrospect.
There is a parallel reading of the exponential narrative that begins not with the elegance of Moore's chessboard but with who pays for the doubling. Each generation of semiconductor advancement required not just clever engineering but exponentially increasing capital expenditure, energy infrastructure, and material throughput. Taiwan Semiconductor's 2nm fab costs exceed $20 billion; the water consumption of a single advanced facility rivals that of a mid-sized city. The 'invisible' exponential is invisible only if you're not the one building the power plants, mining the rare earths, or living downwind of the chemical waste streams. The astonishment Moore's engineers never felt was displaced onto populations who experienced the curve as disruption without consultation.
The AI training runs that now exceed the computational capacity of entire previous decades do not occur in a vacuum. They occur in data centers whose energy draw rivals that of small nations; the models are trained on corpora scraped without consent and deployed to automate labor at a pace that outstrips any social capacity for adjustment. The phenomenology that matters is not the subjective smoothness experienced by engineers on the inside but the material rupture experienced by everyone else. When exponential growth in capability meets linear growth in social adaptation, the gap is not a cognitive failure to grasp curves; it is a governance failure to regulate them. The chessboard parable ends with the king unable to pay; the modern version ends with externalized costs the curve's beneficiaries never had to account for.
Intel's 4004 microprocessor (1971) contained 2,300 transistors. The 8080 (1974) contained 4,500 — a doubling: a meaningful engineering achievement, but nothing that felt like revolution to the engineers who accomplished it. The 8086 (1978) contained 29,000. The 80286 (1982) contained 134,000. Each generation represented roughly a doubling; each felt, from inside, like the natural next step. The engineers were methodical, not astonished. They had a roadmap; they followed it. The cumulative effect of following that roadmap for five decades is a modern processor containing tens of billions of transistors — more computational power than all the computers on the planet combined when Moore wrote his 1965 paper.
The people most surprised by the AI transition of 2025 were, paradoxically, the people who had lived through every previous increment. They were the experienced technologists who had watched each interface transition arrive and concluded, reasonably, that they understood the trajectory. They had been on the chessboard for decades. What they did not grasp was where on the chessboard they were sitting. The first half is manageable: each doubling adds a quantity that stays within human intuition. The second half exceeds any reference frame the mind can supply. The AI transition sits squarely on the second half of the computational chessboard.
Consider the numbers: total compute used to train AI models has doubled approximately every six months since 2012, far faster than Moore's Law's two-year cadence. OpenAI researchers measured the post-2012 doubling time at 3.4 months for the most compute-intensive runs. Jensen Huang described the progression as 'Moore's Law squared.' By 2025, computational power applied to training a single frontier model exceeded what the entire semiconductor industry produced annually in the decade when Moore first made his observation. These numbers are not secrets — but they are second-half-of-the-chessboard numbers, and stating them does not convey their implications.
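What a 3.4-month doubling time means over a decade, compared with Moore's two-year cadence, is worth computing rather than intuiting. A sketch using the doubling times stated above (the function name is illustrative):

```python
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total multiplicative growth over a span, given a doubling time."""
    return 2 ** (years / doubling_time_years)

# Moore's Law cadence: 5 doublings per decade.
decade_moore = growth_factor(10, 2.0)      # 32x
# The 3.4-month cadence: ~35 doublings per decade.
decade_ai = growth_factor(10, 3.4 / 12)    # ~4.2e10x
print(decade_moore, decade_ai)
```

A decade of Moore's Law multiplies capability by 32; a decade at the 3.4-month cadence multiplies it by roughly forty billion. Both are exponentials, but only one stays within a human reference frame.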
Moore's career provides a corrective to this cognitive failure: respect the trend, even when it exceeds intuition. In engineering, the discipline takes the form of the roadmap. The International Technology Roadmap for Semiconductors existed precisely because engineers could not intuit exponential trajectories — they needed the roadmap to organize the present around a trajectory too steep for individual intuition. AI has no equivalent coordinated roadmap, which places the field closer to the early semiconductor industry, before coordination emerged. Moore's experience suggests the transition from competitive chaos to coordinated roadmap is inevitable for any technology on an exponential curve, because the economics of each doubling eventually exceed what single companies can bear.
The phenomenological observation is as old as exponential mathematics — the chessboard parable dates to medieval Persia — but its application to technology management emerged through the semiconductor industry's hard-won discipline of roadmap-based planning. Moore discussed the pattern implicitly across multiple interviews, noting that intuitions calibrated to linear progress systematically mislead on exponential curves.
Each doubling feels incremental. The subjective experience of exponential growth is linear; only the cumulative effect, viewed in retrospect, reveals the exponential character.
The chessboard has two halves. The first half is within human intuition; the second half exceeds it. The AI transition sits on the second half.
Roadmaps are cognitive prosthetics. The semiconductor industry's coordinated planning existed precisely because engineers could not intuit the trajectory.
Surprise is a failure of intuition. The people closest to the curve are often the most surprised by its consequences, because incremental familiarity obscures cumulative magnitude.
Preparation, not astonishment. The appropriate engineering response is to plan for the next doubling, not to marvel at the last one.
The core insight about subjective linearity holds completely (100%): engineers living through each doubling genuinely experienced incremental progress, not revolution. This is a reliable cognitive fact that explains systematic underestimation across domains. The corrective—build roadmaps, respect the trend—is correct engineering practice for anyone positioned inside the curve's production.
But the analysis is entirely from the vantage of the curve's architects. For populations experiencing exponential change as disruption rather than implementation, the phenomenology reverses: each increment arrives as shocking discontinuity because they lack the roadmap that makes change legible. A factory worker whose skill becomes obsolete doesn't experience a smooth doubling—they experience rupture. Here the contrarian reading carries full weight (90%+): the 'invisible' exponential is visible as violent transition to anyone outside the engineering cohort. The cognitive failure isn't inability to grasp curves; it's the mismatch between production tempo and social absorption capacity.
The synthesis the concept requires: exponential trajectories produce dual phenomenologies depending on position. Inside the curve (research labs, semiconductor fabs, AI companies), each step feels incremental and roadmaps provide orientation. Outside the curve (labor markets, institutions, populations), each crossing of a capability threshold produces discontinuous impact because no equivalent roadmap exists for social adaptation. The real governance challenge isn't teaching people to intuit exponentials—it's building the social infrastructure that makes exponential change navigable for those who didn't choose to be on the chessboard. Moore's discipline was admirable; it was also entirely internal to the industry producing the curve, which is precisely what made it insufficient as the curve's consequences exceeded technical domains.