Moore's Law began as the most modest kind of scientific claim: a trend line drawn through six data points in the spring of 1965, published in Electronics magazine by a thirty-six-year-old chemist at Fairchild Semiconductor. Moore did not propose a theory, a mechanism, or an equation derived from first principles — he noticed a pattern, stated it plainly, and extrapolated it ten years forward. The prediction of sixty-five thousand components per chip by 1975 proved almost exactly correct, and the line held for fifty years. What made the observation a law was not the physics but the economics: each doubling reduced cost per transistor, cheaper computation enabled larger markets, larger markets justified larger investments in the next doubling, and the cycle synchronized an entire global industry to a metronome Moore had set by drawing six points on semi-logarithmic paper.
The distinction that mattered most about Moore's original framing — and the distinction the AI discourse has been slowest to absorb — is that he was not predicting capability. He was predicting cost. The 1965 paper was not about what integrated circuits could do; it was about what they would cost. More components per chip meant lower cost per component, and the article's actual argument was economic: integrated circuits were going to become so cheap that they would be embedded in everything. Home computers, automatic controls for automobiles, and personal portable communications equipment — every prediction came true, not because the chips became powerful but because they became cheap.
The observation acquired the force of a law through a self-reinforcing feedback loop that cybernetics would recognize immediately. Semiconductor companies planned research investments around the expectation of the next doubling. Equipment manufacturers designed lithography tools to meet the timeline. Software developers wrote programs that would require the processing power the next generation of chips would deliver. The entire technology ecosystem synchronized itself to a schedule Moore had merely observed. The river of intelligence that Edo Segal describes in The Orange Pill — the cognitive substrate on which AI now runs — flows through channels Moore's doublings carved.
Moore himself was characteristically modest about what the observation meant. In a 2015 interview marking the fiftieth anniversary of the paper, he said simply: 'I just extrapolated. At the time I wrote the article, I thought I was just showing a local trend.' The imagination-to-artifact ratio that Segal identifies as the AI revolution's central compression is, in Moore's framework, the latest stage of a cost reduction that began with the first integrated circuit. Each layer of abstraction — from assembly language to compilers to frameworks to cloud infrastructure to natural language — reduces the translation cost between human intention and machine execution. AI did not invent this compression. It completed it for a significant class of work.
The framework also contains a warning: a self-fulfilling prophecy is not a law of nature. Moore's Law held for fifty years not because physics guaranteed it but because the semiconductor industry found ways to keep the cycle profitable. When the cycle threatened to break — quantum effects at nanometer scales, thermal limits, escalating fabrication costs — the industry reorganized itself to sustain the trajectory. New materials. New architectures. New business models. The observation survived because the ecosystem adapted. The AI scaling laws will face analogous pressures, and their survival will depend on comparable adaptation.
Moore's 1965 paper, 'Cramming More Components onto Integrated Circuits,' was commissioned by the editor of Electronics magazine as a short forecast of what the integrated circuit business would look like over the next decade. Moore looked at the data available since the invention of the planar integrated circuit in 1959: four components in 1962, eight in 1963, sixteen in 1964. He plotted the points on semi-logarithmic paper, drew a line through them, and extended the line forward. The extrapolation gave sixty-five thousand components per chip by 1975, a number so far beyond what seemed plausible that Moore himself assumed the trend would hold for a decade at most.
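The extrapolation is simple enough to check. A minimal sketch of the semi-log trick, using only the three data points quoted above (on semi-logarithmic paper, an annual doubling is a straight line with slope one in log-base-2):

```python
import math

# Data points quoted in the text: (year, components per chip).
points = [(1962, 4), (1963, 8), (1964, 16)]

# On semi-log paper, doubling per year means log2(components) rises by 1 per year.
slopes = [
    (math.log2(c2) - math.log2(c1)) / (y2 - y1)
    for (y1, c1), (y2, c2) in zip(points, points[1:])
]
doubling_per_year = sum(slopes) / len(slopes)  # exactly 1.0 for these points

# Extending the line from the 1962 point:
projection_1975 = 4 * 2 ** round(doubling_per_year * (1975 - 1962))
print(projection_1975)  # 32768
```

From these three points alone the line lands at 32,768 by 1975; Moore's published chart, which began near a single component in 1959, reaches 2^16 = 65,536 by 1975, the 'sixty-five thousand' of the prediction. Either way, the method is nothing more than a ruler on log paper.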
The observation was later refined — from annual to two-year doubling — and named 'Moore's Law' by Caltech professor Carver Mead in the 1970s. Moore spent the rest of his career inside the industry his observation organized, co-founding Intel in 1968 and serving as its CEO and chairman for decades. He died in 2023, months before the AI systems his doublings had made possible began reshaping the global economy.
Cost over capability. Moore's 1965 paper was an economic prediction, not a performance one. More components per chip meant lower cost per component, and cost — not capability — determined who would use the technology.
Self-fulfilling prophecy. The observation organized investment, research, and manufacturing schedules across the entire semiconductor industry, ensuring that the doublings continued because everyone planned for them.
Physics sets ceiling, economics sets floor. What was possible at any moment was determined by physics; what was profitable was determined by economics. The interplay between the two sustained the curve for fifty years.
Not a law of nature. Moore's Law was a social phenomenon sustained by economic incentives. When dimensions saturated, the industry rotated — from clock speed to parallelism, from 2D to 3D transistor structures — rather than abandoning the trajectory.
The framework transfers. The same self-reinforcing cycle of cost reduction, market expansion, and reinvestment operates in every exponential technology, including the AI scaling laws that now drive hundreds of billions of dollars in investment.
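The economic core of the list above can be made concrete. A toy sketch, assuming (purely for illustration) that the cost of fabricating a chip stays roughly flat while the components on it double — a deliberate simplification of Moore's 'minimum cost' argument, not a claim from the text:

```python
def cost_per_component(initial_cost, doublings):
    # If chip cost is roughly constant and components per chip double,
    # each doubling halves the cost per component.
    return initial_cost / 2 ** doublings

# Ten doublings cut cost per component by roughly a factor of a thousand:
print(cost_per_component(1.0, 10))  # 0.0009765625
```

The point of the sketch is the compounding: ten doublings is a thousandfold cost reduction, twenty is a millionfold, and it is that collapse in cost per component, not any single generation's capability, that drove the market expansion and reinvestment in the cycle.
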
The question that divides contemporary commentators is whether Moore's Law is ending, transforming, or continuing in disguise. Traditional transistor-density scaling has slowed since the mid-2010s, leading some to declare the law dead. Others — including Jensen Huang — argue that the AI scaling laws represent 'Moore's Law squared,' a continuation of the exponential through different units of measure. Moore himself, interviewed repeatedly in his later years, remained measured: the law would not hold indefinitely, 'simply due to the nature of exponentials,' but the industry had repeatedly found ways to rotate onto new dimensions when old ones saturated.