David Edgerton — On AI
Contents
Cover
Foreword
About
Chapter 1: The Innovation Illusion: Why Novelty Is Not the Point
Chapter 2: What People Actually Do with These Tools
Chapter 3: The Persistence of the Old
Chapter 4: Maintenance and Repair — The Invisible Majority
Chapter 5: Creole Technologies — How Users Transform AI
Chapter 6: Production Over Innovation in the AI Economy
Chapter 7: The Significance of the Mundane
Chapter 8: War, Crisis, and the Misdirection of Technological History
Chapter 9: The Global Deployment Gap
Chapter 10: Invention Is Easy; Maintenance Is Everything
Epilogue
Back Cover

David Edgerton

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by David Edgerton. It is an attempt by Opus 4.6 to simulate David Edgerton's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The thing nobody talks about is Day 31.

I have told the story of building Napster Station in thirty days so many times it has become a set piece. The compressed timeline. The AI-augmented sprint. The product standing on the CES floor, talking to hundreds of strangers. It is a genuinely extraordinary story, and I believe every word of it.

But I never tell you what happened on February 1st. The day after. The code that needed patching. The edge cases nobody anticipated. The hardware that shipped with a component that overheated in certain venues. The conversational model that handled English beautifully and stumbled on regional accents we hadn't tested for. The slow, unglamorous, entirely uncelebrated work of keeping the thing alive once the bright lights went off.

I never tell that part because it doesn't fit the arc. The arc demands creation. Breakthrough. Phase transition. The thirty-day miracle. Nobody writes a book about the eighteen months of maintenance that follow.

David Edgerton would write that book. He has spent his career arguing that we have been telling the story of technology backwards — starting with the invention and ignoring the use, celebrating the creation and rendering invisible the maintenance, fixating on the frontier and forgetting the vast, dark map where most people actually live and work.

His challenge is not that AI is overhyped. His challenge is more uncomfortable than that. It is that the things we are counting — adoption curves, productivity multipliers, market valuations — are the wrong things to count. Or rather, too few of the things worth counting. That the mundane uses matter more than the dramatic ones. That old technologies persist alongside new ones for decades, not because people are afraid of progress, but because the old technologies still work. That the gap between a demonstration and a deployment is filled with infrastructure, maintenance, and institutional adaptation that operates on timescales the innovation narrative cannot see.

I needed this lens. Badly. Because the story I told in *The Orange Pill* is true but incomplete. The sunrise from the roof is real. The tower holds only because someone tends the joints. And tending the joints is work I described too briefly and celebrated not at all.

Edgerton does not contradict the view from the roof. He completes it — by insisting that you look down at the foundation, out at the darkness between the bright spots, and back at the people whose labor begins the day after the manifesto is published.

The frontier is where the light is brightest. Edgerton asks what happens in the dark.

-- Edo Segal · Opus 4.6

About David Edgerton

1959–present

David Edgerton (1959–present) is a British-Uruguayan historian of technology and professor at King's College London, where he holds the Hans Rausing Chair in the History of Science and Technology. Born in Uruguay and educated at the University of Oxford and Imperial College London, Edgerton has spent over three decades challenging the innovation-centered narratives that dominate public understanding of technology. His landmark work *The Shock of the Old: Technology and Global History Since 1900* (2006) argued that the most significant technologies in any era are not the newest but the most used — the bicycle over the automobile, the sewing machine over the computer, the corrugated iron sheet over the skyscraper. His earlier book *Warfare State: Britain, 1920–1970* (2005) reframed British twentieth-century history around military-industrial production rather than welfare-state mythology. Edgerton's key concepts include the "use-centered" history of technology, the distinction between innovation and production, the persistence of old technologies alongside new ones, and the systematic invisibility of maintenance in technological narratives. His testimony before the UK House of Lords Select Committee on Artificial Intelligence in 2017, in which he called AI hype "ahistorical, crude nonsense," exemplifies his insistence that technological rhetoric be measured against historical evidence. Edgerton remains one of the most rigorous and contrarian voices in the history of technology, widely cited across the social sciences.

Chapter 1: The Innovation Illusion: Why Novelty Is Not the Point

In December 2017, eight years before Edo Segal stood in a room in Trivandrum watching his engineers build at twenty times their previous speed, a historian of technology named David Edgerton sat before the United Kingdom's House of Lords Select Committee on Artificial Intelligence and said something that nobody in the room wanted to hear.

The committee had convened to discuss the fourth industrial revolution — the phrase that had become, by then, the mandatory opening incantation of every policy document, every conference keynote, every breathless magazine profile of a technology founder. Artificial intelligence was going to transform everything. The question before the Lords was how to prepare.

Edgerton told them the entire framing was wrong. The talk of a fourth industrial revolution, he said, was "just reheated rhetoric from years ago." He read aloud from Harold Wilson's 1963 speech about the "white heat of the technological revolution" — a speech delivered more than half a century earlier, about an entirely different set of technologies — and noted that the words could be transposed, unchanged, to the present day. The same promises. The same breathless urgency. The same insistence that this time the transformation would be total, immediate, and unprecedented.

Then he delivered the line that should have reframed the entire conversation: hyping artificial intelligence, he said, was "ahistorical, crude nonsense."

The committee moved on. The AI Act was drafted. The hype continued. Nobody quoted Edgerton in the manifestos that followed.

This is the pattern. Edgerton has spent his career identifying a structural error in how societies think about technology, and the error is so deeply embedded in the culture that pointing it out has roughly the same effect as pointing out the existence of water to a fish. The fish does not deny the water. It simply cannot see it, because it has never experienced anything else.

The error is this: we tell the history of technology as a history of invention. The printing press. The steam engine. The telegraph. The computer. The large language model. Each invention arrives as a rupture, a phase transition, a before-and-after that reorganizes civilization. The narrative is a parade of breakthroughs, each one rendering the previous paradigm obsolete, each one arriving with a speed and completeness that transforms how human beings live and work within a generation or less.

Edgerton's central claim, developed across thirty years of meticulous historical research at King's College London, is that this narrative is not merely incomplete. It is systematically wrong about what matters. The history of technology, as actually lived by the vast majority of human beings, is not a history of invention. It is a history of use — and the things that are most used are almost never the things that are newest.

The horse remained the dominant mode of transport in most of the world long after the automobile was invented. More cargo moved by sailing ship in 1900 than in 1800, decades after the steamship was supposed to have rendered sail obsolete. The bicycle — mechanically simple, invented in the nineteenth century, unglamorous to the point of invisibility — did more to extend basic healthcare across the developing world than many a celebrated pharmaceutical breakthrough, because it allowed health workers and midwives to reach patients in villages that no motorized vehicle could access. The corrugated iron sheet, a technology so mundane that no innovation narrative has ever celebrated it, reshaped more human shelter than any architectural movement in history.

These are not footnotes. They are the actual material fabric of civilization, visible only to those who look at what people use rather than what inventors invent.

The Orange Pill opens with an innovation narrative of extraordinary power. "In the first week of December 2025," Segal writes, "a Google principal engineer sat down with Claude Code and described, in plain English, a problem her team had just spent the past year trying to solve." One hour later, a working prototype. "I am not joking," she wrote on X, "and this isn't funny." Segal calls it a phase transition — "the way water becomes ice: the same substance, suddenly organized according to different rules."

The language is precise, vivid, and deeply familiar to anyone who has studied the rhetoric of technological rupture. It is the same language that greeted the telegraph ("the annihilation of space and time"), the radio ("the technology that will unite the world"), the atomic bomb ("the force from which the sun draws its power"), and the internet ("the most transformative technology since the printing press"). Each of these technologies was described, at the moment of its emergence, as a phase transition. Each was supposed to reorganize civilization on a timeline measured in years, not decades.

Edgerton's research reveals a different pattern. Technologies arrive. They coexist with older technologies, often for generations. Their adoption is uneven — geographically, economically, institutionally, culturally. The gap between what a technology can do in a demonstration and what it actually does in the hands of ordinary users in ordinary conditions is vast, and this gap persists far longer than any innovation narrative acknowledges.

The printing press is Segal's own example, cited in Chapter 17 of The Orange Pill as the second stage in his five-stage pattern of technological transition. But consider the actual timeline. Gutenberg produced his first printed Bible around 1455. A century later, in 1550, manuscript production had not disappeared. It had evolved. Wealthy patrons still commissioned hand-copied books for prestige and aesthetic reasons. Scriptoria continued to operate. Literacy rates in Europe remained below twenty percent. The printing press had not yet reached the majority of the population, and would not reach them for centuries.

The innovation narrative compresses this into a single dramatic moment — Gutenberg's press arrives, the monks are displaced, knowledge is democratized. The use-centered narrative reveals a far messier reality: decades of coexistence, adaptation, resistance, uneven adoption, and the persistence of older practices alongside newer ones, long past the point where the innovation narrative declared them dead.

The automobile tells the same story. Henry Ford's Model T entered production in 1908. Two decades later, in 1928, the United States had the most motorized economy on Earth — and still had twenty-five million horses and mules in use. In India, the bullock cart remained the primary mode of goods transport into the 1970s. The automobile had not replaced the horse. It had been added to a transportation ecology that already included horses, bicycles, railroads, canals, and human feet, and each of these older technologies persisted because it served needs that the automobile could not.

Edgerton's point is not that the automobile was unimportant. It is that the framework of replacement — the idea that new technologies supersede old ones in a clean, linear progression — fundamentally misrepresents how technological change actually works. Technologies accumulate. They layer. The landscape of use at any given moment contains technologies from every era, coexisting in patterns that the innovation narrative cannot see because it is looking only at the leading edge.

Applied to artificial intelligence, the use-centered lens generates an uncomfortable prediction: the AI tools that Segal celebrates will not replace existing practices on the timeline the innovation narrative suggests. They will be added to an existing ecology of tools, practices, institutions, and habits that is far more durable than any frontier demonstration can reveal. The senior software engineer who learned assembly language thirty years ago still thinks in patterns shaped by that training, even when working with Claude Code. The organization that adopted AI tools in 2025 still operates within bureaucratic structures — hiring processes, performance reviews, reporting hierarchies, procurement cycles — designed for an entirely different technological paradigm. The educational institutions that Segal criticizes as "calcified" will not transform overnight, because they are embedded in networks of accreditation, funding, faculty tenure, parental expectation, and regulatory oversight that operate on timescales measured in decades.

This does not mean AI is unimportant. Edgerton is not a denier. He is a historian who insists on looking at what actually happens rather than what the promoters say will happen. And what actually happens, every time, is messier, slower, more uneven, and far less dramatic than the innovation narrative suggests.

The very speed of adoption that Segal celebrates — ChatGPT reaching one hundred million users in two months, Claude Code's run-rate crossing $2.5 billion — is, on Edgerton's account, a measurement of interest, not of transformation. Downloading an application is not the same as integrating it into practice. A hundred-million-user adoption curve tells you how many people tried something. It does not tell you how many people changed how they work. It does not tell you whether the change was durable. It does not tell you what the tool displaced, if anything, or what older practices persisted alongside it.

The distinction between adoption and integration is the gap where the innovation illusion lives. Inside that gap, the actual work of technological transition takes place: the slow, unglamorous, institution-by-institution process of figuring out what the new tool is actually good for, what it actually replaces, what it fails at, and what it requires in terms of infrastructure, training, maintenance, and institutional reorganization before it can deliver on even a fraction of its promise.

Segal knows this, in part. His account of the Trivandrum training — twenty engineers, five days, the transformation of a team's working practices — is precisely the kind of ground-level integration work that Edgerton would recognize as the real substance of technological change. The difference is that Segal tells it as a breakthrough story. Edgerton would tell it as the beginning of a process that will take years, involve setbacks, and produce results far more mixed than the five-day arc suggests.

When Edgerton told the House of Lords that AI hype was "ahistorical, crude nonsense," he was not being dismissive. He was making a precise historical claim. The rhetoric of transformation — the insistence that this technology is different, that this time the change really will be total and immediate, that the previous paradigms really are dead — is identical in structure to rhetoric that has accompanied every major technology of the past century. And in every previous case, the rhetoric overstated the speed of the transition, understated the persistence of older practices, and rendered invisible the actual, messy, decades-long process of integration that determines whether a technology improves human life or merely entertains those who can afford it.

The promoters of technology, Edgerton observed, have made this argument for over a hundred years: "We absolutely need this one, two or three new machines and they will transform our world." He continued: "That is a very familiar story. In fact, there is hardly anything original about it. All that changes is the particular machine. So once the radio would bring the world together, later it was television and now it's the Internet. Wars will be abolished by new explosives, by airplanes, and by atomic bombs. It's a very familiar kind of story that's told. It's extraordinary really that people still get away with giving the impression that this is an original story."

This is the innovation illusion: the belief that novelty is the point, that the newest technology is the most important technology, that the drama of invention is the drama of history. Edgerton's life's work is the demonstration that it is not. The drama of history is the drama of use — of what people actually do with the technologies available to them, how those technologies are maintained and repaired and adapted and repurposed, and how old and new coexist in patterns that are invisible to anyone looking only at the frontier.

The sunrise Segal describes from the top of his tower is real. The view is genuinely extraordinary. But the tower itself is built on foundations that are older than any innovation narrative acknowledges, maintained by people the innovation narrative does not see, and surrounded by a landscape of use that extends far beyond the horizon of any single breakthrough — no matter how dazzling the light appears from the summit.

---

Chapter 2: What People Actually Do with These Tools

On a Tuesday morning in March 2026, a mid-level marketing manager at a consumer goods company in Cincinnati opens her laptop and types a prompt into Claude. She needs a first draft of a quarterly business review presentation. She describes the structure she wants, pastes in some bullet points from an email chain, and asks for slide-by-slide talking points. Three minutes later, she has a draft. It is adequate. Not brilliant, not transformative, not the kind of output that anyone would post on X with the caption "I am not joking, and this isn't funny." It is competent boilerplate that saves her forty-five minutes she would have spent staring at a blank slide deck.

She will not write a book about this experience. She will not describe it as a phase transition. She will adjust two of the talking points, add a chart Claude could not generate, and move on to her next meeting. By Thursday, she will have forgotten she used AI at all.

This is the median AI experience. It is invisible in every manifesto, every keynote, every breathless account of the frontier. And on David Edgerton's account, it may be the most important AI story there is.

Edgerton's use-centered history of technology rests on a distinction between significance and drama. The technologies that receive the most attention are the technologies that make the most dramatic stories: the atomic bomb, the moon landing, the iPhone launch, the chatbot that passes the bar exam. The technologies that affect the most people are almost always less dramatic: the sewing machine, the bicycle, the washing machine, the shipping container. The significance of a technology is measured not by its peak capability but by its deployment — by how many people use it, how often, for what purposes, and under what conditions.

Applied to AI, this distinction generates a reframing so thorough it borders on inversion. The Orange Pill tells the story of AI from the frontier: a product built in thirty days, a book written on a transatlantic flight, engineers crossing role boundaries in southern India, a non-technical founder building a revenue-generating application over a weekend. These are extraordinary experiences. They are also, by definition, outliers. They describe what the most capable users do with the most powerful tools under the most favorable conditions. They do not describe what most people do with AI most of the time.

What most people do with AI most of the time is mundane. They draft emails. They generate boilerplate. They ask for meeting summaries. They autocomplete code they could have written themselves in slightly more time. They translate documents. They search for information they could have found with a slightly more creative Google query. They produce work that is adequate — not extraordinary, not paradigm-shifting, not the kind of work that demonstrates the collapse of the imagination-to-artifact ratio. Just adequate. Faster than before, marginally better in some cases, marginally worse in others, and entirely unremarkable.

Edgerton would recognize this pattern immediately, because it is the pattern of every technology in history. The printing press's most significant output was not Luther's Ninety-Five Theses. It was the thousands of ordinary documents — commercial invoices, legal contracts, administrative records, almanacs, prayer books — that the press made cheaper and faster to produce. These documents were not revolutionary. They were the mundane infrastructure of a society gradually adjusting to a new production method. Their cumulative effect was enormous, but no single document was remarkable, and no contemporary observer would have identified any of them as the point of the printing press.

The personal computer's most significant application was not the spreadsheet that transformed accounting or the word processor that transformed writing. It was email — the mundane, unglamorous, productivity-sapping technology that quietly restructured organizational communication over two decades. Email was never the subject of a breathless innovation narrative. It was never described as a phase transition. It simply became the medium through which most white-collar work was coordinated, and its cumulative effect on organizational life was larger than any single application the personal computer enabled.

The internet's most significant application was not the search engine or the social network. It was the gradual, undramatic digitization of existing processes — supply chain management, inventory tracking, customer records, scheduling, payroll — that produced no headlines and transformed more labor than any headline-generating application.

In each case, the innovation narrative focused on the dramatic application while the mundane application quietly reshaped the world. The dramatic application proved the technology's capability. The mundane application determined its significance.

The same split is visible in the AI moment. Segal describes building Napster Station in thirty days — a genuine feat of compressed engineering that demonstrates what AI-augmented development can achieve at the frontier. The Berkeley researchers described something different: workers drafting routine documents faster, expanding into adjacent tasks, filling pauses with AI interactions, producing more output of roughly similar quality. Twenty-seven percent of Claude-assisted work in their study was work that would not have existed before the tool. Not because the work was needed, but because the tool made it possible, and the internalized imperative to achieve converted possibility into production.

This is not the imagination-to-artifact ratio approaching zero. This is the mundane reality of a technology being absorbed into ordinary practice — a process that is incremental, uneven, often underwhelming, and vastly more consequential than any frontier demonstration.

Consider the actual distribution of AI use across the global economy. The frontier users — the software engineers, the technology founders, the AI researchers — constitute a tiny fraction of the workforce. The vast majority of AI interactions occur in contexts that no innovation narrative would recognize as significant: a human resources coordinator using AI to draft job postings, a real estate agent using it to generate property descriptions, a middle school teacher using it to create differentiated reading assignments, a small business owner using it to compose responses to customer complaints.

None of these uses will ever appear in a book about the future of intelligence. Each of them represents a small, real, measurable improvement in a specific person's workday. And collectively, across millions of such interactions, they will reshape more labor than any thirty-day product sprint.

Edgerton's framework predicts this, because it has always been true. The bicycle is his signature example. No innovation narrative has ever celebrated the bicycle as a transformative technology. It is too old, too simple, too mechanically transparent. It lacks the drama of the automobile, the airplane, the digital computer. Yet in the developing world, the bicycle has done more for human mobility, economic participation, and healthcare access than any technology of the twentieth century. Health workers on bicycles reached villages that no motorized vehicle could access. Farmers on bicycles brought produce to markets that walking could not reach in time. Students on bicycles attended schools that would have been too far to reach on foot.

The bicycle's significance was invisible to the innovation narrative because the innovation narrative looks at capability, not deployment. The bicycle's capability is modest. Its deployment was enormous. And the gap between those two measurements is where the actual story of technological impact lives.

AI's bicycle moment — the moment when its most significant applications are also its most mundane — has almost certainly already arrived, but the innovation narrative cannot see it, because the innovation narrative is looking at Napster Station and not at the marketing manager in Cincinnati.

There is a second dimension to the use-centered analysis that cuts even deeper. Edgerton does not merely distinguish between dramatic and mundane applications. He distinguishes between what a technology's designers intended and what users actually do with it. Technologies are never used exactly as intended. They are adapted, modified, combined with other technologies, and repurposed for needs the designers never imagined. The result is a landscape of use that is far richer and far stranger than any design specification could predict.

The telephone was designed for business communication. It was adopted primarily for social connection — a use that the early telephone companies dismissed as frivolous. The phonograph was designed for office dictation. It became the foundation of the recorded music industry. The internet was designed for military communication and academic data sharing. It became the medium through which human beings organize their social, commercial, and political lives.

In every case, the most significant use was not the intended use. The designers were wrong about what their technology was for. The users figured it out, through the slow, experimental, largely undocumented process of trying things and seeing what worked.

The AI moment is no different. Segal's book itself is evidence. Claude was not designed as a book-writing collaborator. It was designed as a general-purpose language model, and its primary marketed applications were code generation, data analysis, and business communication. The collaboration that produced The Orange Pill — the iterative process of feeding half-formed ideas into a machine and receiving back structures, connections, and formulations that neither the human nor the machine could have produced alone — is a use that no designer planned for.

This is what Edgerton calls the creole technology: the hybrid form that emerges when designed intention collides with actual practice. The creole technology is unpredictable by definition, because it arises from the specific, local, idiosyncratic conditions of use. The engineer in Trivandrum who crosses from backend to frontend development is producing a creole use of Claude — repurposing a code-generation tool as a cross-domain translation device. The non-technical founder who builds a revenue-generating product is producing another creole use — repurposing a developer tool for entrepreneurial bootstrapping.

These creole uses may prove more consequential than any intended application, precisely because they emerge from the collision between technology and the actual, messy, unpredictable conditions of human practice. But they are invisible in advance. They cannot be predicted from the design specification or the marketing material or the innovation narrative. They can only be observed after the fact, by people who are looking at use rather than invention.

The use-centered analysis does not deny AI's power. It reframes the question. Instead of asking what AI can do — the innovation-centered question that animates The Orange Pill — it asks what people actually do with AI. And the answer, so far, is mostly mundane: small gains, incremental adjustments, adequate output produced slightly faster, and a vast landscape of ordinary practice that is being reshaped so gradually that the people inside it can barely feel the change.

That gradual reshaping may turn out to be the most important thing about this technological moment. It will never make a manifesto. It will make history.

---

Chapter 3: The Persistence of the Old

In 1890, the electric light was going to kill the candle. In 1920, the automobile was going to kill the horse. In 1950, television was going to kill the radio. In 1990, email was going to kill the letter. In 2010, the tablet was going to kill the textbook. In 2025, artificial intelligence was going to kill the software engineer.

The candle industry's global revenue in 2024 exceeded three billion dollars. There are more horses in the United States today than there were in 1960. Radio's weekly reach in the United Kingdom exceeds eighty-eight percent of the adult population. The United States Postal Service delivers more than forty billion pieces of physical mail per year. Textbook sales, after a brief contraction, have stabilized. And in 2026, more human beings work in software engineering than at any previous point in history.

The list of technologies that were supposed to die and did not is so long that it constitutes, in effect, a counter-history of the entire modern era. David Edgerton has spent his career writing that counter-history, and its central finding is disarmingly simple: old technologies almost never disappear when new technologies arrive. They coexist. Sometimes for decades. Sometimes for centuries. The innovation narrative declares them dead because the innovation narrative can only see the new. The use-centered history finds them alive, functioning, serving populations and purposes that the new technology cannot reach.

This pattern — the persistence of the old alongside the new — is so consistent across every technological domain and every historical period that Edgerton treats it not as an anomaly but as a law. Technologies persist because they are embedded in systems that are larger than the technology itself: systems of practice, infrastructure, institutional organization, cultural habit, economic incentive, and accumulated expertise. The automobile could not displace the horse until roads were paved, fuel stations were built, mechanics were trained, traffic laws were enacted, and an entire infrastructure of support — from insurance to licensing to parking — was constructed around the new technology. That infrastructure took decades to build, and during those decades, the horse persisted, not as a quaint relic but as a functioning, economically rational technology-in-use.

The parallel to AI is immediate and uncomfortable for anyone operating inside the innovation narrative. The Orange Pill's Chapter 18, "Leading After the Orange Pill," describes three shifts underway in the working world: the dissolution of specialist silos, the rise of integrative thinking as the primary skill, and the emergence of the question as the primary product. These shifts are real at the frontier. They are happening in Segal's organization and in organizations like it — technology companies led by early adopters with the resources and the inclination to restructure around new tools.

But the frontier is not the world. The world is vastly larger, vastly more inertial, and vastly more committed to existing practices than the frontier can see from its position at the leading edge.

Consider the infrastructure requirements for the AI transition that Segal describes. His engineers in Trivandrum used Claude Code with the Max plan — one hundred dollars per person per month. The cost sounds trivial. But it presupposes reliable high-speed internet (available in Trivandrum, not available in much of rural India). It presupposes hardware capable of running modern development environments. It presupposes English-language fluency at a level sufficient for effective prompting. It presupposes institutional willingness to restructure workflows around a new tool. It presupposes a culture of experimentation that tolerates failure and rewards adaptation. And it presupposes a leader — Segal himself — who is willing to invest five days of intensive, in-person training to catalyze the transition.

Remove any one of these preconditions and the transformation does not occur. Not because the tool is inadequate but because the infrastructure of use is missing. And infrastructure, as Edgerton has demonstrated across every domain he has studied, builds slowly. It builds unevenly. It builds in patterns that are shaped by economics, geography, politics, and institutional history — factors that the innovation narrative renders invisible because they are not dramatic.

The organizations that will be last to adopt AI are not the ones that lack awareness of AI. They are the ones that lack the infrastructure — technical, institutional, cultural — that AI adoption requires. Government agencies bound by procurement cycles measured in years. Healthcare systems operating under regulatory frameworks that change on decadal timescales. Educational institutions whose faculty were hired under tenure agreements that predate the internet, let alone artificial intelligence. Small businesses in regions where broadband remains unreliable and expensive.

These organizations are not Luddites. They are not refusing the future out of fear or sentiment. They are embedded in systems — regulatory, financial, institutional, infrastructural — that operate on timescales incompatible with the innovation narrative's compressed timeline. They will adopt AI eventually. They will adopt it partially, unevenly, and alongside the older technologies and practices that continue to serve their needs. The spreadsheet will coexist with the AI assistant. The handwritten exam will coexist with the AI-graded assessment. The phone call will coexist with the chatbot. The specialist silo will coexist with the cross-functional pod.

Edgerton would predict this not because he is pessimistic about AI but because it is what has happened with every technology in recorded history. The pattern has no exceptions.

Segal himself provides evidence for it, though he frames the evidence differently. In Chapter 1, he describes a senior engineer who "spent the first two days oscillating between excitement and terror" before concluding that the tool had stripped away the manual labor masking what he was actually good at. This is a story of adaptation — an experienced professional finding new value in old expertise. But it is also a story of persistence. The engineer's decades of architectural knowledge did not disappear. It persisted as the judgment layer that directed the tool. The old expertise was not replaced. It was repositioned.

This is precisely Edgerton's prediction for most expertise in the AI transition. The framework knitters' knowledge of materials, drape, and quality did not vanish when the new machines arrived. It migrated — into quality inspection, textile design, and the assessment of machine output. The migration was invisible to the innovation narrative, which saw only the displacement. The persistence of the expertise, in modified form, serving new purposes within the new system, was the actual story of the transition.

Segal's Chapter 8, on the Luddites, tells this story from the innovation-centered perspective: the Luddites were right about the facts, wrong about their options, and unable to see what would grow in the space the machines opened. Edgerton's perspective adds a crucial dimension that the innovation narrative omits. What grew in that space did not grow immediately, and it did not grow for everyone. The Luddite generation bore the cost. Their children bore much of the cost. The institutions that eventually redirected the transition toward broader human flourishing — labor laws, the eight-hour day, the weekend, child labor prohibitions — took decades to build, and they were built not by the innovators but by the workers, organizers, and legislators who were living inside the transition's consequences.

The persistence of the old is not a failure of adaptation. It is the ordinary pace at which human institutions, human habits, and human expertise adjust to new conditions. That pace is measured in decades, not months. The innovation narrative treats this pace as a problem to be overcome — a friction to be smoothed away, an obstacle on the path to the future. The use-centered historian treats it as data. Data about how technological change actually works, who bears its costs, and how long the actual transition takes as opposed to how long the promoters say it will take.

There is a deeper point embedded here, one that Edgerton does not make explicitly but that his evidence implies. The persistence of old technologies and practices is not merely inertial. It is often rational. The doctor who continues to rely on physical examination alongside AI-assisted diagnostics is not being conservative for its own sake. She is responding to the actual reliability profile of the new tool, which remains imperfect, context-dependent, and occasionally wrong in ways that her embodied clinical judgment can catch. The teacher who continues to assign handwritten essays is not refusing to engage with AI. She is responding to the actual pedagogical value of the handwriting process — the slowed cognition, the physical engagement, the impossibility of outsourcing the thinking to a machine — which serves educational purposes that AI-assisted writing does not.

These are not sentimental attachments to the past. They are practical judgments about what works, made by people who use technologies every day and whose accumulated experience gives them information that no innovation narrative contains. Edgerton's use-centered history takes these judgments seriously, not as resistance to be overcome but as expertise to be consulted.

The shock of the old, in the age of AI, will be the discovery that the old persists — not because people are afraid of the future but because the old continues to work, continues to serve needs that the new has not yet learned to meet, and continues to constitute the actual material and institutional fabric of most people's working lives. The innovation narrative will declare these practices dead long before they disappear. The use-centered historian will find them still functioning, still serving, still mattering, decades after their obituaries were written. That is what has happened every time before. There is no evidence that this time will be different, and considerable evidence that the promoters of every previous technology said exactly the same thing.

---

Chapter 4: Maintenance and Repair — The Invisible Majority

Here is a number that does not appear in The Orange Pill: seventy percent.

That is a rough but defensible estimate of the proportion of all software engineering labor that is devoted not to building new things but to maintaining existing ones. Debugging. Patching. Updating dependencies. Migrating to new infrastructure. Fixing what breaks when an upstream library changes its API. Managing technical debt — the accumulated cost of every shortcut, every expedient decision, every "we'll clean this up later" that accrued over years of development under deadline pressure.

Seventy percent. The vast majority of all programming work is not creation. It is upkeep.

This number should be the starting point of any serious analysis of AI's impact on software development. It is not. Innovation narratives begin with creation, because creation is dramatic. Segal's imagination-to-artifact ratio — the distance between a human idea and its realization — measures the cost of making something new. It is a compelling metric. It captures something real about the expansion of creative capability. And it describes, at most, thirty percent of the work.

David Edgerton has argued, across every technological domain he has studied, that the most important work in any technological system is not invention but maintenance. Keeping existing systems running. Repairing what breaks. Updating what becomes obsolete. Adapting what was built for one set of conditions to function under another. This work is unglamorous. It does not make headlines. It does not appear in manifestos about the future. It is also the work without which every system in civilization would collapse within months.

The maintenance economy is invisible for the same reason that plumbing is invisible: it is noticed only when it fails. The electrical grid, the water system, the road network, the telecommunications infrastructure, the server farms that keep the internet running, the code that keeps the banking system operational, the people who keep all of these things in working order — this vast apparatus of upkeep is the actual foundation of modern life, and it receives approximately zero attention in any conversation about technological transformation.

Andrew Russell and Lee Vinsel, building directly on Edgerton's framework, published a manifesto in 2016 titled "Hail the Maintainers." Their argument was simple: innovation culture celebrates the people who create new things and renders invisible the people who keep existing things running. This is not merely an oversight. It is a structural distortion that produces bad policy, bad investment, and bad understanding of how technology actually functions in society.

Consider the implications for AI-generated code. Segal celebrates the collapse of the imagination-to-artifact ratio: a person with an idea can now produce a working prototype in hours. This is true. It is also the beginning of the story, not the end.

That prototype must now be maintained. The code Claude generated must be debugged when it encounters edge cases the initial prompt did not anticipate. It must be updated when the frameworks it depends on release new versions. It must be patched when security vulnerabilities are discovered — and AI-generated code has been shown to introduce security vulnerabilities at rates that, in some studies, exceed human-generated code, in part because the model optimizes for functional correctness rather than adversarial robustness. It must be refactored when requirements change, as requirements always do. It must be documented — not for the machine but for the humans who will maintain it after the original creator has moved on to the next project.

And here is the critical asymmetry: AI is extraordinary at generation and mediocre at maintenance. Generating code from a natural-language description is the task that large language models were optimized for. Maintaining code — understanding the accumulated history of a system's decisions, recognizing why a particular implementation was chosen over the obvious alternative, tracking the interactions between components that were built at different times under different assumptions by different people — requires a form of contextual understanding that current AI systems handle poorly.

Maintenance requires what historians of technology have called "embedded knowledge" — the understanding that lives not in documentation but in the specific, local, accumulated experience of the people who have been tending the system. The maintainer knows that this particular server crashes under load on the third Tuesday of the month because of a cron job that nobody documented. The maintainer knows that this particular function was written in a non-obvious way because the obvious way triggered a race condition that took three engineers six weeks to diagnose. The maintainer knows which parts of the codebase are fragile, which are robust, which have been tested under stress and which have not.

This knowledge is precisely the kind of knowledge that Segal describes his senior engineer losing in Chapter 10 of The Orange Pill — the "architectural intuition" that erodes when Claude handles the plumbing. The engineer notices, months later, that she is "making architectural decisions with less confidence" and cannot explain why. Edgerton's framework explains why: she has lost the maintenance knowledge that only accumulates through the friction of tending a system over time.

The irony is severe. AI accelerates creation and may simultaneously degrade the capacity for maintenance. The imagination-to-artifact ratio collapses, and artifacts proliferate. Each artifact requires maintenance. The maintenance burden grows faster than the maintenance capacity, because the skills that maintenance requires — patience, contextual understanding, the accumulated knowledge of how systems behave under real-world conditions over real time — are precisely the skills that the removal of friction atrophies.

This is not a hypothetical concern. It is already visible in the software industry. Engineers who have used AI coding assistants for six months report declining ability to debug manually. The tolerance for the slow, painstaking, often frustrating work of diagnosis erodes as the tools provide faster alternatives. But the faster alternatives work only when the problem is common enough to be in the training data. When the problem is novel — when it arises from the specific, local, idiosyncratic conditions of this particular system's history — the maintainer's embodied knowledge is the only resource that can diagnose it.

Edgerton studied this dynamic in the context of military technology, where the maintenance burden has historically dwarfed the innovation budget. During the Second World War, the ratio of maintenance personnel to combat personnel in the British military was approximately ten to one. For every soldier firing a weapon, ten soldiers were keeping vehicles running, repairing communications equipment, maintaining supply chains, and performing the thousand mundane tasks that kept the war machine operational. The innovation narrative tells the story of the Spitfire, the radar, the code-breaking machines. The use-centered history tells the story of the mechanics who kept the Spitfires flying — work that was essential, skilled, and almost entirely invisible.

The AI economy will exhibit the same ratio, and possibly a worse one. Every AI-generated system needs people who understand what the system does, how it fails, and how to fix it when it fails in ways the original prompt did not anticipate. Every AI-dependent workflow needs someone who knows what to do when the model changes — when the API updates, when the pricing shifts, when the output quality degrades, when the model is deprecated entirely and the system must be migrated to a new provider on a timeline that no one planned for.

These maintainers will be the invisible majority of the AI workforce. They will not appear in the innovation narratives. They will not be celebrated at conferences. They will not post on X about the extraordinary things they built over the weekend. They will be the people who show up on Monday morning and keep the things that were built over the weekend from collapsing under the weight of real-world use.

Segal acknowledges part of this in Chapter 19, his analysis of the Software Death Cross. The value of a software company, he argues, is not in the code but in the ecosystem — the data layer, the integrations, the institutional trust, the accumulated workflow patterns. This is a maintenance argument, though Segal does not frame it as one. The ecosystem he describes is the product of years of maintenance work: patching, updating, adapting, supporting, debugging, and the slow accumulation of institutional knowledge about how the system functions in the specific conditions of each customer's environment.

Edgerton's framework takes this insight further. If the value is in the ecosystem, then the most important workers are the people who maintain the ecosystem — not the people who built it originally and not the people who will build the next thing. The maintainers. The people whose work is unglamorous, essential, and structurally invisible to every framework that measures value by innovation.

The Orange Pill describes a future in which the question becomes the product and the capacity to decide what should be built becomes the primary skill. Edgerton's use-centered analysis suggests a different future, or rather, a complementary one: a future in which the capacity to maintain what has been built is at least as valuable as the capacity to build it in the first place, and in which the maintenance work — the work of keeping dams from eroding, systems from failing, and accumulated knowledge from being lost — is recognized as the essential, skilled, deeply human labor that it has always been.

The imagination-to-artifact ratio is a measure of creation. The artifact-to-maintained-system ratio is a measure of something less poetic and more consequential: the ongoing work that determines whether a created thing persists long enough to matter. AI has collapsed the first ratio. It has done nothing to collapse the second. If anything, by accelerating the proliferation of artifacts, it has made the second ratio worse.

Invention is easy. It has always been easy relative to the alternative. The alternative is maintenance: the patient, unglamorous, largely invisible work of keeping the world running. That work does not appear in any manifesto. It is the foundation on which every manifesto rests.

---

Chapter 5: Creole Technologies — How Users Transform AI

In 1877, Thomas Edison filed a patent for the phonograph. He was certain about what it was for. He published a list of ten anticipated uses, and the list tells you everything about the gap between design intention and actual practice. The phonograph, Edison believed, would be used for dictation in business offices, for recording the last words of dying people, for teaching elocution, for reproducing music in the home (this one he got right, eventually, almost by accident), for creating family records of voices, for producing phonographic books for the blind, for teaching spelling, for recording telephone conversations, for preserving the explanations of teachers, and for connecting the phonograph to the telephone to create a permanent record of communications.

Dictation. Deathbed recordings. Elocution lessons. This is what the inventor of the recorded music industry thought his invention was for.

What users actually did with the phonograph was play music. They played music so enthusiastically, so insistently, so profitably, that within two decades the phonograph had created an entirely new industry — the recording industry — that Edison himself had not anticipated, did not particularly want, and spent years trying to redirect toward the business applications he considered more serious. The users won. They always do.

David Edgerton calls this phenomenon the creole technology. The term is borrowed from linguistics, where a creole is a new language that emerges when speakers of different languages are forced into contact and must communicate. A creole is not a degradation of either parent language. It is a new thing — a hybrid with its own grammar, its own vocabulary, its own expressive possibilities that neither parent language contained. Creole technologies work the same way. When a designed tool meets the actual conditions of use, something emerges that the designer could not have predicted: a hybrid of intention and practice, a technology-in-use that is genuinely different from the technology-as-designed.

The history of technology is, in significant part, a history of creole adaptations that the designers never saw coming. The automobile was designed for transportation. Users turned it into a mobile living room, a status symbol, a site for sexual encounters, a weapon, a mobile office, and a medium for self-expression through customization. The personal computer was designed for business productivity. Users turned it into a gaming platform, a communication device, a creative studio, a surveillance tool, and eventually the portal through which most human beings would conduct their economic, social, and political lives. The internet was designed for military and academic communication, engineered to route information around failed nodes. Users turned it into the largest marketplace, the largest library, the largest gossip network, and the largest pornography distribution system in human history.

In every case, the creole application — the use that emerged from practice rather than design — proved more significant than the intended application. And in every case, the designers were the last to understand what their technology was actually for.

Artificial intelligence in 2025 and 2026 is in the early stages of creolization, and the process is already producing hybrids that no designer anticipated. The Orange Pill itself is evidence. Segal's collaboration with Claude — the iterative process of feeding half-formed ideas into a language model and receiving back structures, connections, and formulations that redirected the argument — is not a use case that Anthropic's engineers designed for. Claude was built as a general-purpose language model. Its marketed applications are code generation, business communication, data analysis, and research assistance. Nowhere in the product documentation does it say "book-writing collaborator who will challenge the author's half-formed ideas and occasionally produce philosophical connections that restructure the argument."

Yet that is what happened. The creole emerged from the collision between Segal's specific needs — a builder with ideas that outpaced his capacity to articulate them — and the tool's general capabilities. The result was a form of intellectual partnership that neither party designed for and that both parties contributed to in ways that cannot be cleanly separated. Segal describes this in Chapter 7 with the honesty that creole technologies demand: "Neither of us owns that insight. The collaboration does."

This is a creole statement. It describes a product of use, not of design.

The creolization of AI is happening at every scale and in every domain, mostly invisible to the innovation narrative because the innovation narrative looks at intended applications. Consider the phenomena that have emerged in the first year of widespread AI deployment. Developers using Claude not merely to generate code but to cross domain boundaries — backend engineers building frontends, designers writing features, non-technical founders prototyping products. These are not the intended applications. They are creole adaptations. The tool was designed to assist within domains. Users are deploying it to breach the walls between them.

Teachers using AI not as an answer machine for students but as a question-generation tool — assigning students the task of interrogating the AI's outputs, identifying its errors, evaluating the quality of its reasoning. The tool was designed to provide answers. Users are repurposing it as a device for teaching the skills that answers cannot teach: skepticism, evaluation, the capacity to identify what sounds right but is not.

Therapists using large language models as a journaling intermediary — patients writing to the AI between sessions, then bringing the conversation logs to therapy as a starting point for discussion. The tool was designed for information retrieval. Users have turned it into a mirror for self-reflection, a technology for externalizing the internal monologue in a form that can be examined collaboratively.

Musicians using AI not to compose music but to generate material that they then react against — deliberately prompting bad or unexpected output as a creative stimulus, the way a jazz musician might react to a wrong note by turning it into the foundation of an improvisation. The tool was designed to produce good output. Users are exploiting its failures as creative fuel.

None of these applications were designed. All of them emerged from use. And Edgerton's framework predicts that these creole applications — the hybrids that emerge from the collision between designed intention and actual practice — will prove more consequential than the intended applications, for the same reason that recorded music proved more consequential than business dictation. Users know what they need. Designers know what they built. The gap between those two forms of knowledge is where creole technologies live.

The innovation narrative cannot see creole technologies because it looks forward from the moment of invention. It asks: what was this tool designed to do? How well does it do it? How will it improve? These are reasonable questions, but they are designer-centered questions. The use-centered question is different: what are people actually doing with this tool that nobody expected?

Answering that question requires a different kind of attention. It requires watching users, not designers. Visiting classrooms, not conferences. Observing the marketing manager in Cincinnati and the therapist in São Paulo and the musician in Lagos, not the keynote speaker in San Francisco. The creole technologies emerge from the periphery, from the margins, from the places where users lack the resources to use the tool as intended and must therefore improvise.

This has implications for prediction that the innovation narrative systematically ignores. The most important AI applications of 2030 are almost certainly not the ones that any current roadmap describes. They are the ones that will emerge from use — from the specific, local, idiosyncratic, unpredictable conditions in which millions of people experiment with these tools and discover uses that no designer imagined.

Edgerton's most provocative observation about creole technologies is that they tend to emerge fastest in conditions of scarcity rather than abundance. The bicycle became a transformative healthcare technology not in wealthy countries with ambulances and paved roads but in poor countries where the bicycle was the only vehicle available. The mobile phone became a banking platform not in countries with established financial infrastructure but in Kenya, where the absence of bank branches created the need that M-Pesa filled. Scarcity forces improvisation. Improvisation produces creoles.

Applied to AI, this suggests that the most inventive creole applications may emerge not from Silicon Valley or London or Bangalore but from the places where AI tools are the only sophisticated tools available — where the absence of institutional infrastructure, the shortage of trained specialists, the lack of alternatives forces users to push the tool into applications that no well-resourced designer would have thought of, because the well-resourced designer had other options.

The developer in Lagos that Segal describes in Chapter 14 is not merely a beneficiary of democratization. On Edgerton's account, she is a potential source of creole innovation — a user whose specific conditions of constraint may produce uses of AI that the San Francisco engineers, with their abundance of alternatives, would never discover. The most important thing about her is not that she can now do what a Google engineer does. It is that she will do things that a Google engineer would never think to do, because her conditions demand improvisations that abundance never requires.

This is the deepest implication of the creole technology framework for the AI moment. The future of AI will be written not by the designers but by the users. Not by the people who build the models but by the people who use them in conditions the builders never imagined. Not at the frontier but at the margins, where scarcity breeds invention and the collision between designed tool and actual need produces something genuinely, unpredictably new.

The innovation narrative looks at the leading edge and calls it the future. The use-centered historian looks at the periphery and finds the future already happening there — unannounced, undocumented, and invisible to anyone who is looking only where the light is brightest.

---

Chapter 6: Production Over Innovation in the AI Economy

There is a distinction in economics that almost everyone ignores because it is not exciting. The distinction is between innovation and production. Innovation is the creation of new things. Production is the ongoing manufacture of existing things. Innovation gets the magazine covers, the TED talks, the venture capital. Production gets the economy.

David Edgerton's research demonstrates, across every major technological era of the past century, that production has contributed more to economic output than innovation. This is not a close contest. The ratio is not fifty-one to forty-nine. The vast majority of economic activity in any given year consists not of creating new products and services but of producing existing ones — manufacturing the same cars, processing the same payroll, shipping the same goods, providing the same medical care, teaching the same curricula, running the same systems that ran last year and the year before that.

Innovation changes what is produced. Production determines how much of it reaches people. The printing press was an innovation. The ongoing printing of Bibles, almanacs, legal contracts, and commercial invoices for the next five hundred years was production. The innovation gets the chapter in the history book. The production gets the economy.

This distinction applies to the AI moment with a precision that the innovation narrative does not want to confront. Segal's Orange Pill is an innovation narrative. It tells the story of new capabilities, new products, new ways of working. It celebrates the frontier: the product built in thirty days, the engineer who crosses domain boundaries, the founder who builds a company without a technical co-founder. These are genuine innovations. They demonstrate what is now possible that was not possible before.

They are also, in economic terms, a rounding error.

The significant economic story of AI will not be told at the frontier. It will be told in the vast interior of the economy where millions of ordinary workers use AI tools to produce ordinary work slightly more efficiently. The marketing manager who saves forty-five minutes on a slide deck. The customer service representative who handles twelve percent more inquiries per shift. The accountant whose month-end close takes four days instead of five. The logistics coordinator whose route optimization improves by three percent.

None of these gains will appear in a book about the future of intelligence. Each of them is small. Collectively, across millions of workers and millions of workdays, they constitute the actual economic impact of AI — the production story that dwarfs the innovation story by orders of magnitude.

Edgerton identified this pattern in the history of electrification, and the parallel is nearly exact. The innovation narrative of electrification focuses on Edison, Tesla, and the light bulb. The production story of electrification is the thirty-year process by which electric motors replaced steam engines in American factories, producing cumulative productivity gains that accounted for more economic growth than any single invention of the electrical age. The motors were not dramatic. They were installed one at a time, in one factory at a time, producing incremental improvements in one production process at a time. The aggregate effect was the single largest source of productivity growth in the first half of the twentieth century.

The AI production story will follow the same arc. Not dramatic. Not sudden. Not a phase transition that reorganizes everything in a season. A slow, steady accumulation of small gains across millions of workplaces, measured not in billions of dollars of run-rate revenue for a single tool but in the aggregate output of an economy whose ordinary processes have been incrementally improved.

Segal's Death Cross analysis in Chapter 19 of The Orange Pill operates within the innovation framework. The chart shows two curves crossing — SaaS valuations falling as AI market value rises. This is an innovation-sector story. It describes what is happening in the market for new technology products. It does not describe what is happening in the economy that uses those products.

From the production perspective, the Death Cross looks different. The SaaS companies whose valuations are falling are, in many cases, production companies. They produce the ongoing, unglamorous services that enterprises depend on: payroll processing, customer relationship management, human resources administration, supply chain coordination. Their value was never primarily in their code — Segal makes this point himself — but in the production infrastructure they maintain: the integrations, the data layers, the compliance frameworks, the institutional knowledge of how their systems function in the specific conditions of each customer's deployment.

These production functions do not disappear when AI arrives. They persist, because production persists. The hospital still needs its electronic medical records system on Tuesday morning, regardless of what happened at the AI frontier over the weekend. The logistics company still needs its route optimization software. The bank still needs its transaction processing infrastructure. These are production systems, and their value is measured not by their novelty but by their reliability — by the simple, boring, absolutely critical fact that they work every day, under load, without failure.

The innovation narrative treats reliability as a solved problem. It is not. Reliability is the product of maintenance, and maintenance is the production-side labor that keeps systems running long after the innovation-side labor of building them has ended. A SaaS company whose system has been reliable for ten years has earned something that no weekend prototype can replicate: the institutional trust of thousands of organizations that have built their operations around the assumption that the system will work tomorrow.

Edgerton's framework predicts that the AI economy will eventually bifurcate into an innovation sector that gets the attention and a production sector that gets the revenue. The innovation sector will build new tools, demonstrate new capabilities, and generate the breathless coverage that drives public understanding of what AI is for. The production sector will use those tools — and the older tools alongside them — to produce the ordinary goods and services that the economy actually runs on. The production sector will be larger, more economically significant, and almost entirely invisible in any narrative about the future of technology.

The economic implications of this bifurcation are substantial. Investment flows disproportionately to the innovation sector, because innovation produces dramatic returns (and dramatic failures) on timescales that match the venture capital cycle. The production sector, which generates returns slowly and reliably, attracts less investment despite contributing more to economic output. This misallocation is not new — Edgerton documented it across the twentieth century — but AI may amplify it, because AI's innovation narrative is uniquely compelling and its production story is uniquely boring.

The policy implications are equally substantial. Governments that orient their AI strategies around innovation — building research labs, attracting AI startups, winning the "AI race" — may miss the larger opportunity, which is improving the productivity of the vast existing economy through incremental AI adoption. This is the equivalent of a government in 1920 investing heavily in electric light research while ignoring the slow, transformative replacement of steam engines by electric motors in its factories. The dramatic investment gets the press release. The mundane adoption gets the economic growth.

Segal's celebration of builders and their extraordinary creations represents the innovation narrative at its most compelling. The counter-narrative is not that builders do not matter. It is that the builders who matter most, in economic terms, may not be the ones building new things. They may be the ones who figure out how to integrate AI into the production of existing things — who make the slide deck faster, the logistics smoother, the medical record more accurate, the payroll more reliable. That work will never generate a manifesto. It will generate an economy.

---

Chapter 7: The Significance of the Mundane

The condom has saved more lives than penicillin. The figure is not precisely calculable, but the epidemiological consensus is broad: a simple latex barrier, manufactured for pennies, distributed without prescription, requiring no electricity, no internet connection, no institutional infrastructure, and no training more complex than a five-minute demonstration, has prevented more deaths from sexually transmitted disease and more suffering from unintended pregnancy than any pharmaceutical innovation of the modern era.

No innovation narrative has ever celebrated the condom. It is too old, too simple, too cheap, too associated with bodily functions that polite discourse prefers to ignore. It has no origin myth. There was no moment when a brilliant inventor unveiled it before an astonished audience and the world was transformed. It was improved incrementally over centuries — animal membrane to rubber to latex to polyurethane — by anonymous manufacturers responding to market demand. It is the paradigmatic mundane technology: world-changing in its cumulative effect, invisible in every account of what matters about technology.

David Edgerton collected mundane technologies the way other historians collect breakthroughs. The bicycle. The sewing machine. The corrugated iron sheet. The rickshaw. The shipping container. The condom. These are the technologies that, in his account, actually constitute the material fabric of civilization — the things that most people use most of the time to address the problems that most urgently affect their lives. They share certain characteristics: they are cheap, mechanically simple, infrastructure-independent, maintainable by their users, and so deeply embedded in daily practice that they have become invisible.

Invisibility is the defining feature of the mundane technology, and it is both the source of its power and the reason it is ignored. A technology becomes mundane when it disappears into the background of daily life — when using it requires no conscious attention, no special training, no deliberate decision. The electric light is mundane. The flush toilet is mundane. The zipper is mundane. Each of these technologies was, at the moment of its introduction, a dramatic innovation. Each became mundane through the process of universal adoption and habitual use. And each, in its mundane state, does more for more people than the dramatic technologies that dominate innovation narratives.

The AI moment is producing its own mundane technologies, though the innovation narrative cannot see them yet because they have not yet become invisible. Consider the autocomplete suggestion. Not the dramatic code generation that Segal celebrates — the production of entire working systems from natural-language descriptions — but the quiet, ubiquitous, barely noticed feature that predicts the next word in an email and fills it in when the user presses Tab.

Autocomplete saves roughly two to four seconds per completion. A typical knowledge worker encounters perhaps fifty to a hundred completions per day. The aggregate time saved is measured in minutes — perhaps five per workday. This is not dramatic. It is not transformative. It will never appear in a manifesto about the future of human intelligence.

Multiply it by a hundred million knowledge workers and the mundane becomes momentous. Five minutes per worker per day, across a hundred million workers, is approximately eight million hours of labor recovered per day. Over a working year, that is roughly two billion hours — the equivalent of a million full-time workers, conjured into existence not by any dramatic breakthrough but by a feature so trivial that its users have already stopped noticing it.
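
The multiplication is simple enough to write down. Here is a minimal sketch of the chapter's arithmetic, using its own illustrative figures plus two conventional assumptions that are not in the text (a 250-workday year and a 2,000-hour full-time working year):

```python
# Back-of-envelope aggregation of the autocomplete savings described above.
# The per-completion and per-day figures come from the chapter; the workday
# count and full-time-year hours are conventional assumptions, not from the text.

SECONDS_SAVED_PER_COMPLETION = 3   # midpoint of the 2-4 second range
COMPLETIONS_PER_DAY = 100          # upper end of the 50-100 range
WORKERS = 100_000_000              # "a hundred million knowledge workers"
WORKDAYS_PER_YEAR = 250            # assumption: a conventional working year
HOURS_PER_FTE_YEAR = 2_000         # assumption: one full-time worker-year

minutes_per_worker_per_day = SECONDS_SAVED_PER_COMPLETION * COMPLETIONS_PER_DAY / 60
hours_recovered_per_day = minutes_per_worker_per_day / 60 * WORKERS
hours_recovered_per_year = hours_recovered_per_day * WORKDAYS_PER_YEAR
full_time_equivalents = hours_recovered_per_year / HOURS_PER_FTE_YEAR

print(f"{minutes_per_worker_per_day:.0f} minutes saved per worker per day")
print(f"{hours_recovered_per_day / 1e6:.1f} million hours recovered per day")
print(f"{hours_recovered_per_year / 1e9:.1f} billion hours recovered per year")
print(f"roughly {full_time_equivalents / 1e6:.1f} million full-time workers")
```

The inputs are deliberately rough; the point is the shape of the multiplication. Any plausible values produce an aggregate measured in billions of hours, which is the chapter's argument in numerical form.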

This is Edgerton's argument in its purest form: the significance of the mundane is invisible precisely because it is mundane, and it dwarfs the significance of the dramatic precisely because it is universal. The frontier demonstration reaches thousands. The mundane application reaches millions. The frontier shifts what is possible. The mundane shifts what is actual. And actuality, not possibility, is where economic and social significance lives.

The Orange Pill is organized around dramatic demonstrations: the product built in thirty days, the book written on a transatlantic flight, engineers crossing domain boundaries, a founder building a revenue-generating application over a weekend. Edgerton's use-centered framework does not deny the reality of these achievements. It denies their representativeness. They are outliers — the peak capability of the most skilled users under the most favorable conditions. The median AI experience is not a Napster Station built in thirty days. It is a slide deck drafted in three minutes. And the slide deck, in its banality, may matter more.

The mundane has another characteristic that the dramatic lacks: durability. Dramatic applications are fragile. They depend on frontier capabilities, on specific tools, on users with exceptional skills. When the model changes, the frontier application must be rebuilt. When the tool is deprecated, the dramatic achievement may become irreproducible. Mundane applications are robust. Autocomplete works regardless of model version. The scheduling assistant works regardless of which large language model powers it. The translation tool works even when the underlying architecture changes entirely. Mundane technologies survive transitions that kill dramatic ones, because mundane technologies are loosely coupled to specific implementations and tightly coupled to persistent human needs.

This durability maps onto a deeper feature of technological history that Edgerton documented meticulously: the technologies that matter most are the technologies that persist longest, and the technologies that persist longest are the ones that serve the most basic and most universal human needs. Food production. Shelter. Transportation. Communication. Healthcare. Education. The dramatic technologies — the atomic bomb, the space rocket, the supercomputer — come and go with the political and economic conditions that funded them. The mundane technologies — the bicycle, the corrugated sheet, the condom — persist because the needs they serve persist.

AI's mundane applications will persist for the same reason. The need to draft correspondence more quickly will not disappear. The need to translate documents will not disappear. The need to schedule meetings, summarize reports, generate routine documentation, and perform the thousand small administrative tasks that consume knowledge workers' time will not disappear. These needs are as durable as the need for shelter and transportation, and the technologies that serve them will prove more durable than any frontier demonstration.

There is a moral dimension to the significance of the mundane that Edgerton makes explicit and that the innovation narrative systematically evades. Dramatic technologies serve dramatic needs — the needs of the powerful, the wealthy, the technologically sophisticated. Mundane technologies serve ordinary needs — the needs of the majority, the needs that are so basic and so universal that serving them has no prestige. The bicycle serves the midwife in rural Uganda. The corrugated iron sheet serves the family in the slum. The condom serves the teenager with no access to healthcare.

When a civilization organizes its attention, its investment, and its narrative around dramatic technologies, it is making a choice about whose needs matter. The needs of the few who can access the frontier are treated as significant. The needs of the many who live in the mundane are treated as background noise. Edgerton's work is, in part, a moral argument against this allocation of attention. The mundane matters more. It serves more people. It saves more lives. It constitutes more of the actual material fabric of human existence. And it deserves more of the intellectual and institutional attention that is currently lavished on the frontier.

The AI discourse, in 2026, is almost entirely organized around the frontier. The conferences celebrate dramatic applications. The investment flows to dramatic capabilities. The books — including the one that this book responds to — tell dramatic stories. The mundane applications, which serve more people and reshape more labor and will prove more durable than any frontier demonstration, are invisible.

Edgerton would not be surprised. This is what happened with electricity, with the automobile, with the personal computer, and with the internet. The mundane was always invisible at the beginning. It always became visible later, when the dramatic had faded and the slow, cumulative, world-reshaping work of the mundane had become too large to ignore. The same will happen with AI. The question is how much time, how much investment, and how much intellectual attention will be wasted on the dramatic before the mundane receives the recognition it deserves.

The condom saved more lives than penicillin. The autocomplete suggestion may save more labor than Claude Code. The numbers will not be dramatic. They will be real.

---

Chapter 8: War, Crisis, and the Misdirection of Technological History

The atomic bomb is the most innovation-centric technology in human history. Every element of its story is organized around the dramatic: the secret laboratory at Los Alamos, the race against Nazi Germany, the blinding flash over Hiroshima, the mushroom cloud that became the defining image of the twentieth century. The bomb arrived as rupture incarnate — a technology so powerful that it reorganized international relations, reshaped military strategy, and haunted the collective imagination for generations.

It was also, measured by its actual deployment, one of the least-used technologies of the modern era. Two bombs were dropped in anger. Two. In the eighty years since, the most consequential nuclear technology has been not the bomb but the power plant — a production technology, unglamorous, plagued by maintenance challenges, serving the mundane need for electricity. Nuclear power generates roughly ten percent of the world's electricity. The bomb generates fear, prestige, and an inexhaustible supply of innovation narratives. On any use-centered accounting, the power plant matters more.

David Edgerton argued throughout his career that war and crisis distort technological history by concentrating attention on extreme applications while rendering invisible the ordinary uses that constitute the majority of a technology's impact. The distortion is systematic: war demands dramatic technologies, dramatic technologies make dramatic stories, dramatic stories dominate historical memory, and historical memory shapes public understanding of what technology is and what it is for. The result is a civilizational perception of technology that is organized around weapons, emergencies, and existential threats rather than around the mundane applications that affect more people.

The AI discourse of 2025 and 2026 exhibits this distortion with remarkable fidelity. The dominant frames for discussing artificial intelligence are crisis frames. Existential risk: AI might destroy humanity. Job displacement: AI will eliminate millions of jobs. Civilizational transformation: AI will reorganize every institution and every industry within a generation. Arms race: the nation that leads in AI will dominate the twenty-first century. Each of these frames concentrates attention on the extreme scenario — the worst case, the best case, the case that produces the most dramatic narrative — and renders invisible the ordinary deployment that will affect more people than any extreme scenario.

Segal's Orange Pill participates in the crisis frame, though more thoughtfully than most. His five-stage pattern of technological transition — threshold, exhilaration, resistance, adaptation, expansion — is a crisis-centered framework. It describes technology as arriving in a moment of rupture, provoking a sequence of emotional and institutional responses, and ultimately resolving into a new equilibrium. The framework is not wrong. It captures something real about how societies experience the arrival of powerful new technologies. But it describes the experience from inside the crisis — from the perspective of the people who feel the rupture most acutely, who are closest to the frontier, who have the most at stake in the outcome.

Edgerton's use-centered framework describes the same transition from outside the crisis, and the view is strikingly different. The view from outside shows not a five-stage dramatic arc but a far more gradual process: slow adoption, uneven deployment, incremental adjustment, the persistence of older practices alongside newer ones, and the accumulation of small changes over timescales measured in decades rather than seasons. The crisis frame and the use-centered frame describe the same underlying reality. They produce narratives so different that they seem to describe different events.

Consider how the crisis frame has operated in the AI discourse specifically. The "SaaSpocalypse" — Segal's term for the trillion-dollar market value evaporation of early 2026 — is a crisis narrative. The Death Cross is a crisis image. The language is borrowed from financial catastrophe: apocalypse, death, collapse. The reality underneath the language is a market repricing — significant, disruptive, painful for the people inside it, but not an apocalypse. SaaS companies did not cease to exist. They were revalued. The difference between a repricing and an apocalypse is the difference between the crisis frame and the use-centered frame.

The jobs discourse operates the same way. "AI will eliminate millions of jobs" is a crisis frame. The use-centered analysis asks: what actually happened to employment in the first year of widespread AI deployment? The answer, so far, is more complicated than any crisis frame can accommodate. Some jobs have been eliminated. Some have been restructured. Some new jobs have been created. Many jobs have been intensified — the Berkeley finding that AI does not reduce work but multiplies it. The net effect, as of mid-2026, is ambiguous. The ambiguity is not dramatic enough for the crisis frame, which requires clarity: either catastrophe or salvation. The ambiguity is, however, the actual state of the evidence.

The existential risk discourse is the purest expression of the crisis frame, and Edgerton's analysis of it would be characteristically bracing. The argument that AI might destroy humanity concentrates an extraordinary amount of intellectual and institutional attention on a scenario that is, by the admission of its own proponents, speculative. Meanwhile, the actual, documented, currently occurring effects of AI — the intensification of work, the erosion of attention, the amplification of inequality, the maintenance burden of proliferating AI-generated systems — receive a fraction of the attention, because they are not dramatic enough to compete with existential risk in the marketplace of ideas.

Edgerton documented this pattern across the history of military technology. The atomic bomb received more intellectual attention than any technology in history. The Kalashnikov rifle killed more people. The machete, in the Rwandan genocide, was the principal weapon in the killing of nearly a million people in a hundred days — a technology so mundane that it barely registers as technology at all. The crisis frame directed attention toward the dramatic weapon and away from the mundane weapon, and the allocation of attention had consequences: investment in nuclear nonproliferation dwarfed investment in small-arms control, despite the fact that small arms caused more deaths by orders of magnitude.

The AI version of this misallocation is already visible. Investment in AI safety research — focused primarily on existential risk scenarios — has grown rapidly. Investment in understanding the mundane, everyday, already-occurring effects of AI on ordinary work, ordinary education, ordinary attention, and ordinary institutional practice has grown far more slowly. The crisis frame directs resources toward the dramatic scenario and away from the actual one. This is not a failure of intention. It is a structural feature of how crisis narratives allocate attention.

Segal's own narrative exhibits the tension. His account of the Berkeley research in Chapter 11 — the finding that AI intensifies work rather than reducing it — is a use-centered observation embedded within an innovation-centered framework. He recognizes the significance of the finding. He also moves past it, because the innovation narrative demands forward motion toward the next insight, the next framework, the next stage of the argument. A use-centered narrative would have lingered. It would have asked what the intensification means for the millions of workers who are not at the frontier, who are not building extraordinary things, who are simply working harder because the tools make more work possible and the internalized achievement imperative converts possibility into compulsion.

The misdirection of technological history is not a conspiracy. It is a structural feature of how dramatic narratives operate. Dramatic narratives need crises. They need turning points, phase transitions, before-and-after moments that organize the story into a comprehensible arc. Use-centered narratives do not have these features. They are slow, cumulative, ambiguous, and resistant to the dramatic structure that makes stories compelling.

This is precisely why use-centered analysis matters: because the features that make a narrative compelling are not the features that make it true. The compelling story is the crisis story. The true story is the slow story — the story of gradual adoption, mundane use, incremental adjustment, and the accumulation of small changes that, over decades, reshape the world without anyone noticing that it happened.

Edgerton would tell the AI story differently than Segal tells it. Not as a rupture that demands an immediate institutional response, but as a slow process that demands patient, sustained, empirically grounded attention to what is actually happening — not what the promoters say is happening, not what the crisis frame suggests might happen, but what the evidence shows is happening, right now, in the offices and classrooms and clinics and factories where ordinary people are using these tools to do ordinary work.

That story will take decades to tell. The crisis narrative has already told its story and moved on to the next crisis. The question is which narrative will produce better institutions, better policies, better understanding, and better outcomes for the people who live inside the transition rather than narrate it from above.

Edgerton's bet, informed by a century of evidence, is on the slow story. The slow story has been right every time before. The crisis narrative has been dramatic, compelling, and wrong — wrong not in its facts but in its emphasis, wrong not in what it includes but in what it renders invisible, wrong not in its intelligence but in its attention.

The most dangerous misdirection is not the one that tells you something false. It is the one that tells you something true so loudly that you cannot hear the quieter truth beside it. The quieter truth is always the one that matters more.

---

Chapter 9: The Global Deployment Gap

There is a map that does not exist but should. It would show, for every square kilometer of the Earth's surface, the actual rate of AI tool usage — not downloads, not subscriptions, not the number of people who have heard of ChatGPT, but the frequency with which human beings in that location use AI tools to accomplish real tasks in their real working lives.

The map would be almost entirely dark.

Bright spots in the San Francisco Bay Area. A glow along the Boston-Washington corridor. Clusters in London, Bangalore, Tel Aviv, Singapore, Shanghai, Seoul. Scattered points of light in every major city on every continent. And between the bright spots, an immense darkness — not the darkness of ignorance but the darkness of infrastructure, of affordability, of institutional absence, of the ten thousand mundane preconditions that must be satisfied before a technology moves from possibility to practice.

David Edgerton spent decades studying this darkness, though he studied it under a different name. He called it the deployment gap — the distance between a technology's existence and its actual use by the people whose lives it is supposed to improve. The deployment gap is the central fact of global technological history, and it is the fact that innovation narratives are structurally incapable of seeing, because innovation narratives begin at the bright spots and never look at the dark.

Segal describes the developer in Lagos and the student in Dhaka as beneficiaries of AI democratization. "A student in Dhaka can now access the same coding leverage as an engineer at Google," he writes in Chapter 14 of The Orange Pill. The sentence is followed immediately by an honest qualification: "Not the same salary. Not the same network. Not the same institutional support. Not the same safety net if the project fails." Segal knows the democratization is partial. He says so. But the structure of his argument — the arc from limitation to expansion, from scarcity to abundance — pulls toward the bright spots. The darkness remains, acknowledged but unexplored.

Edgerton would explore the darkness. Not because he denies the bright spots, but because the darkness is where most of the world's population lives, and their relationship to any technology is determined not by its theoretical capability but by the mundane infrastructure that determines whether the capability is accessible.

Consider what the developer in Lagos actually requires to use Claude Code productively. She requires electricity — reliable, continuous electricity, not the intermittent supply that characterizes much of sub-Saharan Africa, where the average Nigerian experienced over four thousand hours of power outage in 2023. She requires internet connectivity — not the theoretical coverage that appears on telecom maps but the actual bandwidth, measured in download speed and latency and cost per gigabyte, that determines whether a conversation with a large language model is technically feasible. In many African countries, a gigabyte of mobile data costs between two and five percent of average monthly income. A sustained coding session with Claude might consume several gigabytes per day. The arithmetic is prohibitive.
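
A minimal sketch makes "prohibitive" concrete. It uses the chapter's own figures for cost and consumption; the count of working days per month is an added assumption:

```python
# Rough affordability arithmetic for the Lagos developer described above.
# Cost-per-gigabyte and daily consumption follow the chapter's figures;
# the number of working days per month is an added assumption.

COST_PER_GB_AS_INCOME_SHARE = (0.02, 0.05)  # 2-5% of average monthly income
GB_PER_CODING_DAY = 3                       # "several gigabytes per day"
WORKING_DAYS_PER_MONTH = 22                 # assumption

gb_per_month = GB_PER_CODING_DAY * WORKING_DAYS_PER_MONTH
for share in COST_PER_GB_AS_INCOME_SHARE:
    months_of_income = gb_per_month * share
    print(f"at {share:.0%} of monthly income per GB, a month of sustained "
          f"use costs {months_of_income:.1f}x an average monthly income")
```

On these assumptions, the data charges alone for a month of sustained use exceed an entire average monthly income, before counting hardware, electricity, or the tool subscription itself.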

She requires hardware. A laptop capable of running a modern development environment costs, at minimum, three hundred to five hundred dollars — a sum that represents weeks or months of income for a significant fraction of the world's population. She requires English-language fluency at a level sufficient for effective prompting, because the tools are trained predominantly on English data and optimized for English-language interaction. She requires the specific cultural competency that comes from familiarity with the Silicon Valley conventions embedded in the tools' design — the assumptions about workflow, about project structure, about what constitutes a well-formed request. And she requires time — hours of uninterrupted cognitive work, which is itself a luxury in economies where survival demands that most waking hours be devoted to income generation.

Remove any one of these preconditions and the theoretical democratization fails. Not because the technology is inadequate, but because the infrastructure of use is missing.

Edgerton documented this pattern across the entire history of modern technology. The automobile was theoretically available to anyone who could afford one from the moment Ford began production. The actual deployment of the automobile required paved roads, fuel distribution networks, repair facilities, licensing systems, traffic regulation, and an entire institutional ecosystem that took decades to build and that reached different populations at vastly different speeds. In the United States, automobile ownership became widespread within a generation. In much of Africa and South Asia, it remains limited today, more than a century after the technology was invented.

The personal computer followed the same arc. Theoretically available from 1977. Actually deployed, in the sense of integrated into the working practices of ordinary people, over a period of thirty years in wealthy countries and still incomplete in poor ones. The internet followed the same arc again. Theoretically available from the mid-1990s. Actually deployed, measured by meaningful usage rather than nominal access, over a period that is still ongoing, with more than two billion people in 2026 remaining effectively offline.

AI will follow the same arc. The innovation narrative projects universal access from the existence of the technology. Edgerton's historical evidence projects decades of uneven deployment, determined not by the capability of the tool but by the mundane infrastructure that determines who can actually use it.

The bicycle, Edgerton's signature example, illustrates both the promise and the limitation of the democratization argument. The bicycle was, in its time, a genuine democratizing technology. It was cheap, mechanically simple, infrastructure-independent (it worked on dirt paths), maintainable by its users, and transformative for the communities that adopted it. Health workers on bicycles reached villages that no motorized vehicle could access. Students on bicycles attended schools that walking could not reach. The bicycle's democratizing power was real, and it was real precisely because the bicycle's infrastructure requirements were minimal. It did not need paved roads. It did not need fuel stations. It did not need repair facilities staffed by specially trained technicians. It needed a user, a path, and the ability to pedal.

AI's infrastructure requirements are the opposite of the bicycle's. AI requires reliable electricity, high-speed connectivity, expensive hardware, specialized language competency, and institutional support structures. It is, in infrastructural terms, closer to the automobile than to the bicycle — a technology whose theoretical democratizing potential is gated by an infrastructure that is expensive to build, slow to deploy, and distributed according to patterns of wealth and power that predate the technology by centuries.

This does not mean democratization will not happen. It means democratization will happen on a timeline measured in decades, not years, and it will happen unevenly, reaching the populations closest to existing infrastructure first and the populations farthest from it last. The developer in Lagos will benefit before the farmer in rural Niger. The student in Dhaka will benefit before the garment worker in the Dhaka factory district. The deployment gap will narrow, but it will narrow along lines of existing advantage, because infrastructure follows wealth.

Edgerton's most uncomfortable claim about the deployment gap is that the gap itself is not a temporary condition to be overcome. It is a structural feature of how technologies distribute in a world of unequal resources. Every technology that has ever been invented has been distributed unequally, and the pattern of inequality has been remarkably consistent: the populations with the most resources adopt first, the populations with the fewest resources adopt last, and the gap between them narrows more slowly than any innovation narrative predicts.

The policy implication is direct. Governments and institutions that want to democratize AI access should invest not in AI research but in infrastructure: reliable electricity, affordable connectivity, hardware subsidies, multilingual tool development, and institutional support for the mundane preconditions of adoption. The bicycle democratized mobility not because anyone invested in bicycle innovation but because the bicycle's infrastructure requirements were minimal. AI will democratize capability only to the extent that the infrastructure it requires is available — and making that infrastructure available is a production challenge, not an innovation challenge. It requires building power grids, not building models.

Segal is right that AI lowers the floor of who gets to build. Edgerton's addendum is that the floor is lowered only where the infrastructure exists to support the lower floor. In the bright spots on the map that does not exist, the floor has dropped dramatically. In the darkness between the bright spots, the floor has not moved at all.

The map would tell us where the actual transition is happening and where it is not. It would tell us who is being served and who is being left behind. It would redirect attention from the bright spots, where the innovation narrative lives, to the darkness, where the majority of the world's population lives. And it would remind us that the most important technological investments are not always the most dramatic ones. Sometimes the most important investment is a power line to a village that has never had reliable electricity — because without that power line, every AI tool in the world is a brochure, not a technology.

---

Chapter 10: Invention Is Easy; Maintenance Is Everything

In January 2026, Edo Segal stood on the CES floor in Las Vegas watching hundreds of people interact with Napster Station, a product that had not existed thirty days earlier. The compressed timeline was the proof. Thirty days from concept to functioning product, powered by AI-augmented development. The imagination-to-artifact ratio, collapsed to the width of a conversation.

What happened on February 1, the day after CES, is a question that The Orange Pill does not ask.

David Edgerton would ask it. He would ask it not to diminish the achievement but to complete the story, because the story of any technology is not the story of its creation. It is the story of its persistence — the long, unglamorous aftermath in which the created thing must be kept running under conditions the creator did not fully anticipate, by people who were not present at the creation, for a duration that exceeds the creator's attention.

Day 31 is when the maintenance begins. The code that Claude generated must be updated when dependencies change. The hardware must be serviced when components fail. The conversational AI model must be retrained when user interactions reveal failure modes that the original thirty-day sprint did not anticipate. The product must be adapted when it is deployed in environments — different venues, different acoustics, different languages, different user populations — that differ from the conditions of the original demonstration.

Each of these tasks is individually mundane. Collectively, they constitute the overwhelming majority of the labor that Napster Station will require over its lifetime. The thirty days of creation were dramatic, visible, celebratory. The years of maintenance that follow will be invisible — performed by people who were not on the CES floor, who do not appear in any account of the product's origin, and whose work will never be described as a phase transition.

Edgerton identified this asymmetry as the central distortion of innovation-centered thinking. Creation is visible. Maintenance is invisible. The visibility bias produces a systematic overvaluation of creation and undervaluation of maintenance that distorts investment, distorts policy, distorts education, and distorts public understanding of what technology actually requires.

The asymmetry has deepened in the AI era, and for a specific reason. AI tools dramatically accelerate creation. They do not dramatically accelerate maintenance. The imagination-to-artifact ratio has collapsed. The artifact-to-maintained-system ratio has not. If anything, it has worsened, because the acceleration of creation produces more artifacts, each of which requires maintenance, and the total maintenance burden grows faster than the capacity to address it.

Consider the mathematics. Before AI, a team of twenty engineers might produce one new product per quarter while maintaining the existing product portfolio. With AI-augmented development, the same team might produce five new products per quarter — a conservative fraction of Segal's twenty-fold productivity multiplier, applied to creation. But each new product enters the maintenance queue. After one year, the team has created twenty new products (versus four in the pre-AI world) and must maintain all of them simultaneously. The maintenance burden has grown fivefold. The team's capacity for maintenance has not grown at all, because maintenance is the part of the work that AI handles least well.
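
A toy model shows how quickly the queue compounds. The sketch below assumes, as the paragraph does, that every shipped product stays in the maintenance queue indefinitely; the eight-quarter horizon is an arbitrary illustration:

```python
# Toy model of the creation/maintenance asymmetry described above: AI
# multiplies the rate of creation, but every shipped product joins a
# maintenance queue, and maintenance capacity stays flat.

def maintenance_queue(products_per_quarter: int, quarters: int) -> list[int]:
    """Cumulative products needing maintenance at the end of each quarter,
    assuming nothing is ever retired (the 'prototype graveyard' case)."""
    return [products_per_quarter * q for q in range(1, quarters + 1)]

pre_ai = maintenance_queue(products_per_quarter=1, quarters=8)
ai_augmented = maintenance_queue(products_per_quarter=5, quarters=8)

for quarter, (old, new) in enumerate(zip(pre_ai, ai_augmented), start=1):
    print(f"Q{quarter}: pre-AI queue = {old:2d}, AI-augmented queue = {new:2d} "
          f"({new // old}x the maintenance burden)")
```

After one year the queues stand at four and twenty, matching the paragraph's figures; after two years, at eight and forty. The ratio never improves, because the model gives the team no additional maintenance capacity.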

This is not a hypothetical projection. It is a pattern already visible in organizations that adopted AI development tools early. The proliferation of prototypes — products built in days or weeks, launched enthusiastically, and then left to accumulate technical debt — has created what some engineers are calling "prototype graveyards": collections of functioning-but-unmaintained systems that work today and will break tomorrow, and that nobody has been assigned to tend.

Edgerton would recognize this immediately. He documented the identical pattern in the history of military technology, where the enthusiasm for acquiring new weapons systems consistently outstripped the capacity to maintain them. The British military in the Second World War discovered that acquiring a new tank was the easy part. Keeping it operational in the North African desert — maintaining the engine, replacing the treads, sourcing the spare parts, training the mechanics, building the supply chain — consumed ten times the resources of the original acquisition. The acquisition was dramatic. The maintenance was invisible. And when maintenance failed, the tank became a sculpture.

The same dynamic appears in the history of development aid. Wealthy nations have spent decades donating advanced technologies — medical equipment, water purification systems, agricultural machinery — to developing countries. The donation is dramatic: a photo opportunity, a press release, a moment of visible generosity. The maintenance is invisible. And when the donated equipment breaks, as all equipment eventually does, there are no spare parts, no trained technicians, no institutional infrastructure for repair. The equipment is abandoned. The village returns to the older technology that works — the hand pump, the bicycle, the corrugated iron sheet — because those technologies can be maintained with local resources and local knowledge.

AI-generated systems face the same vulnerability. A prototype built in thirty days with Claude Code is a donation of capability. Maintaining that capability requires a sustained institutional commitment that extends far beyond the initial creation: ongoing access to the AI tool, ongoing availability of engineers who understand the generated code well enough to maintain it, ongoing investment in the infrastructure that supports the system, and ongoing attention to the thousand small failures that accumulate when any system operates in the real world over real time.

Segal acknowledges part of this in his Chapter 19 analysis of the Software Death Cross. He writes that the value of a software company is not in its code but in its ecosystem — "the data layer, the integrations, the institutional trust, the accumulated workflow patterns." This is, structurally, a maintenance argument. The ecosystem Segal describes is the product of maintenance: years of patient, unglamorous work by people who tended the system, responded to failures, adapted to changing conditions, and accumulated the institutional knowledge that makes the system reliable.

But the argument, in Segal's hands, serves the innovation narrative. The ecosystem's value is presented as a defense against the Death Cross — a reason why existing software companies will survive despite the commoditization of code. Edgerton would reframe it differently. The ecosystem is not merely a competitive advantage. It is the actual product. The code was always the least important part. The maintenance — the ongoing, invisible, essential work of keeping the system running — was always the real work, and the real value, and the real reason the ecosystem exists.

The implications for the AI economy are stark. If creation becomes cheap and maintenance remains expensive, then the economic center of gravity shifts from innovators to maintainers. The people who matter most will not be the people who build new things. They will be the people who keep existing things running — who understand the systems well enough to diagnose their failures, who have the patience and the skill to perform the unglamorous work of upkeep, who resist the cultural pressure to move on to the next shiny prototype and instead tend to the systems that people actually depend on.

This is a different vision of the future than The Orange Pill presents. Segal's future is organized around creative directors, integrative thinkers, question-askers — the people who decide what should exist. Edgerton's future includes all of these, but adds a population that Segal's narrative renders invisible: the maintainers. The people who keep the dams from eroding. The people who show up on Day 31 and every day after. The people whose work has never been the subject of a manifesto and never will be, because maintenance is structurally incompatible with the dramatic narrative that manifestos require.

Russell and Vinsel, in their extension of Edgerton's work, proposed a simple reframing: "Innovation is a small piece of what happens with technology. Most of the time, most of the technology around us is not new, and most of the work done with technology is not innovative. It is the work of keeping existing systems going." This observation, applied to AI, produces a prediction: the AI workforce of 2035 will contain more maintainers than creators. The ratio will not be close. And the maintainers will be, as they have always been, the invisible majority — essential, skilled, and uncelebrated.

The sunrise from the tower's roof is visible because someone maintained the tower. The staircase holds because someone inspected the joints. The windows are clear because someone cleaned them. The observation deck supports the weight of everyone who climbed to see the view because someone checked the load-bearing capacity and reinforced the structure when it showed signs of stress.

Segal built the tower. Edgerton asks who will keep it standing.

The answer, as it has been for every technology in the history of civilization, is the people whose names do not appear on the architect's drawing. The people whose work begins on Day 31 and never ends. The people who do not build the dam but maintain it — stick by stick, day by day, against the constant pressure of a current that does not care about manifestos, about innovation narratives, about the drama of creation.

The river does not stop because the dam was well-built. The river tests the dam every day. The maintainers are the ones who answer.

---

Epilogue

The question I could not shake, after months inside Edgerton's thinking, was not about technology at all. It was about counting.

What do we count? When we measure the impact of AI — when I stand in front of a team or a boardroom or a dinner table and talk about what happened in that room in Trivandrum, or what we built in thirty days before CES — what am I counting?

I am counting creation. I am counting the dramatic. I am counting the artifacts that emerged from the collision between human intention and machine capability. And every number I cite — twenty-fold productivity, thirty-day build cycles, a trillion dollars of evaporated market value — is a number from the bright spots on a map that is mostly dark.

Edgerton does not tell me I am wrong. That is what makes his work so difficult to dismiss. He tells me I am counting the wrong things, or rather, counting too few things. He tells me that for every artifact I celebrate, there is a maintenance burden I have not measured. For every frontier demonstration, there are a million mundane uses I have not seen. For every developer in Lagos whose story I invoke, there is a village without reliable electricity where my democratization narrative is, functionally, fiction.

I knew some of this before I read him. In The Orange Pill, I wrote about the senior engineer who lost architectural intuition after months with Claude. I wrote about the addictive pull of productive compulsion. I acknowledged that the democratization was "real but partial." But acknowledgment is not the same as attention. I acknowledged the maintenance burden the way a builder acknowledges the weather — noted, accounted for in the schedule, and then forgotten in the excitement of watching the structure rise.

Edgerton does not let you forget the weather. He does not let you forget that the structure must stand through seasons you did not plan for, maintained by people you have not met, serving purposes you did not design for. He insists — quietly, empirically, without drama — on counting everything. The mundane alongside the extraordinary. The maintenance alongside the creation. The darkness alongside the light.

What I take from his work is not a contradiction of what I wrote. It is a completion. The tower I described in The Orange Pill has a view from the roof that is genuinely extraordinary. But the tower rests on a foundation that I described too briefly and populated with maintainers I rendered almost invisible. The staircase I asked you to climb was built by people whose labor I took for granted.

If the argument of The Orange Pill is that we must be worthy of amplification, then Edgerton adds a hard corollary: we must be worthy of maintenance. Worthy of the sustained, unglamorous, daily attention that keeps what we build from collapsing. Worthy of the patience that maintenance requires — patience that the innovation narrative, with its compressed timelines and dramatic arcs, actively undermines.

I still believe the sunrise is real. I still believe the view from the roof changes what you see. But I understand now, better than I did when I wrote those words, that the roof holds only because someone tends the joints. And tending the joints is work that deserves not just acknowledgment but celebration — the kind of celebration that our culture reserves for creators but owes equally to the people who keep the created world intact.

The mundane is where most of us live. The mundane is where most of the impact happens. And the mundane deserves better than to be the backdrop to someone else's manifesto.

Including mine.

-- Edo Segal

The AI revolution has its heroes: the thirty-day product sprint, the trillion-dollar market correction, the engineer who crossed domain boundaries overnight. David Edgerton has spent his career studying what those stories leave out. Through his use-centered history of technology, Edgerton reveals that the most consequential story is never the invention -- it is the decades of maintenance, mundane deployment, and uneven adoption that determine whether a technology actually changes lives. This book applies Edgerton's framework to the AI moment and discovers an uncomfortable truth: the autocomplete that saves five minutes may reshape more labor than any frontier demonstration, the old tools persist longer than any obituary predicts, and the invisible maintainers -- not the celebrated creators -- are the ones who keep the world running.

"Hype about new technology is ahistorical, crude nonsense. There is hardly anything original about it. All that changes is the particular machine."
— David Edgerton, testimony before the UK House of Lords Select Committee on Artificial Intelligence