By Edo Segal
The builders were never who I thought they were.
I spent thirty years in the technology industry operating under an assumption so pervasive I mistook it for a law of nature: innovation flows from producers to consumers. Companies build. Users use. The interesting questions are about how to build faster, ship sooner, capture more of the market before a competitor does. The entire apparatus I inhabited — the product roadmaps, the sprint cycles, the feature prioritization meetings — was organized around the premise that creation is our job and adoption is theirs.
Then I read Eric von Hippel, and the premise collapsed.
Not because he argued against it philosophically. Because he counted. He went into laboratories and operating rooms and machine shops and surfboard garages and he counted who actually built the innovations that manufacturers eventually sold. The answer, across industry after industry, was the users. Seventy-seven percent of scientific instrument innovations. Two-thirds of semiconductor process innovations. The mountain bikes, the surgical tools, the open-source software that runs the infrastructure of the modern internet. Users built them. Manufacturers noticed later.
Sixteen million Americans were modifying and creating products for their own use, and the innovation economy that I belonged to had no way to see them. Not because they were hiding. Because our instruments were pointed at the wrong part of the river.
In The Orange Pill, I describe the moment the imagination-to-artifact ratio collapsed — when a marketing manager could build a custom CRM in an afternoon, when a teacher could create an assessment tool calibrated to her specific students, when I could describe Napster Station to Claude and watch it materialize. I called that moment a democratization of capability. Von Hippel's framework showed me it was something more precise: a cost collapse that released decades of pent-up user innovation that had always been there, latent, waiting for the barrier to break.
The distinction matters. Democratization sounds like a gift bestowed by the powerful. What actually happened is that millions of people who already knew what they needed finally gained the means to build it. The knowledge was always theirs. The sticky, contextual, impossible-to-transfer understanding of their own problems was always theirs. What changed was the cost of acting on it.
This book applies von Hippel's four decades of empirical research to the AI moment, and what it reveals is both more hopeful and more demanding than the standard narrative. More hopeful because the flood of innovation is real and broad and human. More demanding because the institutions that must govern that flood — the quality mechanisms, the commons protections, the dams — do not yet exist at the scale the moment requires.
The builders were always everywhere. Now they have tools.
— Edo Segal ^ Opus 4.6
Eric von Hippel (1941–present) is an American economist and professor at the MIT Sloan School of Management, where he has held the T. Wilson (1953) Professorship in Management since the 1990s. Born in the United States, he trained as an economist and engineer before joining MIT, where he has spent over four decades conducting empirical research on the sources of innovation. His landmark studies of scientific instruments, semiconductor equipment, sporting goods, and medical devices demonstrated that users — not manufacturers — are the primary source of innovation in many industries, overturning the producer-centric model that had dominated innovation economics. His key concepts include lead users (users whose advanced needs anticipate broader market trends), sticky information (knowledge that is costly to transfer from its point of origin), toolkits for user innovation (manufacturer-provided platforms that shift design authority to users), and free innovation (the large-scale phenomenon of individuals innovating at their own expense without expectation of financial return). His major works include *The Sources of Innovation* (1988), *Democratizing Innovation* (2005), and *Free Innovation* (2017). Von Hippel's research has reshaped innovation policy, corporate R&D strategy, and the understanding of open-source development, and his empirical methods — counting who actually innovates rather than assuming who should — established a research tradition that continues to expand across dozens of industries worldwide.
For most of the twentieth century, the economics of innovation rested on an assumption so widely shared it was never examined. The assumption was this: producers innovate, consumers consume. Firms invest in research and development, hire engineers, build laboratories, file patents, and bring new products to market. Consumers evaluate what is offered and buy or do not buy. The flow of innovation runs in one direction — from the corporation to the customer — and the interesting questions are about how to make that flow more efficient, more profitable, more predictable.
Eric von Hippel spent forty years demonstrating that this assumption is, for a large and consequential class of innovations, empirically false.
The evidence began accumulating in the mid-1970s, when von Hippel, then a young professor at MIT's Sloan School of Management, undertook a study of innovation in scientific instruments — the gas chromatographs, nuclear magnetic resonance spectrometers, ultraviolet spectrophotometers, and transmission electron microscopes that form the backbone of laboratory research. The conventional model predicted that instrument manufacturers would be the primary source of innovation. They had the engineering talent, the capital, the incentive structures. They were in the business of making these instruments better.
The data said otherwise. Of one hundred and eleven innovations von Hippel studied across four instrument categories, roughly seventy-seven percent had been developed not by the manufacturers but by the scientists who used the instruments. The users had identified a need in the course of their research, built a prototype to address it, and only then had the manufacturer recognized the innovation's commercial potential and incorporated it into a production model. The direction of flow was reversed. Innovation moved from user to producer, not the other way around.
This was not a marginal finding. It was not a special case confined to a single industry with unusual characteristics. Over the next four decades, von Hippel and a growing community of researchers replicated the pattern across an extraordinary range of domains. In semiconductor process equipment, users originated roughly two-thirds of the innovations that manufacturers eventually commercialized. In sporting equipment — mountain bikes, windsurfing rigs, skateboard designs — users were the dominant source of novel product concepts. In surgical instruments, the physicians who performed procedures were the ones who modified, adapted, and reinvented the tools they held in their hands. In software, the open-source movement provided a massive, distributed demonstration that users could not only innovate but could organize to produce software systems of extraordinary complexity and reliability without any manufacturer in the loop at all.
The pattern was consistent enough that von Hippel could formalize it. Users innovate, his research showed, when two conditions are met. First, their needs must be heterogeneous — sufficiently diverse that no single manufacturer can serve them all with standardized products. A surgeon whose specific procedural technique creates a specific instrumental need that no catalogue instrument addresses has a heterogeneous need. A scientist whose experimental protocol requires an instrument modification that no manufacturer offers has a heterogeneous need. The more diverse the user population, the more likely it is that individual users will face needs that the manufacturer's product development process has not anticipated and cannot economically serve.
Second, the cost of innovation must be low enough, relative to the benefit, to make it rational for the user to build rather than wait. A surgeon who needs a modified retractor and can bend one in the hospital workshop in an afternoon will do so. The same surgeon, if the modification required a six-month engineering project and a hundred thousand dollars in tooling, would submit a suggestion to the manufacturer and wait. The cost-benefit ratio determines the threshold: below it, users innovate; above it, they endure.
These two conditions — heterogeneous needs and a favorable cost-benefit ratio — constitute the structural explanation for user innovation. The explanation is not psychological (users are creative). It is not ideological (users should innovate). It is economic. Users innovate because, given the specific conditions of their situation, innovating is cheaper than any alternative.
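The two conditions can be collapsed into a single decision rule. The sketch below is an illustration of the structure of the argument only, not anything drawn from von Hippel's papers; the field names and the dollar figures are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class User:
    """A user facing a possible build-or-endure decision (illustrative)."""
    need_met_by_market: bool  # condition 1: does any standard product serve the need?
    benefit: float            # value to this user of a precisely fitted solution
    build_cost: float         # what it costs *this* user to build one

def innovates(u: User) -> bool:
    # A user innovates only when no off-the-shelf product serves the need
    # AND building is cheaper than the benefit of having the need solved.
    return (not u.need_met_by_market) and (u.benefit > u.build_cost)

# The surgeon with a hospital workshop: unserved need, an afternoon of effort.
surgeon = User(need_met_by_market=False, benefit=5_000, build_cost=200)
# The same need, but solving it would take a six-month engineering project.
blocked = User(need_met_by_market=False, benefit=5_000, build_cost=100_000)

print(innovates(surgeon))  # True: below the threshold, the user builds
print(innovates(blocked))  # False: above it, the user endures
```

Note that nothing in the rule refers to creativity or motivation; moving a user from `blocked` to `surgeon` requires changing only the cost term, which is exactly the variable the chapter argues the language interface has changed.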
The implications extend far beyond the academic literature. If the conventional model is wrong — if innovation flows from users as readily as from producers, and in some domains more readily — then the entire institutional architecture built around the conventional model needs reexamination. Patent systems designed to incentivize producer innovation may impede user innovation by creating barriers to sharing. Corporate R&D strategies designed to generate innovations internally may be less effective than strategies designed to identify and incorporate innovations that users have already developed. Government innovation policies that direct funding to firms may miss the largest and most dynamic source of innovation in the economy.
Von Hippel organized his findings into a framework that has proven remarkably durable. The framework identifies user innovation not as an anomaly but as a structural feature of economies in which user needs are diverse and innovation costs are variable. The rate of user innovation in any domain is a predictable function of these two variables. Change the variables and the rate changes with them.
This is the point where the framework meets the present moment with extraordinary force.
The Orange Pill documents three individuals who, in the winter and spring of 2025-2026, built working software products to solve their own specific problems. A marketing manager who needed a customer relationship management system tailored to her particular workflow built one in an afternoon using Claude Code. A teacher who needed a reading tracker calibrated to the specific developmental needs of her students built one through a series of conversations with an AI assistant. An architect who needed a structural analysis tool that integrated with her existing design software prototyped one over a weekend.
Each of these individuals is a user innovator in von Hippel's precise sense. Each faced a heterogeneous need — a need specific enough that no commercial product addressed it. The marketing manager's workflow did not match the assumptions embedded in Salesforce or HubSpot. The teacher's assessment framework did not align with the standardized tools available to her district. The architect's integration requirements fell outside the scope of any existing analysis platform. Each had the motivation to innovate: the need was real, immediate, and personal.
What changed was the cost.
Before the language interface, each of these innovations would have required either years of self-taught programming or the hiring of a professional developer. The marketing manager could have described her needs to a developer, waited weeks for a prototype, reviewed it, requested changes, waited again, and eventually — if the budget held and the developer understood the specification — received something approximating what she needed. The cost would have been measured in thousands of dollars and months of elapsed time. At that cost, the innovation was not rational. The marketing manager would have adapted her workflow to Salesforce's assumptions and endured the mismatch.
With the language interface, the cost dropped to an afternoon and a subscription fee. The cost-benefit ratio crossed von Hippel's threshold. The innovation that had been latent — the unmet need that the user had been enduring because the cost of solving it exceeded the benefit — was suddenly rational to pursue.
And this is the critical point: the marketing manager's need did not change. Her creativity did not change. Her motivation did not change. The only variable that changed was cost. Von Hippel's framework predicts, with the precision of an economic model rather than the vagueness of a cultural narrative, that a collapse in the cost of innovation will produce a corresponding explosion in the rate of user innovation. The prediction is not aspirational. It is structural. Lower the cost, and users who were previously priced out of innovating will innovate. The magnitude of the explosion is proportional to the magnitude of the cost reduction.
The magnitude of this particular cost reduction is unprecedented in the history of human tool-making.
The marketing manager's CRM, built in an afternoon, would have taken a professional developer two to four weeks to build from scratch. The teacher's reading tracker, conversed into existence over a series of sessions, would have required a software specification, a development cycle, and a testing phase totaling perhaps three months. The architect's analysis tool, prototyped over a weekend, would have required a specialized engineering team and a budget in the tens of thousands of dollars.
In each case, the cost dropped by one to two orders of magnitude. Not a marginal improvement. A structural transformation of the economics of building.
Von Hippel's research predicts what happens next, and the prediction is grounded not in speculation but in four decades of empirical observation across dozens of industries. When the cost of innovation drops by an order of magnitude, the population of users who find it rational to innovate expands by a corresponding order of magnitude. Users who previously endured unmet needs — because the cost of addressing them exceeded the benefit — now find it rational to build. The threshold has moved, and it has moved beneath millions of people who were standing above it.
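The shape of that prediction can be made concrete with a toy simulation. Everything below is an assumption made for illustration — the lognormal spread of benefits, the population size, and both cost figures are invented, not data from von Hippel's studies — but it shows the structural point: when benefits are widely dispersed across a user population, a large drop in build cost moves the threshold beneath a far larger group of users.

```python
import random

random.seed(0)

# Toy population: each user's benefit from a precisely fitted solution,
# drawn from an (assumed) lognormal distribution to model wide dispersion.
N = 100_000
benefits = [random.lognormvariate(mu=4.0, sigma=2.0) for _ in range(N)]

def innovators(cost: float) -> int:
    """Count users for whom building is rational at a given cost."""
    return sum(1 for b in benefits if b > cost)

before = innovators(5_000)  # e.g. hiring a developer for weeks
after = innovators(50)      # e.g. an afternoon plus a subscription

print(f"rational innovators at high cost: {before}")
print(f"rational innovators at low cost:  {after}")
print(f"expansion factor: {after / before:.1f}x")
```

Under these invented parameters, a two-order-of-magnitude cost drop expands the innovating population by well over an order of magnitude; the exact multiple depends entirely on the assumed benefit distribution, which is why the chapter's claim is structural rather than numerical.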
The three stories Segal tells are not anecdotes. They are the first visible data points of a structural transformation that von Hippel's framework makes legible in a way no other analytical lens quite can. The stories are moving not because the individuals are exceptional — though they may be — but because they are ordinary. They are users who faced problems and solved them. The only thing that changed was that solving the problem became cheap enough to attempt.
The thing the conventional innovation model cannot see is the scale of the latent demand: the unmet needs of millions of users who have been enduring mismatches between their specific requirements and the standardized products available to them. The conventional model counts the innovations that producers bring to market. It does not count the innovations that users would have built if the cost had been low enough. That uncounted reservoir of latent user innovation is, von Hippel's research suggests, vastly larger than the visible stream of producer innovation. And the language interface has just removed the dam.
The conventional model will require revision. Not because it was wrong about producer innovation — firms do innovate, and will continue to innovate — but because it was incomplete. It described one tributary and mistook it for the entire river. The user innovation tributary was always there, flowing beneath the surface of the visible economy, constrained by cost but never eliminated. The language interface has brought it to the surface, and the flow is larger than anyone standing on the bank of the producer tributary was prepared to see.
---
Not all users innovate. Even when the cost of innovation drops dramatically, the majority of users in most markets continue to use commercially available products as they find them. They may complain about the fit. They may wish the product worked differently. But they do not build alternatives. The cost-benefit calculation, even at reduced costs, does not cross their threshold.
The users who do innovate tend to share two specific characteristics that von Hippel identified and validated across multiple decades of field research. First, they face needs that are ahead of the general market — needs that will eventually become widespread but that are currently experienced only by people operating at the leading edge of a practice or a market trend. Second, they expect to benefit significantly from obtaining a solution. The combination of early need and high expected benefit produces a population of users who are both motivated and positioned to innovate: motivated because the benefit of a solution is large enough to justify the effort, and positioned because their location at the frontier of a practice gives them access to problem information that users further from the frontier do not yet possess.
Von Hippel called these individuals lead users, and the concept has proven to be among the most productive in innovation economics.
The lead-user concept is not a personality type. It is not a measure of creativity, intelligence, or entrepreneurial spirit. It is a structural position in a market. A lead user is a person whose relationship to a problem — the intensity of the need, the specificity of the context, the proximity to the frontier — creates the conditions under which innovation is rational. Change the person's position and the innovation incentive changes with it. The same surgeon who innovates in the operating room, where her needs are intense and specific, does not innovate in her kitchen, where her needs are generic and easily served by commercial products. Lead-user status is domain-specific, not a general trait.
The importance of lead users extends beyond the innovations they produce. Lead users matter because their innovations are predictive. The needs they face today are the needs the broader market will face tomorrow. The surgeon who modifies a retractor to accommodate a new minimally invasive technique is responding to a trend — the shift toward minimally invasive surgery — that will eventually reshape the entire surgical instrument market. Her innovation is not idiosyncratic. It is an early signal of a market trajectory that has not yet become visible to the manufacturers whose planning horizons are calibrated to current, not future, demand.
This predictive quality is what makes lead users disproportionately valuable to the innovation ecosystem. A manufacturer that identifies lead-user innovations early gains a window into future market needs that no amount of conventional market research can provide. Surveys and focus groups capture the needs of average users — users who are, by definition, experiencing the present, not the future. Lead users are experiencing the future now, because their position at the frontier of practice exposes them to problems that the rest of the market has not yet encountered.
The three innovators described in The Orange Pill are lead users in this precise sense. The marketing manager who built a custom CRM was not a typical marketing professional. She was operating at the leading edge of a specific practice — a workflow that integrated customer communication, pipeline tracking, and performance analytics in a way that no existing CRM supported. Her need was not a complaint about Salesforce's user interface. It was a structural mismatch between her emerging workflow and the assumptions embedded in commercially available tools. That mismatch will become more widespread as more marketing professionals adopt similar workflows. Her innovation anticipates the market.
The teacher who built a reading tracker was not simply dissatisfied with existing assessment tools. She was working with a specific population of students, in a specific developmental context, using a specific pedagogical approach that required formative assessment data organized in a way that no standardized tool provided. Her need was rooted in a trend — the shift toward individualized, data-informed instruction — that is gradually transforming elementary education. Her innovation is not a curiosity. It is a prototype of the assessment infrastructure that thousands of classrooms will eventually require.
The architect who built a structural analysis tool was responding to a convergence of computational design methods and traditional structural engineering that is reshaping architectural practice. Her tool integrated parametric design outputs with structural performance criteria in a way that no commercial platform yet supported. The integration need will intensify as computational design methods penetrate further into mainstream practice. Her weekend prototype foreshadows a category of tools that does not yet exist commercially.
In each case, the lead user possessed something that no manufacturer could easily access: the specific, embodied, contextual knowledge of what the problem actually was. The marketing manager knew her workflow from the inside. The teacher knew her students from daily observation over years. The architect knew the gap between her design tools and her structural requirements from having lived in that gap through dozens of projects. This knowledge is what von Hippel calls sticky information — the subject of the next chapter — and its distribution is the structural reason why user innovation persists even in markets where manufacturers have every incentive to innovate.
What distinguishes AI-augmented lead users from their historical predecessors is a compression of the innovation cycle that has no precedent in von Hippel's four decades of empirical observation. Traditional lead users often performed the most cognitively demanding part of innovation — identifying the need, understanding the problem, envisioning the solution — but then hit the implementation barrier. The surgeon who knew exactly what the modified retractor should look like still needed an engineering team to build the prototype. The scientist who knew exactly what the instrument modification should accomplish still needed a machinist or an electronics technician to execute the modification. The identification of the need and the building of the solution were separated by a gap that required either personal technical skill or access to someone else's technical skill.
The language interface closes this gap. The lead user who identifies the need can now build the solution herself, in the same cognitive session, without switching from the language of need-identification to the language of technical implementation. The marketing manager did not write a specification and hand it to a developer. She described her workflow to Claude and received a working system. The cognitive arc from problem to solution was unbroken.
This compression matters for three reasons that extend beyond individual productivity.
First, it accelerates the rate at which lead-user innovations enter the broader ecosystem. In the traditional model, the gap between need-identification and implementation introduced delays measured in months or years. The surgeon identified the need, found an engineer, explained the need (imperfectly, because the information was sticky), waited for a prototype, tested it, requested modifications, waited again. Each cycle introduced delay and information loss. The compression of this cycle to hours or days means that lead-user innovations become available to the broader market faster — available not as commercial products but as demonstrations, as proofs of concept, as evidence that a particular need can be addressed. The faster lead-user innovations become visible, the faster the broader market can respond.
Second, the compression expands the population of lead users who actually innovate rather than merely enduring their unmet needs. Von Hippel's research documented many lead users who identified needs but did not build solutions, because the implementation cost was too high relative to the benefit. These users possessed the most valuable input to the innovation process — the problem definition — but lacked the means to produce the output — the working solution. They were, in economic terms, innovation-constrained: motivated to innovate but blocked by cost. The language interface removes the constraint. Lead users who would have endured their needs in silence now build.
Third, and most subtly, the compression changes the nature of the innovation itself. When the gap between need-identification and implementation is measured in months, the innovator has time to formalize the need, to abstract it, to generalize it beyond the specific context in which it arose. This formalization can be valuable — it makes the innovation more transferable — but it also introduces distortion. The formalized version of the need is never quite the same as the raw, contextual, embodied version. The language interface allows the user to innovate directly from the raw need, without the intermediate step of formalization. The resulting innovation is more precisely fitted to the actual problem, because it has not been distorted by the translation process.
This precision of fit — the degree to which an innovation addresses the specific, contextual, embodied need of the user who created it — is the defining quality of user innovation. It is the thing that distinguishes a surgeon's hand-modified retractor from a manufacturer's standardized instrument. It is the thing that distinguishes the marketing manager's custom CRM from Salesforce. It is the thing that makes user innovations, in aggregate, more valuable than their individual modesty might suggest: each one is precisely fitted to a specific need, and the aggregate of precisely fitted solutions addresses a range of human needs that no manufacturer, however sophisticated, could survey or serve.
The lead users are building. The language interface has given them the means. And the innovations they produce — modest individually, transformative in aggregate — are the earliest signals of a market that has not yet fully formed but whose shape von Hippel's framework makes visible to anyone willing to look at the data rather than the assumptions.
---
The most deceptively simple question in innovation economics is this: If manufacturers have the engineering talent, the capital, and the market incentive to innovate, why do users so often beat them to it?
Von Hippel's answer centers on a concept he introduced in a 1994 paper and refined over the following decades: sticky information. The concept is straightforward in its definition and radical in its implications. Sticky information is information that is costly to transfer from one location to another. The cost is not primarily financial. It is cognitive. The information resides in a specific context — a user's embodied experience, a practitioner's tacit knowledge, a workflow's accumulated adaptations — and resists extraction, codification, and transmission without significant loss of fidelity.
A surgeon's knowledge of what is wrong with a particular instrument is a paradigmatic example. She has held the instrument inside a human body a thousand times. She knows, not from reading a specification but from the accumulated memory of her hands, that the angle of the jaw is wrong by three degrees for the approach she uses in a specific subset of procedures. She knows that the ratchet mechanism creates a vibration that interferes with her tactile feedback at a critical moment. She knows that the grip diameter is too large for the sustained hold her technique requires. This knowledge is enormously specific, deeply contextual, and almost entirely tacit. It lives in the intersection of her particular technique, her particular anatomy (the size and strength of her hands), and the particular procedure she performs on the particular patient population she serves.
Transferring this knowledge to the manufacturer is expensive. Not because the surgeon is unwilling to share — most user innovators are remarkably willing to share their knowledge, a finding von Hippel documented extensively — but because the knowledge resists the codification that transfer requires. The surgeon can describe the problem in natural language: "The jaw angle is wrong for my approach." But this description is incomplete. It does not convey the specific angle, the specific approach, the specific subset of procedures, the specific interaction between the instrument and the surgeon's hand in the particular configuration of a particular operating field. To convey all of this, the surgeon would need to produce a detailed engineering specification — a document that translates her tacit, embodied knowledge into the formal language of the manufacturer's design process.
This translation is the bottleneck. The surgeon is expert in surgery, not in engineering specification. The manufacturer is expert in engineering, not in the specific, contextual, embodied experience of a particular surgeon performing a particular procedure. The information that would enable the manufacturer to build the right instrument is locked inside the surgeon's experience, and the cost of extracting it — of translating it from the surgeon's cognitive format into the manufacturer's cognitive format — is high enough that the transfer frequently does not occur. The need remains unmet. The surgeon adapts, or endures, or modifies the instrument herself if she has the skill.
This is the structural mechanism that explains user innovation. Users innovate not because they are more creative than manufacturers. They innovate because they possess the information necessary to innovate — the sticky, contextual, embodied knowledge of their own needs — and because the cost of acting on that information (building the solution themselves) is often lower than the cost of transferring the information to a manufacturer and waiting for the manufacturer to act.
The stickiness of information is not a market failure in the traditional sense. It is a feature of how knowledge is distributed in the world. The person closest to the problem knows the most about the problem, and the most important things she knows are the things she cannot easily tell anyone else. This is true in surgery, in teaching, in architectural design, in marketing, in every domain where the quality of a solution depends on understanding the specific context in which the solution will be used.
The language interface transforms the economics of sticky information in a way that von Hippel's 1994 framework anticipated in structure, even if it could not have anticipated in mechanism.
Before the language interface, the user who possessed sticky information about her needs had two options. She could attempt to transfer the information to a manufacturer — through surveys, focus groups, customer support interactions, product reviews — and accept the degradation that the transfer process imposed. Or she could act on the information herself, building a solution that leveraged her direct access to the sticky knowledge, but only if she possessed or could acquire the technical skills necessary to build.
The language interface introduces a third option: the user describes her need in natural language, and a machine translates the description into a working solution. The critical feature of this option is that it does not require the user to fully codify her knowledge. She describes the problem as she experiences it — in the language she uses to think about it, with the ambiguities and contextual references that natural language permits — and the machine performs the translation from description to implementation.
The stickiness of the information has not been eliminated. The surgeon's tacit knowledge of the instrument's failings is still tacit. The teacher's embodied understanding of her students' developmental needs is still embodied. The architect's intuitive sense of the gap between her design tools and her structural requirements is still intuitive. What has changed is the cost of acting on that knowledge. The user no longer needs to either codify her knowledge (expensive, lossy) or acquire technical skills (expensive, time-consuming). She needs only to describe her need in the language she already uses to think about it.
This reduction in the cost of acting on sticky information has a specific structural consequence that von Hippel's framework predicts with considerable precision. In the traditional model, the stickiness of user need-information created a barrier to innovation — a barrier that only users with intense needs and sufficient technical skill could overcome. The user innovators who appeared in von Hippel's studies were, disproportionately, technically skilled. The scientist who modified her electron microscope was a physicist who understood electronics. The surgeon who modified her retractor had machining skills. The software users who contributed to open-source projects were, by definition, programmers. The sticky-information barrier was partially permeable — it allowed technically skilled users through while blocking the rest.
The language interface makes the barrier permeable to any user who can describe her need. The marketing manager does not need to know how to write code. She needs to know what she needs. The teacher does not need to understand database architecture. She needs to understand her students. The architect does not need to program structural analysis algorithms. She needs to know what her buildings must withstand. The technical skill that was previously required to act on sticky information has been absorbed by the machine. What remains is the skill that was always the most important: the ability to know what you need. Domain expertise. Contextual understanding. The sticky information itself.
This is a rebalancing of what matters in innovation. Previously, two forms of knowledge were necessary to innovate: knowledge of the problem (which the user possessed) and knowledge of the solution technology (which the manufacturer or the technically skilled user possessed). The language interface separates these two forms of knowledge and eliminates the second as a barrier. Knowledge of the problem is now sufficient to innovate, because the machine supplies the solution technology on demand.
The practical consequence is a vast expansion of who gets to innovate. The population of users with sticky information about their needs — information that would be costly to transfer to a manufacturer — is the entire population of users. Everyone who uses a product and encounters a mismatch between the product's assumptions and her specific needs possesses sticky information. Previously, only a fraction of these users could act on that information. Now, any user who can describe her need can act on it.
The research von Hippel and Sandro Kaulartz published in 2020 on "next-generation consumer innovation search" is relevant here, though its focus was different. That paper demonstrated that machine learning techniques for natural language understanding could be used to identify early-stage user innovations described in publicly available text on the internet — social media posts, forum discussions, blog entries. The method captured descriptions of need-solution pairs that users had already developed and shared. The language interface extends this logic: if machine learning can identify user innovations described in natural language, it can also build user innovations described in natural language. The move from identifying to building is the move from observation to action, and it is the move that collapses the implementation barrier.
Von Hippel's framework suggests that the sticky-information advantage of user innovators will not be eroded by AI. Manufacturers using AI to anticipate user needs will produce solutions based on aggregated, generalized, de-contextualized information — the kind of information that survives the transfer from user to manufacturer but loses its specificity in transit. The user who builds with AI will produce solutions based on her own contextual, specific, embodied knowledge — the sticky information that resists transfer. The quality of the user's solution will be higher, for her specific needs, than the quality of the manufacturer's solution, because the user's solution was built from better information.
Sticky information remains sticky. What has changed is the cost of acting on it. And that change is sufficient to transform the innovation landscape.
---
In 2001, von Hippel published a paper that proposed a practical bridge between the theory of user innovation and the strategy of firms that wished to benefit from it. The paper, "Toolkits for User Innovation and Design," argued that manufacturers could systematically accelerate user innovation by providing toolkits — integrated sets of design tools that shifted the locus of design from the manufacturer to the user.
The argument was grounded in the sticky-information problem. Manufacturers faced a persistent dilemma: the information necessary to design a product that precisely met a user's needs resided with the user, not the manufacturer, and the cost of transferring that information was high. The conventional approach — iterative cycles of specification, prototyping, testing, and revision, with the manufacturer doing the designing and the user providing feedback — was slow, expensive, and lossy. Each cycle required the user to inspect a prototype and articulate what was wrong, and each articulation was an imperfect translation of the user's tacit knowledge into the manufacturer's design language.
Toolkits offered a structural solution. Instead of trying to extract the user's sticky information and bring it to the manufacturer's design process, the manufacturer would bring the design process to the user. The toolkit would provide the user with the means to design her own solution, using her own sticky information directly, without the intermediate step of transferring that information to someone else.
Von Hippel identified five criteria that an effective user innovation toolkit must satisfy. First, it must enable users to complete design cycles through trial and error — to create a design, test it, observe the results, and modify the design accordingly, without requiring external intervention at any stage. Second, it must offer a solution space large enough to encompass the designs that users actually want to create. A toolkit that constrains users to a narrow set of possibilities defeats its own purpose. Third, it must be user-friendly — accessible to people who possess domain expertise but not necessarily design or engineering expertise. Fourth, it must contain libraries of commonly used modules, patterns, or components that users can incorporate without rebuilding from scratch. Fifth, it must produce outputs that can be used or deployed without further professional translation — the user's design must be the final product, not a specification that requires additional manufacturing steps.
These five criteria defined a design space for toolkits, and the history of software tools can be read as a series of attempts to satisfy them — each attempt satisfying some criteria while falling short on others.
Spreadsheets, beginning with VisiCalc in 1979, satisfied the first criterion superbly. A user could enter a formula, see the result immediately, modify the formula, and see the updated result — a tight trial-and-error loop that required no external intervention. The fourth criterion was partially satisfied by templates and built-in functions. But the second criterion — solution space — was severely constrained. Spreadsheets could model quantitative relationships but could not produce interactive applications, data-driven interfaces, or the kind of integrated systems that most user needs ultimately required. And the third criterion was satisfied only for users whose problems could be expressed in the grid-and-formula paradigm that spreadsheets imposed.
Low-code platforms, which proliferated in the 2010s, expanded the solution space considerably. Users could build database-backed applications, workflow automations, and interactive interfaces without writing traditional code. But the third criterion — user-friendliness — proved stubbornly resistant. Low-code platforms still required users to think in terms of data models, event triggers, conditional logic, and interface components. The vocabulary was simplified relative to traditional programming, but it was still the vocabulary of software engineering, not the vocabulary of the domain in which the user worked. The marketing manager needed to learn to think like a software designer before she could build what she needed. The teacher needed to learn a different set of abstractions than the ones she used to think about pedagogy. The cognitive tax was real, and it was sufficient to exclude the majority of potential user innovators.
Application programming interfaces, developer frameworks, and cloud platforms addressed the fifth criterion — deployability — with increasing sophistication. But each of these tools was designed for users who were already programmers. They expanded the capabilities of technically skilled users without lowering the barrier for the technically unskilled. The population of user innovators grew, but it grew at the margin — more programmers could build more things — rather than at the base.
The language interface satisfies all five of von Hippel's criteria simultaneously. No previous toolkit has done this, and that fact explains the anomalous speed of adoption that The Orange Pill documents. The adoption speed was not driven by marketing, by network effects, or by institutional mandates. It was driven by the recognition, arriving with the visceral immediacy of a phase transition, that the toolkit user innovators had been waiting for had arrived.
Consider each criterion in turn.
Trial and error without external intervention. The user describes what she wants. The machine produces it. The user examines the result, identifies what is wrong, describes the correction, and the machine revises. The cycle time is measured in seconds. No external person — no developer, no designer, no IT department — needs to be consulted at any stage. The user and the machine form a closed design loop that can iterate as many times as necessary to converge on a satisfactory solution. The marketing manager can say, "No, the pipeline view should show deal value, not count," and see the revision immediately. The teacher can say, "The reading level categories need to match my district's assessment framework, not Lexile scores," and see the adjustment in the next response. The trial-and-error cycle is native to the interaction, not an add-on feature.
Solution space. The language interface provides access to the full solution space of general-purpose programming. Anything that can be expressed as software — and the range of what can be expressed as software is enormous and growing — is within the toolkit's solution space. The marketing manager is not limited to CRM templates. She can build any application she can describe. The teacher is not limited to educational technology categories. She can build any classroom tool she can envision. The constraint on the solution space is the user's imagination and descriptive ability, not the toolkit's technical boundaries. This is a qualitative change from every previous toolkit, where the solution space was defined by the toolkit's architecture and the user had to adapt her needs to the toolkit's boundaries.
User-friendliness. The interface is natural language. The user needs no training in software design, no understanding of data models, no familiarity with programming concepts. She needs only the ability to describe what she wants in the language she already uses to think about her domain. The marketing manager describes her CRM in marketing language. The teacher describes her reading tracker in pedagogical language. The architect describes her analysis tool in structural engineering language. The machine performs the translation from domain language to implementation. The cognitive tax that every previous toolkit imposed — the requirement that the user learn to think like a software designer — has been eliminated.
Libraries of reusable modules. Large language models have been trained on the entirety of publicly available code, documentation, and technical discussion. When the user describes a need that has a standard solution — a login system, a database query, a chart visualization — the model draws on its training to produce a module that reflects established best practices. The user does not need to know that these patterns exist. She describes what she needs, and the model selects and adapts the appropriate components. The library is implicit rather than explicit, but it is far larger and more comprehensive than any toolkit library that has ever been explicitly assembled.
Deployable outputs. The language interface produces working code that can be compiled, tested, and deployed. The marketing manager's CRM runs. The teacher's reading tracker functions. The architect's analysis tool produces results. The gap between design and deployment — the gap that historically required a professional developer to bridge — has been closed. The user's description is the specification; the machine's output is the product.
Von Hippel's 2001 paper predicted that toolkits would evolve toward greater flexibility and expressiveness as the underlying technology matured. The prediction was correct, but the form of the evolution was not what the paper envisioned. The paper imagined toolkits becoming more sophisticated within their existing paradigm — better low-code platforms, more powerful configuration engines, richer libraries of pre-built components. What actually happened was a paradigm break: the toolkit ceased to be a structured set of components and became a general-purpose builder controlled by natural language.
This matters for the von Hippel framework because it changes the denominator in the cost-benefit equation that governs user innovation. The benefit of innovating — solving a specific, personal, heterogeneous need — has not changed. The cost has changed categorically. The cost is no longer measured in engineering time or in the cognitive overhead of learning a new design paradigm. It is measured in the time required to have a conversation.
The implications of this cost change are the subject of the remaining chapters, but the toolkit analysis provides the structural reason for the implication that matters most. When the ultimate toolkit arrives — when a single tool satisfies all five criteria for effective user innovation simultaneously — the rate of user innovation is limited only by the rate at which users encounter unmet needs and the rate at which they can describe those needs. Both of these rates are very high. The human experience of mismatch between available tools and actual needs is pervasive, and the human capacity to describe those needs in natural language is, by definition, universal.
The dam has not cracked. It has been removed. What follows is the flood.
---
In 2017, von Hippel published a book that extended his framework in a direction that most innovation economists found uncomfortable. The book was called Free Innovation, and its central claim was this: a large and growing fraction of innovation is produced by individuals who spend their own time and money, build solutions for their own use, and expect no financial return. They do not patent. They do not sell. They do not license. They build because they have a need, and they share because sharing costs them almost nothing and the social reward — recognition, reciprocity, the satisfaction of helping someone who faces a similar problem — is sufficient compensation.
The claim was uncomfortable because it violated the foundational assumption of innovation economics: that innovation requires incentives, and the most reliable incentive is profit. The patent system, the venture capital industry, the entire apparatus of intellectual property law rests on the premise that people will not innovate unless they can capture the economic returns of their innovations. Remove the prospect of profit, and the incentive to innovate disappears. This is the logic that justifies twenty-year patent monopolies, that funds R&D tax credits, that structures the relationship between universities and the companies that license their discoveries.
Von Hippel's data contradicted this logic with a bluntness that left little room for interpretive maneuvering. In national surveys conducted across six countries — the United States, the United Kingdom, Japan, South Korea, Finland, and Canada — von Hippel and his collaborators found that millions of individuals had developed or modified consumer products for their own use in the previous three years. The innovations ranged from the trivial to the significant: modified sporting equipment, adapted kitchen tools, custom software scripts, redesigned garden implements, novel teaching materials. The innovators had spent their own money on materials and their own time on development. The median development cost was modest — a few hundred dollars and a few dozen hours. And the vast majority had no intention of commercializing their innovations.
They were not entrepreneurs. They were not aspiring entrepreneurs. They were people who had encountered a problem, built a solution, and moved on with their lives. The fact that they had innovated was, in many cases, not even salient to them. When survey respondents were asked whether they had "invented or modified a product," many initially said no. Only when prompted with specific examples — "Have you ever modified a tool to work better for your purposes? Have you ever built something to solve a problem at home or at work?" — did they recognize that what they had done constituted innovation.
The invisibility of free innovation is one of its most important features. It does not register in patent databases, in corporate R&D statistics, in government innovation surveys that count only commercially motivated activity. It exists beneath the surface of the measured economy, in the aggregate of millions of small acts of problem-solving that no institution counts because no institution has a reason to count them. Von Hippel's contribution was to count them — to make the invisible visible — and to demonstrate that the aggregate is enormous.
The three innovators described in The Orange Pill are free innovators in this precise sense. The marketing manager did not build her CRM to sell. She built it to use. The teacher did not build her reading tracker to launch an educational technology company. She built it because her students needed something that did not exist. The architect did not build her analysis tool to enter the software market. She built it because her practice required a capability that no commercial product provided. Each invested a few hours and a subscription fee. None expected financial return. Each was motivated by the combination of personal need and the intrinsic satisfaction of solving a problem — the same motivations von Hippel's surveys identified in millions of free innovators across six countries.
What the language interface changes about free innovation is not the motivation. The motivation was always there. What it changes is the cost, and the change in cost has consequences that ramify through the entire structure of the free innovation paradigm.
Consider the cost structure of free innovation before the language interface. A user who wished to build a custom software tool needed either programming skills or access to someone with programming skills. If she possessed the skills herself, the cost was measured in the hours of her own labor — hours that competed with her professional work, her family obligations, her other interests. If she did not possess the skills, the cost was either the time required to learn them (months or years) or the money required to hire someone who had them (hundreds or thousands of dollars). At these costs, only users with the most intense needs — needs that justified the investment of significant personal resources — would innovate.
The language interface reduces the cost to the time required for a conversation. An afternoon. A weekend. A series of evening sessions. The financial cost is a subscription fee that is, for most knowledge workers in developed economies, negligible. At this cost, users with moderate needs — needs that would not have justified learning to code or hiring a developer — find it rational to innovate.
The expansion of the free-innovation population is not linear. It follows the structure of a demand curve: as cost drops, the quantity of innovation demanded increases, but the increase is not proportional. It is convex. At high costs, a small reduction in cost produces a small increase in innovation. At low costs, a small reduction produces a large increase, because the population of users with moderate, previously unmet needs is vastly larger than the population of users with intense needs. The language interface operates at the low end of this curve, where each additional reduction in cost unlocks a disproportionately large increase in innovation activity.
Von Hippel's survey data provides a basis for estimating the magnitude. In the United States alone, his research identified approximately sixteen million consumer innovators active in a three-year period — individuals who had developed or modified a product for personal use. These were innovators who had crossed the threshold at the pre-AI cost structure. The language interface lowers the threshold by one to two orders of magnitude. If the relationship between cost and innovation holds — and four decades of cross-industry data suggests it does — the number of active user innovators in the AI era will be correspondingly larger. Not sixteen million. Possibly ten times that number in the United States alone, and proportionally in every economy where the language interface is accessible.
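The convexity argument can be made concrete with a toy simulation. This is my illustrative sketch, not von Hippel's model: suppose each user's need intensity is drawn from a heavy-tailed distribution, and a user innovates whenever the value of solving her problem exceeds the cost of building a solution. The distribution, the dollar figures, and the population size are illustrative assumptions, not survey data.

```python
import random

random.seed(0)

# Toy model (illustrative assumptions throughout): each user has a "need
# intensity" -- the value, in dollars, of solving her specific problem.
# A Pareto draw makes it heavy-tailed: most needs are moderate, a few intense.
population = [random.paretovariate(1.2) * 50 for _ in range(1_000_000)]

def innovators(cost):
    """A user innovates when the value of her need exceeds the cost of building."""
    return sum(1 for need in population if need > cost)

# Stylized cost levels: hiring a developer or learning to code (~$2,000),
# a low-code platform plus the time to learn it (~$200),
# a language-interface subscription and an afternoon (~$20).
for cost in (2000, 200, 20):
    print(f"cost ${cost:>5}: {innovators(cost):>9,} innovators")
```

Under these assumptions, each tenfold drop in cost unlocks a larger absolute number of new innovators than the drop before it, because the population of moderate needs dwarfs the population of intense ones. That widening gap is the convexity the demand-curve argument describes.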
The aggregate value of these innovations is difficult to measure precisely, for the same reason it was difficult to measure before: free innovations do not generate transactions, and economic measurement systems are designed to count transactions. A custom CRM that one marketing manager builds and uses has no market price. A reading tracker that one teacher builds for her classroom generates no revenue. An analysis tool that one architect builds for her practice does not appear in any software market survey.
But the aggregate value is real. Each of these innovations solves a problem that was previously unsolved — a mismatch between the user's specific needs and the products available to her. The marketing manager who endures a poorly fitting CRM is less productive than the marketing manager who uses one tailored to her workflow. The teacher who lacks a suitable assessment tool is less effective than the teacher who has one calibrated to her students. The architect who compensates manually for the gap between her design tools and her structural requirements is slower and more error-prone than the architect whose custom tool bridges the gap.
Multiply these individual productivity gains by millions of users, and the aggregate economic value of free innovation in the AI era may rival or exceed the value of commercial software production. This is a conjecture, not a measurement, but it is a conjecture grounded in the same structural logic that von Hippel's framework has validated across dozens of industries over four decades. When the cost of innovation drops and the population of innovators expands, the aggregate value of the innovations produced expands with it. The magnitude of the current cost drop suggests an expansion of historic proportions.
Free innovation also exhibits a behavioral pattern that von Hippel documented extensively and that the language interface is likely to amplify: free revealing. The tendency of user innovators to share their innovations openly, without seeking intellectual property protection or financial compensation, is one of the most robust findings in the user innovation literature. Von Hippel's surveys found that the majority of user innovators share their innovations with others — through conversation, through online forums, through informal networks — and that this sharing is motivated by a combination of social reward (recognition, reciprocity, community membership) and rational self-interest (sharing attracts improvements from other users who adopt and modify the innovation).
When the cost of creating an innovation approaches zero, the calculus of free revealing shifts further toward sharing. The sunk-cost motivation to protect an innovation — the instinct that says, "I invested significant resources in building this, so I should capture the returns" — is proportional to the cost of creation. A developer who spent six months building a tool has a strong motivation to protect it. A user who spent an afternoon building the same tool with Claude has a correspondingly weaker motivation. The afternoon's investment does not generate the protective instinct that six months' investment does. The result is that a higher proportion of AI-assisted innovations will be shared, and shared more readily, than innovations produced at higher cost.
The consequence is an acceleration of the horizontal innovation networks that von Hippel identified as the primary channel through which user innovations propagate. A horizontal innovation network is a community of users who innovate, share, and improve upon each other's work without the intermediation of a manufacturer. Open-source software communities are the most visible examples, but von Hippel's research documented similar networks in sporting equipment (windsurfing, mountain biking), scientific instrumentation, medical devices, and numerous other domains. These networks produce innovation at rates that frequently exceed those of manufacturer-funded R&D, because they mobilize a larger, more diverse, more contextually informed pool of creative effort.
The language interface expands both the size and the velocity of these networks. Size increases because the population of users who can contribute innovations is no longer limited to the technically skilled. A teacher who builds a reading tracker and shares it with her colleagues has contributed to an educational innovation network — a network that did not previously exist because the cost of contributing was too high for non-programmers. Velocity increases because the cycle time from need-identification to shareable innovation has compressed from months to hours. An improvement suggested by one member of the network can be implemented and shared by another member in the same day.
Von Hippel's framework predicts that these expanded, accelerated networks will produce an innovation output that is not merely incrementally larger than what came before but structurally different in kind. The difference is in the grain of the innovations. Each innovation is small, specific, precisely fitted to a particular user's particular need. The aggregate is an innovation ecosystem of extraordinary granularity — millions of micro-solutions, each addressing a specific mismatch between a user's needs and the tools available to her, collectively addressing a range of human needs that no centralized innovation system could survey, let alone serve.
The aggregate of free innovations constitutes an economic resource that existing measurement systems are not equipped to value. It does not appear in GDP calculations, in corporate earnings statements, in patent filings, or in any of the metrics that economists and policymakers use to assess innovation output. But it is real, and its value will grow as the cost of innovation continues to fall and the population of free innovators continues to expand. The task of measuring this value — of making the invisible visible, as von Hippel's original surveys did for consumer innovation — is one of the urgent empirical challenges of the coming decade.
---
When millions of individuals innovate freely and share their innovations openly, the aggregate of those shared innovations constitutes a commons — a pool of resources available to all, owned by none, sustained by collective contribution rather than by market incentive or governmental mandate. The concept is borrowed from the literature on natural resource management, where commons — fisheries, forests, pastures, aquifers — have been studied for centuries as cases where individual self-interest and collective well-being intersect, sometimes harmoniously, sometimes catastrophically.
Garrett Hardin's 1968 essay "The Tragedy of the Commons" defined the pessimistic case. If a shared resource is available to all and no one is responsible for its maintenance, each individual has an incentive to extract as much value as possible while contributing as little as possible. The fishery is overfished. The pasture is overgrazed. The resource degrades until it collapses. Hardin's conclusion was that commons could be sustained only through privatization (giving individuals ownership of portions of the resource, thereby aligning individual incentive with resource maintenance) or through state regulation (imposing rules that limit extraction and mandate contribution).
Elinor Ostrom spent her career demonstrating that Hardin's conclusion was empirically wrong — or, more precisely, that it was right only for a subset of commons and under a subset of conditions. Across hundreds of case studies of successfully managed commons — irrigation systems in Spain, fishing cooperatives in Japan, forest management systems in Nepal, water-sharing arrangements in the American West — Ostrom identified a set of institutional design principles that distinguished commons that thrived from commons that collapsed. The principles included clear boundaries (who may use the resource), proportional rules (those who benefit must also contribute), local monitoring (the users themselves, not external authorities, oversee compliance), graduated sanctions (violations are punished, but proportionally, not punitively), and accessible conflict resolution mechanisms.
These principles are relevant to the innovation commons that AI-augmented user innovation is producing, because the innovation commons faces the same structural challenges that natural resource commons face. Not the tragedy of depletion — innovations, unlike fish, are not consumed by use — but the tragedy of degradation: the risk that the commons fills with low-quality, unreliable, or misleading contributions until the cost of finding useful innovations exceeds the benefit of searching.
The risk is not hypothetical. It is already visible in the early stages of AI-augmented innovation sharing. Online repositories of AI-generated code, AI-assisted design templates, and AI-produced educational materials are growing rapidly, and the quality distribution is wide. Some contributions are excellent — well-tested, well-documented, precisely fitted to a real need. Others are superficially plausible but fundamentally flawed: code that runs but produces incorrect results under edge conditions, designs that look professional but fail under stress, educational materials that are articulate but factually wrong. The smooth surface of AI-generated output — the aesthetics of competence without the guarantee of correctness — makes quality assessment harder, not easier, because the traditional signal of quality (visible effort, rough edges that indicate human engagement) is absent.
The open-source software movement provides the most relevant precedent. Open-source commons have thrived for decades, producing software of extraordinary complexity and reliability — Linux, Apache, Firefox, Git — through the collective contribution of user innovators who share their work freely. The governance mechanisms that sustain these commons are, in broad structure, consistent with Ostrom's principles. Clear boundaries: established contribution guidelines and code review processes determine what enters the repository and what is rejected. Proportional rules: contributors who demonstrate sustained commitment earn greater influence over the project's direction. Local monitoring: the community itself — not an external authority — reviews, tests, and validates contributions. Graduated sanctions: contributors whose work consistently fails review lose privileges incrementally, not punitively.
But the open-source precedent has a limitation that the current moment exposes. Open-source communities have historically been populated by technically skilled users — programmers who can read, evaluate, and improve each other's code. The governance mechanisms depend on this technical literacy. Code review works because the reviewers can understand the code. Quality assessment works because the assessors have the expertise to evaluate correctness, efficiency, and maintainability.
When the population of user innovators expands to include millions of people who build through natural language conversation rather than through direct code writing, the governance mechanisms that sustained the open-source commons require adaptation. The marketing manager who builds a CRM and shares it with her colleagues cannot review the underlying code — she did not write it, and she may not be able to read it. The teacher who builds a reading tracker and shares it with her district cannot evaluate whether the code implements the assessment logic correctly — she specified the logic in natural language and received the implementation from a machine. The innovation is shared, but the capacity to evaluate it has not expanded with the capacity to produce it.
This is the governance challenge specific to the AI-augmented innovation commons. The production of innovations has been democratized. The evaluation of innovations has not. The asymmetry between the two creates a structural vulnerability: a commons that grows faster than the community's capacity to assess what it contains.
Von Hippel's research on user innovation communities suggests that self-governance mechanisms will emerge, and historical precedent supports this expectation. Communities develop reputational systems, curation hierarchies, trust networks, and quality signals that enable participants to navigate abundance. The question is whether these mechanisms will emerge fast enough, and at sufficient scale, to prevent degradation during the period of rapid expansion.
Several structural features of AI-augmented innovation communities may support self-governance. First, the innovations are use-tested. Unlike speculative designs or theoretical proposals, user innovations are built to solve the innovator's own problem. The marketing manager's CRM works for her purposes. The teacher's reading tracker functions in her classroom. Use-testing provides a baseline quality signal: the innovation has been validated by at least one user in at least one context. This is not a guarantee of quality — a tool that works for one user in one context may fail for another user in a different context — but it is a stronger signal than no testing at all.
Second, the natural-language descriptions that accompany AI-augmented innovations are, by the nature of the creation process, available as documentation. When the marketing manager describes her CRM needs to Claude, the description itself — the conversation, the iterative refinements, the specific requirements she articulated — constitutes a record of the innovation's intended function. This record is more accessible to non-technical evaluators than source code, because it is expressed in the same natural language that the evaluators use to think about the domain. The teacher who evaluates a colleague's shared reading tracker can read the natural-language specification and assess whether it matches her own needs, even if she cannot read the underlying code.
Third, AI tools themselves can serve as evaluation infrastructure. A user who receives a shared innovation can ask Claude to review the code, test it against specified conditions, explain its behavior, and identify potential failure modes. The same tool that enables non-technical users to build also enables non-technical users to evaluate, within the limits of the tool's reliability. This is not a substitute for expert review. But it raises the floor of evaluation capability in the same way that the language interface raises the floor of building capability.
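The "test it against specified conditions" part of this evaluation loop can be made concrete. The sketch below is illustrative only, with hypothetical names throughout: a recipient who cannot read the shared code states her expectations in her own terms, and a small harness checks the tool against them, treating a crash as a failure signal in its own right.

```python
# Illustrative sketch: a recipient of a shared innovation cannot read its
# code, but she can state expectations in plain terms and check the tool
# against them automatically. All names here are hypothetical.

def evaluate_shared_tool(tool, conditions):
    """Run a shared tool against user-specified conditions.

    `conditions` maps a plain-language description of an expectation to a
    pair (input_value, check), where `check` is a predicate on the output.
    Returns the descriptions of the conditions the tool failed.
    """
    failures = []
    for description, (value, check) in conditions.items():
        try:
            if not check(tool(value)):
                failures.append(description)
        except Exception:
            # A crash under a stated condition is itself a quality signal.
            failures.append(description)
    return failures

# Example: a shared "reading level" helper, evaluated by a teacher who
# states what she expects of it without reading its implementation.
shared_tool = lambda words_per_minute: min(max(words_per_minute // 30, 1), 5)

conditions = {
    "a very slow reader is placed at level 1": (10, lambda lvl: lvl == 1),
    "a fluent reader never exceeds level 5": (400, lambda lvl: lvl <= 5),
    "levels increase with fluency": (90, lambda lvl: lvl >= 3),
}

print(evaluate_shared_tool(shared_tool, conditions))  # → [] (all conditions met)
```

The harness does not prove the tool correct; like use-testing, it raises the floor of evaluation without reaching the ceiling of expert review.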
These features provide a foundation for commons governance, but they are not sufficient alone. The institutional design challenge identified by Ostrom — the construction of rules, norms, and mechanisms that sustain a shared resource over time — remains. The innovation commons that AI-augmented user innovation will produce requires deliberate institutional design: platforms that surface quality signals, community norms that reward accurate documentation and penalize misrepresentation, mechanisms for version control and attribution that allow innovations to be improved without losing the trail of contributions that produced them, and governance structures that scale with the community rather than calcifying at an early stage.
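One of the mechanisms named above, attribution that survives improvement, admits a minimal sketch. This is not a proposal for any existing platform; the record structure and names are hypothetical, intended only to show how a derived version can extend a trail of contributions rather than replace it.

```python
from dataclasses import dataclass

# Minimal sketch of an attribution mechanism: each shared innovation records
# its contributors, and a derived version extends the trail rather than
# replacing it. All names are hypothetical.

@dataclass(frozen=True)
class InnovationRecord:
    name: str
    author: str
    spec: str                 # the natural-language specification
    lineage: tuple = ()       # attribution trail of prior contributors

    def derive(self, name, author, spec):
        """Create an improved version that keeps the trail of contributions."""
        return InnovationRecord(
            name=name,
            author=author,
            spec=spec,
            lineage=self.lineage + (self.author,),
        )

original = InnovationRecord("reading-tracker", "teacher_a",
                            "track reading progress by developmental stage")
improved = original.derive("reading-tracker-v2", "teacher_b",
                           "add per-student progress charts")

print(improved.lineage)  # → ('teacher_a',) — the trail survives the improvement
```

Because the specification travels with the record in natural language, the trail remains legible to the non-technical evaluators the surrounding text describes.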
The open-source model provides a starting point but not a template. The governance of an innovation commons populated by millions of non-technical users sharing natural-language-specified innovations will look structurally different from the governance of a commons populated by thousands of programmers sharing source code. The principles — boundary clarity, proportional contribution, local monitoring, graduated sanctions, accessible conflict resolution — may transfer. The mechanisms that implement those principles will need to be invented.
The analogy to natural resource commons is instructive at one final level. The most successful natural resource commons are not the ones that maximize extraction. They are the ones that sustain the resource over time, balancing current use against future availability, individual benefit against collective resilience. The innovation commons is not a fishery — innovations are not depleted by use — but it can be degraded by noise, by mistrust, by the erosion of the quality signals that enable users to find what they need. The governance challenge is not to prevent overuse but to prevent degradation — to maintain the commons as a space where useful innovations can be found, evaluated, and improved by the community that produces them.
The dams that sustain this commons are not technological. They are institutional. They are the norms, the platforms, the governance mechanisms, the cultural expectations that determine whether a shared resource thrives or drowns in its own abundance.
---
The producer-centered model of innovation — firms invest in R&D, develop products, and bring them to market — has been the dominant organizing framework for industrial economies since the early twentieth century. The model's institutional expression is visible everywhere: in corporate R&D departments, in the patent system, in venture capital financing, in the structure of university technology transfer offices, in the tax incentives that governments offer for private-sector research expenditure. The assumption that innovation originates with producers is not merely an academic theory. It is the structural foundation of an innovation economy worth trillions of dollars.
Von Hippel's research did not so much challenge this model as demonstrate its incompleteness. Producers do innovate. Corporate R&D departments do produce valuable inventions. The patent system does incentivize certain categories of innovation. The model is not wrong. It is partial. It describes one tributary of the innovation river — the producer tributary — and mistakes it for the entire flow. The user tributary was always there, flowing alongside it, often larger, consistently less visible, and almost entirely unmeasured by the institutions designed to monitor innovation output.
The incompleteness of the model was, for decades, a matter of academic interest rather than strategic urgency. User innovation existed, but it existed at a scale and a pace that producers could accommodate. A surgical instrument manufacturer could monitor the modifications that leading surgeons made to its products, incorporate the most promising modifications into the next product generation, and maintain its position as the primary supplier. The time lag between user innovation and manufacturer adoption was long enough that the manufacturer's response — identification, evaluation, engineering, production, distribution — could keep pace. User innovation was a source of ideas. Manufacturers were the means of distribution. The relationship, if unequal, was functional.
The language interface disrupts this relationship at a structural level. When the cost of user innovation drops by orders of magnitude, three things happen simultaneously, each of which threatens a different element of the producer-centered model.
First, the volume of user innovation increases beyond the manufacturer's capacity to monitor and incorporate. Von Hippel's research on lead-user identification was premised on the assumption that lead-user innovations were relatively rare and could be identified through systematic search — surveys, field studies, the "next-generation consumer innovation search" method he developed with Kaulartz using machine learning techniques to scan publicly available text for descriptions of need-solution pairs. These methods were designed for an environment in which user innovations numbered in the hundreds or thousands per industry per year. When the number climbs to hundreds of thousands or millions, the identification problem changes qualitatively. The signal-to-noise ratio deteriorates. The manufacturer cannot find the innovations worth incorporating, because they are buried in an avalanche of innovations that vary enormously in quality, generality, and commercial potential.
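The kind of scan this search method implies can be sketched in miniature. The actual von Hippel–Kaulartz method uses trained machine-learning classifiers; the toy version below substitutes regular expressions, and its patterns and example posts are invented for illustration, but it flags the same signal: text in which a user states a need and a self-built solution together.

```python
import re

# Toy stand-in for need-solution pair search. The real method uses trained
# ML classifiers; this version uses hand-written patterns for illustration.

NEED_PATTERNS = [r"couldn't find", r"no (?:tool|product) (?:did|does)",
                 r"needed a way to"]
SOLUTION_PATTERNS = [r"so i (?:built|made|wrote|modified|rigged)",
                     r"ended up building"]

def flags_need_solution_pair(text):
    """Return True if the text contains both a need signal and a solution signal."""
    t = text.lower()
    has_need = any(re.search(p, t) for p in NEED_PATTERNS)
    has_solution = any(re.search(p, t) for p in SOLUTION_PATTERNS)
    return has_need and has_solution

posts = [
    "I couldn't find a tracker for my classroom, so I built one myself.",
    "Great product, works exactly as advertised.",
    "I needed a way to log field samples offline, so I wrote a small app.",
]

print([flags_need_solution_pair(p) for p in posts])  # → [True, False, True]
```

Even this toy version makes the scaling problem in the paragraph visible: the method surfaces candidates for follow-up, and its usefulness degrades as the candidate stream swells from thousands to millions.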
Second, the time lag between user innovation and user deployment collapses. In the traditional model, the user innovator built a prototype and then waited — for the manufacturer to notice, to evaluate, to engineer a production version, to distribute. The waiting period gave the manufacturer time to respond. The language interface eliminates the waiting period. The user innovator builds and deploys in the same session. The CRM is in use the afternoon it is built. The reading tracker is in the classroom the week it is created. The manufacturer is no longer ahead of the user in deployment capability. For a significant class of innovations, the manufacturer is behind.
Third, and most consequentially, the value of the manufacturer's core offering — standardized functionality delivered at scale — erodes. The marketing manager who builds her own CRM does not need Salesforce's CRM functionality. She may still need Salesforce's data infrastructure, its integration ecosystem, its compliance certifications, its customer support. But the functionality that was the centerpiece of the manufacturer's value proposition — the thing the customer was paying for — is now something the customer can produce herself, tailored to her specific needs, at a fraction of the cost.
This is the producer's dilemma in its sharpest form. The manufacturer's product is being disaggregated by its own customers. The functionality layer — the features, the interfaces, the workflow logic — is being reproduced, and often improved upon, by users who possess the sticky information that the manufacturer lacks. What remains is the infrastructure layer — the data systems, the integrations, the security, the compliance, the reliability guarantees, the scale economies that no individual user can replicate.
Von Hippel's research identified several adaptation strategies that producers have historically employed when user innovation accelerates.
The first strategy is incorporation: monitoring user innovations, identifying the most promising ones, and integrating them into the commercial product. This is the strategy that surgical instrument manufacturers, scientific instrument companies, and sporting equipment firms have employed for decades. It works when the volume of user innovation is manageable and the pace of innovation slow enough for the manufacturer's development cycle to absorb it. In the AI era, the volume and pace may exceed the capacity of this strategy. The manufacturer who tries to incorporate every valuable user innovation into a product update will find herself perpetually behind, because users are innovating faster than any centralized development process can integrate.
The second strategy is toolkit provision: building and maintaining the platforms on which users innovate. This is the strategy that the major AI companies — Anthropic, OpenAI, Google — are currently pursuing. They do not compete with user innovations on functionality. They provide the infrastructure that makes user innovation possible. The language interface itself is a manufacturer's toolkit, designed to shift the locus of innovation from the manufacturer to the user. This strategy is consistent with von Hippel's framework and historically successful: companies that provide superior toolkits attract larger user innovation communities, which produce more innovations, which attract more users, in a self-reinforcing cycle.
The third strategy is service provision: shifting the value proposition from product creation to the maintenance, scaling, and support of user-created innovations. The manufacturer does not build the marketing manager's CRM. She builds it herself. The manufacturer provides the data infrastructure on which the CRM runs, the security certifications that the marketing manager's enterprise requires, the backup and recovery systems that protect against data loss, the integration layer that connects the custom CRM to the company's email system, its accounting software, its customer communication platforms. The value is not in the functionality but in the infrastructure that makes functionality reliable, secure, and interoperable at scale.
The SaaS valuation correction described in The Orange Pill — the "Death Cross" of Chapter 19 — is the market's recognition that the producer-centered model of software value is being repriced. The companies whose value was concentrated in functionality — thin applications that solved singular problems — are losing value because users can now produce that functionality themselves. The companies whose value resides in the infrastructure layer — data, integrations, compliance, trust, ecosystem — are better positioned, because these are precisely the things that user innovators cannot easily replicate.
The historical pattern is instructive. When user innovation accelerated in the sporting equipment industry in the 1980s and 1990s, manufacturers that fought the trend — through patent enforcement against user modifications, through product designs that resisted customization, through marketing campaigns that disparaged user-built alternatives — lost market share to manufacturers that embraced it. The companies that provided platforms for user modification, that incorporated user innovations into their product lines, that built communities around the shared practice of innovation, thrived. The companies that tried to maintain the producer monopoly on innovation found themselves competing not with a single rival firm but with the collective creativity of their entire user base — a competition they could not win.
The AI era presents the same choice at a vastly larger scale. Producers can fight user innovation, through platform lock-in, through proprietary data formats, through terms of service that restrict how users may modify or extend the products they have purchased. Or they can adapt, building the infrastructure that user innovation requires, providing the trust and reliability that individual users cannot easily establish, and positioning themselves as enablers rather than competitors of the user innovation explosion.
The choice is strategic, not moral. Von Hippel's framework does not prescribe what producers should do. It predicts what will happen to producers who make different choices, based on four decades of observing what has happened to producers who made those choices in previous innovation transitions. The prediction is clear: producers who fight the current of user innovation will be swept aside by it. Producers who build the infrastructure for it will thrive.
---
Von Hippel's framework was never limited to software. His earliest studies documented user innovation in physical products — scientific instruments, semiconductor fabrication equipment, sporting goods — and his subsequent work traced the phenomenon across industries as diverse as medical devices, agricultural equipment, and consumer products. The theory's explanatory power derives from its generality: it explains user innovation wherever heterogeneous needs, sticky information, and a favorable cost-benefit ratio coincide. The specific domain is incidental. The structural conditions are what matter.
The Orange Pill focuses predominantly on software, because software is where the language interface first produced its most dramatic effects. Code is the medium most directly amenable to generation through natural-language conversation. The marketing manager's CRM, the teacher's reading tracker, the architect's analysis tool — each is a software artifact produced through dialogue with a machine that translates natural language into executable code. The demonstrations are compelling because they are concrete: the artifacts work, the productivity gains are measurable, the contrast with pre-AI development methods is stark.
But the framework predicts that the language interface will enable user innovation in any domain where the output can be expressed or substantially mediated digitally. Software was first because software is the most direct digital output. What follows is the application of the same structural logic — collapsing innovation cost, expanding the innovator population, releasing latent heterogeneous demand — to domains where the relationship between natural-language description and useful output is less direct but no less consequential.
Consider design. A small business owner who needs a logo, a package design, a store layout, a marketing brochure has historically faced the same cost barrier that the marketing manager faced with software. She could describe what she wanted to a graphic designer, wait for a draft, provide feedback, wait for a revision, and eventually — if the budget held and the designer understood the brief — receive something approximating her vision. The cost was measured in hundreds or thousands of dollars and days or weeks of elapsed time. At that cost, many small business owners settled for generic templates that approximately served their needs without precisely addressing them.
Generative AI image and design tools have begun to lower this barrier, though the reduction is not yet as dramatic as the reduction in software development cost. The small business owner can describe a logo concept in natural language and receive visual options in seconds. The options are not always satisfactory — the gap between verbal description and visual intention is wider than the gap between verbal description and software specification, because visual aesthetics involve subtleties that natural language captures imperfectly. But the iterative cycle of description, generation, evaluation, and refinement approximates the trial-and-error design loop that von Hippel's toolkit criteria require, and the cost has dropped by an order of magnitude. The user who could not afford a designer can now afford to experiment, and experimentation is the soil in which user innovation grows.
Consider education. A teacher's specific pedagogical needs are among the most heterogeneous in any professional domain. No two classrooms are identical. The student population, the curriculum requirements, the school culture, the teacher's own instructional philosophy, the available technology infrastructure — each of these variables interacts with the others to produce a unique instructional context. Standardized educational technology products address the central tendency of this distribution — the "average classroom" that exists in the manufacturer's model but nowhere in reality. Teachers have always adapted, modified, and improvised, but the cost of producing custom instructional materials, assessment tools, and classroom management systems has historically been high enough that adaptation was limited to what a teacher could accomplish with the tools at hand.
The language interface is enabling teachers to build what they need. Not only the reading tracker of the three lead-user stories, but lesson plan generators calibrated to specific student populations, formative assessment tools aligned with specific pedagogical frameworks, interactive exercises designed for specific learning objectives, parent communication systems tailored to specific school contexts. Each of these is a user innovation in von Hippel's sense: a solution built by a user to address a heterogeneous need that no standardized product serves, at a cost low enough to make the innovation rational.
The educational domain illustrates a feature of user innovation beyond software that the software examples do not fully capture: the innovation often involves content as much as code. The teacher's reading tracker is a software artifact, but the assessment framework it implements — the specific criteria, the specific developmental stages, the specific relationship between observed behaviors and instructional responses — is pedagogical content that the teacher contributed from her own expertise. The software is the container. The content is the innovation. The language interface enables both simultaneously, because the natural-language conversation through which the teacher specifies the tool is simultaneously a conversation about the tool's functionality and its pedagogical logic.
Consider legal practice. A lawyer's specific document needs — contracts, agreements, regulatory filings, compliance frameworks — are deeply heterogeneous. No two transactions are identical. Standardized legal templates serve as starting points but invariably require modification, and the modifications are driven by the lawyer's contextual knowledge of the specific transaction, the specific client, the specific regulatory environment, and the specific risk profile. This contextual knowledge is sticky in von Hippel's precise sense: it resides in the lawyer's embodied understanding of the practice and resists extraction through standardized processes.
AI-assisted legal drafting tools are enabling lawyers to build custom documents through natural-language specification rather than through the laborious process of modifying templates by hand. The cost reduction is significant: a contract that previously required hours of drafting and revision can be produced in a fraction of the time. The quality of the output varies — the risk of plausible but incorrect legal language parallels the risk of plausible but incorrect code in software — and the governance challenges are correspondingly more acute, because an error in a legal document can have consequences that an error in a personal productivity tool does not. The domain demands stronger dams, more rigorous quality assurance, more robust institutional oversight. But the structural dynamics are the same: heterogeneous needs, sticky information, collapsing innovation cost, expanding innovator population.
Consider scientific research. A scientist's data analysis needs are specific to her experimental design, her data characteristics, her analytical framework, and the specific hypotheses she is testing. Standardized statistical software provides general-purpose tools, but the application of those tools to a specific dataset — the selection of appropriate methods, the specification of model parameters, the interpretation of results in the context of domain-specific theoretical frameworks — requires the scientist's own contextual knowledge. The language interface enables scientists to build custom analysis pipelines through natural-language specification, reducing the barrier between research design and data analysis. The scientist who previously needed to learn Python or R to implement a custom analysis can now describe the analysis in the language of her discipline and receive an executable implementation.
Across each of these domains, the structural logic is identical. User needs are heterogeneous. The information necessary to address those needs is sticky — embedded in the user's contextual expertise and resistant to transfer. The cost of innovation has dropped dramatically. The population of potential user innovators has expanded correspondingly. And the aggregate of user innovations produced across all of these domains simultaneously constitutes an innovation output that the producer-centered model of innovation cannot account for, because it was designed to measure the producer tributary, not the user tributary, and the user tributary is now in flood.
Von Hippel's framework makes two predictions about the trajectory of user innovation beyond software. The first prediction is convergent: across domains, the dynamics of user innovation will follow the same structural pattern — cost collapse, population expansion, heterogeneity expression — that has been documented in software. The specific mechanisms will differ. The governance challenges will differ. The quality risks will differ. But the underlying economics will be the same, because the economics are driven by structural conditions (heterogeneous needs, sticky information, cost-benefit ratios) that are general to human problem-solving, not specific to any domain.
The second prediction is divergent: the institutional responses required to govern user innovation will differ significantly across domains. The consequences of error in a personal CRM are negligible. The consequences of error in a legal document, a medical protocol, or a structural engineering calculation are potentially severe. The dams required for the innovation commons in high-stakes domains must be correspondingly stronger — not to prevent innovation, which would be both futile and counterproductive, but to ensure that the evaluation, testing, and validation mechanisms keep pace with the production mechanisms. The challenge is not to slow the flood but to ensure that what the flood carries is fit for use.
The language interface does not merely democratize software. It democratizes any form of digital creation. The three stories in The Orange Pill are the first chapter of a longer narrative — a narrative in which millions of people across dozens of domains build solutions to their own specific problems, share those solutions with others who face similar problems, and collectively produce an innovation output that no centralized system of production could match.
The next chapters of that narrative are being written now, in classrooms and law offices and research laboratories and design studios and a hundred other settings where people face problems and have just discovered that the cost of solving them has dropped to the price of a conversation. Von Hippel's framework does not predict whether the outcome will be utopian or dystopian. It predicts that the outcome will depend on the institutional structures — the dams, the commons governance, the quality mechanisms, the norms of sharing and attribution — that are built during the current period of explosive growth. The tools are here. The innovations are coming. The question is whether the institutions that sustain the commons will be built before the commons is overwhelmed by its own abundance.
The most consequential implication of von Hippel's framework applied to the AI moment is not about speed. It is not about cost. It is about diversity — the sheer range of what gets built when the barrier between need and solution is reduced to the cost of a conversation.
Innovation economics has long operated with an implicit assumption about the distribution of human needs. The assumption is that needs cluster. That most people in a given market want approximately the same thing, with variations at the margins. The manufacturer's job is to identify the center of the cluster and build a product that serves it, accepting that users at the margins will be underserved but trusting that the center is large enough to sustain a business. This assumption — the assumption of need homogeneity — is the structural foundation of mass production, mass marketing, and the standardized product categories that organize consumer economies.
Von Hippel's research demonstrated that this assumption is, for many categories of products and an even larger range of professional tools, dramatically wrong. User needs are not normally distributed around a central tendency. They are heterogeneous in a way that defies the clustering assumption. The marketing manager's workflow is not a minor variation on a standard workflow. It is a specific configuration of tasks, priorities, information flows, and decision criteria that reflects her particular industry, her particular organization, her particular role, and her particular cognitive style. The teacher's assessment needs are not a minor variation on a standard assessment framework. They reflect her specific student population, her specific pedagogical philosophy, her specific curricular context, and her specific understanding of what developmental progress looks like in the twenty-three specific children she teaches.
The heterogeneity is not marginal. It is foundational. The "average user" whose needs the standardized product is designed to serve does not exist. She is a statistical fiction — a central tendency computed from a population of individuals whose actual needs differ from the average and from each other in ways that matter to their daily work.
Manufacturers have historically responded to heterogeneity through two mechanisms. The first is segmentation: dividing the market into subgroups whose needs are sufficiently similar to be served by a targeted product variant. Enterprise CRM for large sales organizations. Small business CRM for companies with fewer than fifty employees. Educational CRM for schools and universities. Each segment receives a product variant that is closer to its needs than the generic product would be, though still not precisely fitted to any individual user's specific requirements. The second mechanism is customization: providing configuration options that allow users to adjust the product within boundaries defined by the manufacturer. Custom fields in the CRM. Adjustable display options. Configurable workflow rules. The user adapts the product to her needs, but only within the degrees of freedom the manufacturer has anticipated and provided.
Both mechanisms are constrained by the manufacturer's capacity to observe and serve heterogeneity. Segmentation requires the manufacturer to identify the relevant dimensions of variation and design product variants accordingly — a process limited by the manufacturer's understanding of user needs, which is itself limited by the stickiness of user need-information. Customization requires the manufacturer to anticipate the dimensions along which users will wish to adjust the product — an anticipation that is necessarily incomplete, because the manufacturer cannot foresee every configuration of needs that the heterogeneous user population will present.
The language interface removes both constraints. The user does not need the manufacturer to observe her heterogeneity. She expresses it directly, in natural language, to a machine that translates the expression into a working solution. She does not need the manufacturer to anticipate the dimensions of customization she requires. She specifies them herself, in real time, in the course of building. The solution space is not limited by what the manufacturer has foreseen. It is limited only by what the user can describe.
This is the structural change that produces what might be called the heterogeneity explosion: the sudden, dramatic expansion of the range of solutions that exist in the world, driven not by any increase in human creativity or any change in human needs, but by the removal of the cost barrier that previously prevented the vast majority of heterogeneous needs from being addressed.
The magnitude of the latent heterogeneity — the unmet needs that users have been enduring because the cost of addressing them exceeded the benefit — can be estimated from von Hippel's survey data. In the United States, his research identified approximately sixteen million consumer innovators active in a three-year period. These were users whose needs were intense enough to justify innovation at the pre-AI cost structure — a cost measured in dozens of hours and hundreds of dollars. For every user whose need was intense enough to cross this threshold, there were many more users whose needs were real but not intense enough to justify the investment. The ratio of latent to expressed heterogeneity — the number of users with unmet needs to the number who actually innovated — is unknown but almost certainly large. Von Hippel's research in several domains suggested that the population of users who perceive room for improvement in the products they use outnumbers the population who actually innovate by a factor of five to ten.
If this ratio holds, the language interface — which reduces the cost of innovation by one to two orders of magnitude — will bring a correspondingly large population of latent innovators above the threshold. Not sixteen million consumer innovators in a three-year period, but potentially a hundred million or more. Each one building a solution tailored to her specific needs. Each solution different from every other, because the needs that drive them are different.
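The arithmetic behind this projection can be made explicit. A minimal sketch using only the figures quoted above — sixteen million expressed innovators and a latent-to-expressed ratio of five to ten. The assumption that the entire latent pool crosses the threshold is the text's, not measured data:

```python
# Back-of-envelope projection of the latent innovator population.
# Figures come from the surrounding text; the assumption that the
# full latent pool is brought above the cost-benefit threshold is
# the argument's premise, not survey data.

EXPRESSED_INNOVATORS = 16_000_000        # von Hippel's US survey, 3-year period
LATENT_RATIO_LOW, LATENT_RATIO_HIGH = 5, 10   # latent : expressed

def projected_innovators(expressed, ratio):
    """Population above the threshold if the latent pool can now act."""
    return expressed * ratio

low = projected_innovators(EXPRESSED_INNOVATORS, LATENT_RATIO_LOW)
high = projected_innovators(EXPRESSED_INNOVATORS, LATENT_RATIO_HIGH)
print(f"{low:,} to {high:,} potential innovators")
# 80,000,000 to 160,000,000
```

The range of eighty to one hundred sixty million is what licenses the text's phrase "a hundred million or more."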
The aggregate of these solutions constitutes an innovation ecosystem whose diversity dwarfs anything in the history of human tool-making. Not because each individual innovation is revolutionary — most will be modest, practical, precisely fitted to a specific need and uninteresting to anyone who does not share that need — but because the range of needs addressed will be, for the first time in economic history, commensurate with the actual heterogeneity of human requirements.
This explosion has structural consequences that extend beyond the innovation itself.
The first consequence is a transformation of the relationship between production and consumption. In the mass-production economy, the consumer's role was to choose among options the producer had defined. The range of choice was wide — supermarket shelves offered thirty varieties of cereal, software markets offered dozens of CRM platforms — but the choice was always among pre-existing options. The consumer selected. She did not create. The heterogeneity explosion dissolves this boundary. The user who builds her own CRM is simultaneously a producer and a consumer. She does not select from a menu. She creates the menu item. The distinction between producing and consuming, which was never as clean as the conventional model assumed — users have always modified, adapted, and improvised — becomes, for a significant class of goods and tools, incoherent.
The second consequence is a challenge to the metrics by which economies measure innovation output. Patent counts, R&D expenditure, new product introductions, venture capital investment — each of these metrics measures producer innovation. None measures user innovation, and none is designed to capture the heterogeneity explosion. A hundred million users building custom solutions to their own specific problems will not register in patent databases, because the solutions are too specific and too practical to warrant the cost of patenting. They will not register in R&D statistics, because no corporate R&D budget funded them. They will not register in new product surveys, because they are not products — they are personal tools, built for personal use, visible only to the user and perhaps to the small community with whom she shares them.
The economic value of this invisible innovation output is real but unmeasured. Each custom solution represents a productivity gain — the difference between using a poorly fitted standardized tool and using a precisely fitted custom tool. Multiply that gain by a hundred million users, and the aggregate productivity impact is substantial. But it is invisible to the instruments that economists use to observe the economy, in the same way that household production — cooking, cleaning, childcare — is invisible to GDP calculations despite constituting an enormous share of the economy's actual output.
Von Hippel's original contribution was to make visible the user innovation that existing measurement systems could not see. The heterogeneity explosion extends this challenge by an order of magnitude. The innovation is not merely uncounted. It is uncountable by existing methods, because each innovation is unique, each is valuable only in its specific context, and no aggregation method designed for standardized products can capture the value of millions of bespoke solutions.
The third consequence is a reshaping of competitive dynamics. When users can build precisely what they need, the manufacturer's standardized product must justify its existence on grounds other than functionality. The CRM that the marketing manager builds herself is, for her specific purposes, superior to any commercial CRM — because it was built from her own sticky information, calibrated to her own workflow, uncompromised by the design trade-offs that any product serving a heterogeneous market must make. The commercial CRM survives not because it is functionally superior but because it offers things the user-built solution cannot: reliability at scale, security certifications, integration with other enterprise systems, customer support, the institutional trust that comes from a recognized brand and a track record of service.
The competitive advantage shifts from functionality to infrastructure. The producer who competes on functionality competes with every user in her market — a competition that becomes increasingly unwinnable as user innovation tools improve. The producer who competes on infrastructure — on the reliability, security, interoperability, and trust that user-built solutions cannot easily achieve — competes in a domain where producer advantages remain durable.
The heterogeneity explosion is the deepest structural consequence of the AI moment, viewed through the lens of von Hippel's framework. The standard narrative of AI and innovation emphasizes speed — things get built faster — and cost — things get built more cheaply. These are real and important. But they are symptoms of a more fundamental change: the liberation of human heterogeneity from the constraints of standardization. For the first time in economic history, the range of solutions that exist in the world can begin to match the range of needs that exist in the world. The gap between what people need and what is available to them — a gap that has persisted throughout the history of manufactured goods — is narrowing, not because manufacturers have gotten better at serving heterogeneous needs, but because users have gained the means to serve their own.
Von Hippel's framework provides the vocabulary for understanding this transformation — heterogeneous needs, sticky information, cost-benefit thresholds, lead users, free innovation — because the framework was built to explain exactly this phenomenon, at exactly this scale, long before the technology arrived to produce it. The fact that the theory preceded the evidence, and that the evidence confirms the theory, is the strongest argument for taking the theory's predictions about what comes next seriously.
What comes next is a world in which the binding constraint on innovation is not capability but imagination. Not "Can I build this?" but "Can I conceive of this?" — and the answer to that question depends not on any toolkit, however powerful, but on the quality of the questions the user brings to the conversation.
---
The metaphor of the flood is imprecise in one respect and precisely right in another. It is imprecise because a flood implies destruction — the overwhelming of structures by a force that cannot be controlled. The innovation flood that von Hippel's framework predicts is not inherently destructive. User innovation has, across every domain in which it has been studied, produced outcomes that are on balance positive: solutions more precisely fitted to user needs, innovations more rapidly distributed through horizontal networks, economic value more broadly generated across the population.
The metaphor is precisely right because a flood, even a beneficial one, overwhelms existing infrastructure. A river in flood fertilizes the plain but also destroys the levees that protected the settlements along its banks. The question is not whether the flood will come — it is already here — but whether the infrastructure will be rebuilt, adapted, and extended fast enough to direct the flow toward benefit rather than chaos.
Von Hippel's forty years of research provide a structural framework for understanding what the infrastructure must look like, what principles should guide its design, and where the greatest risks of failure lie.
The first risk is quality degradation. When the cost of producing innovations approaches zero, the cost of producing bad innovations also approaches zero. The same language interface that enables a teacher to build a well-designed reading tracker enables another user to build a superficially plausible tool that implements assessment logic incorrectly. The AI-generated output is smooth — grammatically correct code, professional-looking interfaces, confident documentation — and the smoothness conceals defects that only careful evaluation can reveal.
In domains where the consequences of error are personal and limited — a marketing manager's CRM that miscalculates a metric, a personal productivity tool that sorts tasks incorrectly — the quality risk is manageable. The user discovers the error through use and corrects it. The feedback loop is tight, the cost of failure is low, and the self-correcting dynamics of trial and error operate effectively.
In domains where the consequences of error are shared and potentially severe — educational tools that shape how children are assessed, legal documents that structure binding agreements, analysis tools that inform structural engineering decisions, medical protocols that guide clinical practice — the quality risk is acute. An error in a shared educational tool may propagate through an entire school district before anyone detects it. An error in a legal document may create liability that the user-innovator never anticipated. An error in a structural calculation may remain hidden until the building is loaded beyond the tolerance that the correct calculation would have revealed.
The governance mechanisms that existing innovation systems have developed — peer review in science, quality assurance in manufacturing, regulatory approval in medicine and engineering, malpractice liability in law — were designed for a world in which the production of innovations was expensive and therefore relatively scarce. The expense served, inadvertently, as a quality filter: the cost of producing an innovation ensured that only innovations backed by significant investment reached the point of deployment, and the investment created an incentive to ensure quality before deployment.
When production cost approaches zero, this inadvertent filter disappears. The governance mechanisms must be rebuilt to operate in a high-volume, low-cost environment — an environment in which the number of innovations seeking evaluation vastly exceeds the evaluation capacity of any existing institution.
Von Hippel's research on user innovation communities suggests a structural approach to this problem. Communities develop their own quality mechanisms — reputational systems that identify reliable contributors, curation processes that surface the most useful innovations, testing protocols that validate functionality before dissemination. These mechanisms are endogenous to the community rather than imposed from outside, and they tend to be more adaptive, more responsive to the specific characteristics of the innovations they evaluate, than exogenous regulatory systems designed for a different era of innovation production.
The open-source software movement provides the clearest precedent. Major open-source projects maintain quality through a combination of automated testing (code that is tested against a suite of expected behaviors before it enters the repository), peer review (contributions are examined by experienced community members before they are accepted), and graduated trust (contributors earn broader permissions as their track record demonstrates reliability). These mechanisms are not perfect. Major open-source projects have let through vulnerabilities that persisted for years before detection. But they are effective enough, at sufficient scale, to produce software that runs critical infrastructure worldwide.
The challenge is extending these mechanisms to communities populated by non-technical users who build through natural language rather than through direct code writing. Automated testing can be adapted: the language interface can generate test suites alongside the code it produces, and the user can validate the output against her expectations without reading the code. Peer review can be adapted: community members can evaluate shared innovations by examining the natural-language specifications, testing the outputs against their own needs, and reporting results. Graduated trust can be adapted: users who consistently share reliable innovations earn recognition that signals quality to others in the community.
But adaptation is not automatic. It requires deliberate institutional design — the construction of platforms, norms, and mechanisms that enable quality assessment at the scale the heterogeneity explosion will produce. The design challenge is proportional to the scale of the flood, and the scale is unprecedented.
The second risk is enclosure. The innovation commons that user innovation produces is a shared resource, and shared resources are vulnerable to enclosure — the appropriation of the commons by private interests who restrict access to extract profit. The history of commons enclosure is long and instructive, from the literal enclosure of English common lands in the eighteenth century to the enclosure of the digital commons through proprietary platforms that capture user-generated content behind terms of service that transfer ownership from the creator to the platform.
The AI-augmented innovation commons faces enclosure risk from multiple directions. Platform companies that provide the language interface may claim rights over innovations produced using their tools, through terms of service that most users do not read. Aggregators may collect user-shared innovations, package them into proprietary products, and sell them back to the community that produced them. Patent trolls may identify patentable elements in user innovations and assert claims against the innovators themselves.
Ostrom's institutional design principles provide guidance for resisting enclosure. Clear boundaries — explicit norms about who owns what, codified in terms of service that protect the innovator's ownership of her innovations. Proportional rules — users who contribute to the commons benefit from the contributions of others; users who only extract are identified and their access is limited. Local monitoring — the community itself maintains awareness of enclosure threats and mobilizes to resist them. Accessible conflict resolution — disputes about ownership, attribution, and access are resolved through mechanisms that the community trusts and that operate at the speed the community requires.
The third risk, and the one that von Hippel's framework identifies as the most structurally important, is the failure of institutional adaptation. The five-stage pattern that The Orange Pill identifies — threshold, exhilaration, resistance, adaptation, expansion — places the current moment squarely in the adaptation phase. The threshold has been crossed. The exhilaration of early users is well documented. The resistance of established producers and threatened specialists is visible. What determines whether the trajectory bends toward expansion or toward some less benign outcome is the quality of the institutional response during the current phase.
The historical evidence is mixed. Some technology transitions produced rapid institutional adaptation: the printing press was followed, within a generation, by the development of copyright law, library systems, and academic publishing norms that directed the flood of printed material toward productive use. Other transitions produced institutional failure: the early industrial revolution produced generations of worker exploitation before labor laws, workplace safety regulations, and collective bargaining established the institutional framework that eventually directed the gains toward broader benefit.
The determining variable, across these historical cases, is whether the institutional response was proactive or reactive — whether the dams were built before the flood reached its peak or after the damage was already done. Proactive institutional design requires foresight: the ability to anticipate the structural consequences of a technological transition before those consequences become visible in the form of crises. Reactive institutional design is crisis-driven: the institution is built in response to a failure — a factory collapse, a financial crisis, a public health emergency — that demonstrates the need for governance that should have been established earlier.
Von Hippel's framework provides the foresight that proactive institutional design requires. The framework predicts, with considerable precision, the structural dynamics of the current transition: the expansion of the innovator population, the explosion of heterogeneous solutions, the formation of innovation communities, the strain on quality-assurance mechanisms, the vulnerability of the commons to enclosure, the need for governance structures that are endogenous to the communities they serve rather than imposed from outside. These predictions are not speculative. They are derived from four decades of empirical observation across dozens of industries, validated by the consistency of the pattern across domains as different as surgical instruments and windsurfing equipment.
The research tradition that von Hippel established makes one claim that is more important than any specific prediction. The claim is empirical rather than ideological: innovation is not the exclusive province of corporations, governments, or credentialed experts. It is a distributed human capacity, expressed wherever people face problems and possess the means to solve them. The means have just become, for the first time in human history, nearly universal. The consequences of that universality will depend not on the technology that enabled it but on the institutions that channel it — the norms, the platforms, the governance mechanisms, the quality systems, the cultural expectations that determine whether a flood fertilizes or destroys.
The toolkits are here. The innovators are building. The flood has begun. The institutional infrastructure — the commons governance, the quality mechanisms, the enclosure protections, the evaluation systems — is the work that remains. It is the work that will determine whether the next decade is remembered as the moment when innovation truly became democratic, or as the moment when the promise of democratization was drowned by the consequences of building without the institutional structures that make building sustainable.
The data does not resolve this question. It frames it, with the precision that forty years of empirical research provides. The resolution is not a matter of prediction. It is a matter of construction — of building the institutional dams that direct the innovation flood toward the broad, fertile plain rather than the unprotected settlement.
The construction is underway. Whether it will be completed in time is the empirical question of the decade.
---
Sixteen million people were already building things for themselves, and nobody noticed.
That is the number that rearranged my understanding — not of AI, not of the technology industry, but of what had been happening all around me for decades while I was looking at the wrong part of the river. Sixteen million consumer innovators in the United States alone, building and modifying products for their own use, counted in von Hippel's national surveys. Teachers bending their classroom tools. Surgeons modifying their instruments. Parents jerry-rigging solutions for problems no manufacturer had bothered to see. Sixteen million people, innovating without venture capital, without R&D budgets, without patents, without anyone calling what they did innovation — because the word had been captured by an industry that believed innovation was its exclusive property.
I was part of that industry. I believed it too.
What von Hippel's framework gave me, chapter by chapter, was something I did not expect from an economist: a correction of my arrogance. I had spent my career assuming that the builders were on one side of a line and the users were on the other, and that the builders' job was to make tools good enough that the users would adopt them. The entire institutional apparatus I inhabited — the product roadmaps, the user research, the feature prioritization, the A/B tests, the carefully staged rollouts — was designed around the premise that innovation flows from us to them.
The data says it flows in both directions, and the direction I had been ignoring was, in many domains, the larger flow.
The concept I keep returning to is sticky information — the knowledge that lives in the hands of the person closest to the problem and resists transfer no matter how many focus groups you run. When I described Napster Station's requirements to Claude and received a working prototype, the thing that made it work was not the machine's capability. It was that I did not have to translate. The sticky information — my specific, contextual, hard-to-articulate understanding of what the product needed to feel like — went directly from my experience to the implementation without passing through the distortion of a specification document, a handoff meeting, a requirements review. The stickiness did not disappear. The cost of acting on it did.
That distinction matters for everything that follows from this moment. The language interface did not make people more creative. It did not make their needs less heterogeneous. It did not invent the impulse to build. It removed the cost that had been preventing millions of people from acting on what they already knew. The creativity was always there. The needs were always there. The building impulse was always there. What was missing was a toolkit that met people where they already stood — in their own language, with their own knowledge, facing their own specific, irreplaceable, stubbornly particular problems.
That toolkit has arrived. And the flood it releases will be composed not of identical copies of the same solution but of millions of different solutions, each one shaped by the specific contours of a specific human need. The heterogeneity explosion is not a metaphor I would have reached for on my own. It is von Hippel's contribution, earned through decades of counting what others overlooked, and it names the thing about this moment that I find most hopeful and most demanding at the same time.
Hopeful, because a world in which the range of solutions matches the range of human needs is a world in which fewer people endure the quiet friction of tools that almost work. Demanding, because the institutions that must govern this abundance — the quality mechanisms, the commons governance, the protections against enclosure — do not yet exist at the scale the flood requires. The dams need building, and the people who understand what is coming have an obligation to build them.
I am building. If you have read this far, I suspect you are too.
-- Edo Segal
Sixteen million Americans were already innovating — modifying products, building solutions, solving problems no manufacturer had noticed — and the innovation economy couldn't see them. Eric von Hippel spent four decades counting what everyone else overlooked: that users, not producers, are the primary source of innovation in industry after industry. Now AI has collapsed the cost of building by orders of magnitude, and the pent-up creativity of millions is flooding into the open.
This book applies von Hippel's empirical framework to the AI revolution and reveals something the standard narrative misses. The explosion we are witnessing is not a gift from Silicon Valley to the world. It is the release of latent human capability that was always there, blocked by cost, unlocked by a toolkit that finally meets people in their own language with their own knowledge.
The flood is here. The institutions that must govern it are not. Von Hippel's research tells us exactly what to build — and what happens if we don't.
-- Eric von Hippel

A reading-companion catalog of the 13 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Eric von Hippel — On AI uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →