By Edo Segal
The decision I made fastest was the one that deserved the most deliberation.
Trivandrum. Twenty engineers. A hundred dollars a month per seat. I walked into that room with a plan to transform how my team works. By Friday, the transformation was real — measurable, repeatable, extraordinary. I wrote about it in *The Orange Pill* as a breakthrough. I still believe it was.
But Sheila Jasanoff's work introduced a question I had not thought to ask: Who decided that this transformation should happen to these people, and on what authority?
Not legal authority. I had that. Not technical authority. The results spoke for themselves. Democratic authority. The kind that exists only when the people whose professional identities, daily routines, and sense of what their expertise means were reorganized in five days have had a genuine voice in whether and how that reorganization occurred.
I did not ask. It did not occur to me to ask. That is the fishbowl Jasanoff cracks open.
She is not a technologist. She is a scholar of how societies govern sciences and technologies they do not fully understand, and she has spent four decades showing that the people who build powerful things are structurally unable to see certain consequences of those things — not because they are careless, but because their way of knowing admits certain evidence and excludes the rest. The builder sees the capability gain. The builder does not see, except as abstraction, what capability gains cost the people who absorb them without institutional recourse.
Her concepts — co-production, civic epistemology, sociotechnical imaginaries, technologies of humility — are not decorative academic language. They are diagnostic instruments. They reveal the architecture of decisions that look neutral but carry embedded assumptions about whose knowledge counts, whose experience matters, and who gets to shape the future that everyone inherits.
I needed this lens because *The Orange Pill* is a builder's book, written from inside the builder's fishbowl. The view from here is real. It is also partial. Jasanoff shows what the partiality hides: that governance adequate to this moment cannot be built by builders alone, that the people downstream of every dam deserve a voice in where it is placed, and that legitimacy — not just competence — is what separates construction from imposition.
The AI discourse is saturated with technical voices arguing about capabilities and risks. Jasanoff asks the question underneath: Who decides? That question will outlast every model version, every benchmark, every quarterly earnings call. It is the question democracy was built to answer. Whether our institutions can answer it for AI is the test of this generation.
— Edo Segal × Opus 4.6
Sheila Jasanoff (b. 1944) is an Indian-born American scholar of science and technology studies, the Pforzheimer Professor of Science and Technology Studies at Harvard University's John F. Kennedy School of Government, where she founded the Program on Science, Technology, and Society. Trained in mathematics at Harvard, linguistics at the University of Bonn, and law at Harvard Law School, Jasanoff developed foundational concepts in the governance of science and technology, including co-production (the idea that scientific knowledge and social order are made simultaneously), civic epistemology (the culturally embedded ways different societies validate knowledge claims), sociotechnical imaginaries (collectively held visions of desirable futures shaped by science and technology), and technologies of humility (institutional practices designed to acknowledge uncertainty and incorporate diverse knowledge in governance). Her major works include *The Fifth Branch: Science Advisers as Policymakers* (1990), *Science at the Bar: Law, Science, and Technology in America* (1995), *Designs on Nature: Science and Democracy in Europe and the United States* (2005), and *The Ethics of Invention: Technology and the Human Future* (2016), as well as the co-edited volume *Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power* (2015). Recipient of the Bernal Prize and numerous other honors, Jasanoff is widely regarded as one of the most influential thinkers on the democratic governance of science and technology, whose insistence that legitimacy requires participation — not merely expertise — has become increasingly urgent in the age of artificial intelligence.
In April 2024, Sheila Jasanoff sat in a recording studio at Harvard and made an observation that should have stopped the AI governance conversation in its tracks. She had been studying risk for decades, she noted, and what struck her about the AI debate was a peculiar asymmetry: the threats were described in the language of extinction, vast and existential and cinematically vague, while the promises were described with exquisite specificity — productivity gains measured to the decimal, adoption curves plotted by the week, revenue projections extending confidently into a future that nobody could actually see. "There's a disconnect," she said, "between the kind of talk we hear about threat and the kind of specificity we hear about the promises."
That asymmetry is not an accident. It is a governing structure. It determines what gets built, what gets funded, what gets regulated, and what gets ignored. And it operates precisely because most people have not noticed it is there.
Jasanoff's career has been organized around a single, uncomfortable proposition: that the people who know the most about a technology are not necessarily the people best equipped to decide how it should be governed. Not because their knowledge is wrong, but because it is partial — brilliant within its domain and structurally blind outside it. The expert knows what the system can do. The citizen knows what the system does to her. These are different orders of knowledge, and a governance framework that admits only the first while excluding the second has already failed before the first regulation is drafted.
The distinction sounds simple. It is not. It cuts against the deepest assumption of modern technological culture: that competence confers authority, that the people who build powerful things are the natural governors of those things, that understanding the mechanism is sufficient for understanding the consequences. This assumption is so pervasive that it rarely presents itself as an assumption at all. It presents itself as common sense. Of course the people who understand AI should lead the conversation about AI. Who else would?
Jasanoff's answer, developed across four decades of studying how societies govern technologies they do not fully understand, is: everyone else. Not instead of the experts. Alongside them. Because the consequences of a technology are not contained within the domain of the people who built it. They radiate outward, into workplaces and classrooms and dinner tables and the quiet hours when a parent lies awake wondering whether the ground will hold for her children. And the people in those spaces possess knowledge that no benchmark can capture and no safety audit can replicate.
Consider the AI debate as it actually unfolded in 2025 and 2026. The dominant voices belonged to a specific community: AI researchers, company executives, venture capitalists, and the policy professionals who orbit them. These voices discussed capabilities, architectures, alignment protocols, safety benchmarks, and competitive dynamics with genuine sophistication. They understood the technology in ways that most citizens could not, and their understanding was real and valuable.
But their understanding was also bounded by the fishbowl in which it was produced. The AI researcher understands how a large language model generates text. The AI researcher does not understand, except as an abstraction, what it means for a senior software architect to watch twenty-five years of hard-won expertise become economically marginal in a single quarter. The venture capitalist understands the market dynamics of the SaaS death cross. The venture capitalist does not understand, except as a line item, what it means for a family when the primary earner's profession is repriced overnight. The policy professional understands regulatory frameworks. The policy professional does not understand, except through survey data, what it feels like to check your phone compulsively at dinner because the AI tool on the other side of the screen has made it impossible to distinguish between productivity and addiction.
Segal captured this experiential knowledge with unusual honesty in *The Orange Pill*. His description of the Trivandrum training — twenty engineers whose job descriptions changed in a week, whose professional identities were reorganized around a tool they had not chosen and about whose deployment they had not been consulted — is a case study in what Jasanoff would call the epistemic exclusion of affected communities. The engineers adapted. Some thrived. The senior engineer who spent two days oscillating between excitement and terror eventually found his footing. But at no point in the narrative was the question asked: Did anyone ask these twenty people whether the twenty-fold productivity gain was what they wanted? Whether the erasure of the boundary between backend and frontend, between specialist and generalist, between the work they had trained for and the work they were now expected to do, was a transition they consented to or a fait accompli presented as an opportunity?
The answer, visible in the structure of the narrative itself, is that the question was not asked because it was not recognized as a question. The gain was assumed to be self-evidently good. The builder's fishbowl contains a specific assumption: that expanding capability is always desirable, that removing friction is always beneficial, that the people whose work is transformed by a new tool will recognize the transformation as liberation rather than displacement, or at least that they should. This assumption is not malicious. It is structural. It is the water the builder swims in, and Jasanoff's life work has been the study of that water.
Her concept of "civic epistemology" provides the analytical tool. Different communities — different nations, different professional cultures, different social groups — have different ways of producing and validating knowledge. Silicon Valley's civic epistemology privileges demonstration: if you can build it and show it works, that constitutes sufficient justification for deploying it. The demonstration is the argument. The working prototype is the evidence. This epistemology is extraordinarily powerful for producing innovation. It is catastrophically inadequate for governing it.
A civic epistemology that privileges demonstration has no mechanism for incorporating the knowledge of people who cannot demonstrate anything because their knowledge is experiential rather than technical. The senior software architect who feels like a master calligrapher watching the printing press does not possess knowledge that can be demonstrated on a screen. His knowledge is in his body — in the thousands of hours of debugging that deposited layers of intuition, in the architectural sense that lets him feel a codebase the way a doctor feels a pulse. That knowledge is real, and it is relevant to any honest assessment of what AI costs, and it is systematically excluded by an evidentiary framework that recognizes only what can be measured, benchmarked, or shipped.
Jasanoff's insistence on the citizen's knowledge is not populism. It is not the claim that everyone's opinion is equally valid regardless of expertise. It is the more precise and more uncomfortable claim that governance requires multiple kinds of knowledge, that technical knowledge alone produces technically competent but democratically illegitimate decisions, and that the legitimacy of a governance framework depends on whether it can hold expert knowledge and experiential knowledge simultaneously without collapsing either into the other.
The AI governance frameworks currently under construction — the EU AI Act, the American executive orders, the emerging frameworks in Singapore and Brazil and Japan — are almost entirely expert-driven. They classify risks using technical taxonomies. They impose requirements using regulatory instruments designed for a previous generation of technologies. They address the supply side — what companies may and may not build — while leaving the demand side, the citizens who live with the consequences, almost entirely unaddressed. Segal identified this gap in *The Orange Pill* and recognized it as dangerous. Jasanoff's framework explains why the gap persists: the governance institutions admit only one kind of knowledge, and the kind they admit is the kind that the builders produce.
The result is a governance landscape in which the most important questions — What does this technology do to the people who use it? What does it cost in cognitive autonomy, professional identity, creative depth? Who bears the burden of the transition, and who captures the gains? — are treated as externalities rather than as the central subjects of governance. They are acknowledged in preambles and ignored in provisions. They appear in commissioned reports and vanish from binding regulations. They are spoken about at conferences and absent from the frameworks that determine how AI actually enters the classroom, the workplace, the home.
This is not a failure of intention. It is a failure of epistemic architecture. The governance institutions are built to process one kind of input — technical risk assessment, economic impact analysis, safety benchmarking — and they process that input with genuine competence. But the inputs they cannot process, the experiential knowledge of affected communities, the qualitative evidence of cultural transformation, the felt sense of a world reorganizing faster than any individual can track — these inputs have no port of entry. They accumulate outside the institutional walls, in Substack posts and dinner-table anxieties and the quiet terror of a parent whose twelve-year-old asks "What am I for?" — and they exert no force on the decisions that shape their lives.
Jasanoff has observed this pattern across technologies for forty years. In biotechnology, the experts debated gene sequences while the citizens worried about what it meant to engineer life. In nuclear energy, the experts debated reactor design while the communities near the reactors worried about what would happen when something went wrong. In each case, the expert knowledge was real and the citizen knowledge was real, and the governance frameworks that admitted only the former produced decisions that the latter experienced as illegitimate, imposed, undemocratic.
The AI moment reproduces this pattern at a scale and speed that makes all previous instances look like rehearsals. The technology is more pervasive — it touches every domain of human activity, not just energy or agriculture or medicine. The pace is faster — monthly capability advances that render quarterly governance reviews obsolete before they are completed. The opacity is deeper — the internal mechanisms are illegible not just to citizens but to the engineers themselves, a feature Jasanoff finds both novel and profoundly concerning.
And the stakes, measured not in economic output but in the currency of human meaning — what it means to be skilled, to be creative, to be needed, to contribute something that cannot be produced by a machine — are as high as any she has studied.
The question Jasanoff brings to this moment is not the technical question of how to make AI safe. It is the democratic question of who gets to decide what safety means. Whether the governance frameworks being constructed can incorporate the knowledge of the people whose lives the technology reshapes. Whether the expert and the citizen can be brought into the same institutional space, with the same epistemic standing, to make decisions together about a technology whose consequences neither can fully predict.
Her answer, which the following chapters will develop, is that this incorporation is both necessary and extraordinarily difficult. Necessary because governance that excludes affected communities is illegitimate regardless of its technical competence. Difficult because the knowledge of experts and citizens is produced in different registers, validated by different standards, and expressed in different languages — and no existing AI governance institution has been designed to hold both.
The twelve-year-old who asks "What am I for?" possesses knowledge that no AI safety benchmark can capture. Jasanoff's project is the construction of institutions that can hear her.
---
On a February morning in Trivandrum, India, twenty engineers sat across from Edo Segal while he told them something that must have sounded extraordinary: by the end of the week, each of them would be able to do more than all of them together. The tool was Claude Code. The cost was a hundred dollars per person, per month. By Friday, the transformation was measurable.
But what exactly had been transformed?
The standard account, the one that technology companies tell and that *The Orange Pill* narrates with genuine vividness, treats the transformation as primarily technical. A more powerful tool entered the workflow. Productivity increased. Capabilities expanded. Individual engineers reached across disciplinary boundaries they had never crossed before. The backend engineer built user interfaces. The designer wrote features end to end. The imagination-to-artifact ratio collapsed.
Jasanoff's framework reveals something the standard account obscures: the technical transformation and the social transformation were not sequential. They were simultaneous. They were, in the precise sense she has spent her career developing, co-produced.
Co-production, as Jasanoff formulates it, is the observation that scientific knowledge and social order do not exist independently of each other. They are made together, each constituting and being constituted by the other in real time. When a new scientific claim is established — the gene, the atom, the algorithm — it does not merely add to the stock of human knowledge. It reorganizes the social world. It creates new categories of people (the genetically at-risk, the algorithmically profiled), new institutions (the bioethics board, the AI safety team), new hierarchies of authority (the geneticist, the data scientist), and new configurations of power (who controls the gene sequence, who controls the training data). Simultaneously, the social order shapes the science. What gets studied, what gets funded, what questions are deemed worth asking — these are not determined by the logic of discovery alone. They are shaped by institutional priorities, market incentives, cultural assumptions, and political power.
The Trivandrum training was a case of co-production at every level. When Claude Code entered the engineering team's workflow, it did not merely change what the engineers could produce. It changed what an engineer was. The backend specialist whose identity had been organized around a specific, hard-won expertise found that the boundaries defining her role had dissolved. The senior engineer who had spent decades building architectural intuition through the friction of manual debugging found that the friction was gone and the intuition was being replaced — or augmented, or made irrelevant, depending on who was telling the story — by a tool that operated faster than his judgment could track.
These were not secondary consequences of a primarily technical change. They were the change itself. The technology and the new social order — the new hierarchy of skills, the new definition of expertise, the new distribution of authority between human judgment and machine output — were being produced in the same room, during the same week, by the same process. The code and the org chart were being written simultaneously, even if only one of them was visible on the screen.
Jasanoff's concept of co-production originated in her study of how regulatory science operates. In *States of Knowledge*, she demonstrated that the knowledge produced by regulatory agencies — about the safety of pharmaceuticals, the risks of environmental pollutants, the efficacy of medical devices — is not simply "applied science." It is a hybrid form of knowledge, shaped simultaneously by scientific methods and by the institutional, legal, and political contexts in which the science is conducted. The drug is not safe or unsafe in the abstract. It is safe or unsafe within a specific regulatory framework, which defines what counts as evidence of safety, who produces that evidence, and what standard of proof is required. Change the framework and you change the knowledge. Not arbitrarily — the molecules do not rearrange themselves — but consequentially. What a society knows about the safety of a drug is inseparable from the institutions through which that knowledge is produced.
Applied to AI, co-production reveals what the productivity narrative structurally cannot see. When Segal reports a twenty-fold productivity multiplier, the number is real. It measures something. But it measures only the technical dimension of a process that is simultaneously technical and social. The productivity gain is inseparable from the social reorganization that produced it — the dissolution of specialist roles, the redistribution of authority from senior to junior engineers (or from humans to machines), the redefinition of what counts as expertise, the implicit devaluation of the embodied knowledge that the senior engineer had accumulated through decades of friction-rich practice.
The co-production framework does not claim that the productivity gain is illusory or that the social reorganization is necessarily harmful. It claims that they cannot be analyzed separately. A governance framework that evaluates the productivity gain without examining the social reorganization has studied half the phenomenon and drawn conclusions about the whole.
This analytical error is endemic in AI discourse. The adoption curves, the revenue metrics, the capability benchmarks — these are the measures that dominate the conversation. They are real, and they are insufficient, because they capture only the technical dimension of a co-produced phenomenon. The social dimension — what happened to professional identity, to the distribution of authority, to the felt experience of work — is treated as a downstream effect rather than as a constitutive feature of the technological change itself.
Jasanoff would argue that the distinction between "the technology" and "its social effects" is itself the problem. There is no technology prior to its social embedding. Claude Code does not exist in a vacuum. It exists in a specific organizational context, with specific power relationships, specific cultural assumptions about what constitutes good work, specific market pressures that determine how productivity gains are distributed. These contexts are not external to the technology. They are part of it.
The training data that shapes a large language model's outputs embeds the patterns of the society that produced the data — its linguistic conventions, its cultural assumptions, its systematic biases, its distribution of attention across topics and perspectives. The design decisions that determine how the model interacts with users — its tendency toward agreeability, its confident presentation of uncertain claims, its aesthetic preference for smoothness — reflect the values and priorities of the institutions that built it. The deployment choices that determine who has access, at what cost, under what terms of service, in which languages, with what support structures — these are social decisions with technical consequences and technical decisions with social consequences, and the line between them does not exist.
When Segal describes Claude's tendency to produce "confident wrongness dressed in good prose" — the Deleuze passage that sounded like insight but broke under examination — he is describing a co-produced phenomenon. The confidence is a technical feature of the model's architecture. The wrongness is a consequence of training data limitations and inferential mechanisms. But the seductiveness, the fact that the author almost kept the passage because it sounded right, is a social phenomenon. It depends on a cultural context in which polished prose is treated as evidence of sound thinking, in which the aesthetic of smoothness, the very quality that Byung-Chul Han diagnoses as the signature pathology of the contemporary moment, functions as a proxy for intellectual rigor. The model produces smooth output because the culture that produced the model values smoothness. The smoothness then reinforces the cultural preference, training users to expect and accept polished surfaces as indicators of substantive depth. The technical and the social spiral together, each producing the other.
This spiraling co-production is visible at every scale. At the individual level, the engineer who uses Claude for six months develops new capabilities and loses old ones simultaneously. The new capabilities — broader reach, faster execution, the ability to work across domains — are real. The losses — the atrophy of debugging intuition, the erosion of the embodied knowledge that only friction builds — are equally real. Both are consequences of the same process, and that process is neither purely technical nor purely social. It is co-produced.
At the organizational level, the company that deploys AI tools reorganizes itself around the tool's affordances. Specialist silos dissolve. New roles emerge — the "vector pod," the prompt engineer, the AI practice coordinator. Old roles are redefined or eliminated. The reorganization is not imposed by the technology. It is co-produced by the interaction between the technology's capabilities and the organization's existing structure, culture, and power dynamics. A different organization, with different culture and different power dynamics, would reorganize differently around the same tool.
At the societal level, the co-production of AI and social order is reshaping the most fundamental categories of economic life: what counts as skill, what counts as expertise, what counts as a profession, what counts as a contribution. When the imagination-to-artifact ratio approaches zero, the entire social architecture that was built around the scarcity of implementation capability — the university programs that train implementors, the career ladders that reward implementation skill, the professional identities that are organized around specific technical competences — is destabilized. Not by the technology alone, but by the technology and the social response to it, co-produced in real time, faster than any governance framework can track.
The governance implications of the co-production framework are profound and uncomfortable. If the technical and the social are inseparable, then governing AI cannot mean governing the technology alone. It means governing the social order that the technology co-produces. And governing a social order requires democratic participation — the involvement of the people whose social world is being remade, not as beneficiaries or victims of a process controlled by others, but as participants in the decisions that shape it.
This is not what is happening. The current AI governance landscape is organized around the fiction that the technology and its social consequences can be governed separately — that technical standards can ensure safety, that economic policy can manage displacement, that educational reform can address skill gaps, each in its own institutional silo. Jasanoff's co-production framework reveals this organizational fiction for what it is: a failure to see that the safety, the displacement, and the skill gaps are not separate problems with separate solutions. They are different facets of a single co-produced phenomenon, and they can only be governed together, by institutions that can hold the technical and the social in the same analytical frame.
No such institution currently exists for AI at the scale the moment demands.
---
In the winter of 2025, two kinds of evidence circulated about artificial intelligence, and they passed through different channels, reached different audiences, and carried different weight in every room where decisions were being made.
The first kind was quantitative. Adoption curves showing ChatGPT reaching one hundred million users in two months. GitHub data indicating that four percent of all commits were AI-generated, rising monthly. Revenue figures: Claude Code crossing two and a half billion dollars in run-rate revenue by February 2026. Productivity metrics from company after company — the twenty-fold multiplier in Trivandrum, the solo-built revenue-generating products, the thirty-day development sprints that would previously have taken quarters. This evidence was precise, replicable, and legible to the institutions that make decisions: boardrooms, investment committees, policy offices, regulatory bodies. It traveled on slides. It appeared in earnings calls. It shaped budgets.
The second kind was qualitative. A Substack post titled "Help! My Husband Is Addicted to Claude Code." A senior software architect at a San Francisco conference who said he felt like a master calligrapher watching the printing press arrive. An engineer in Trivandrum who spent two days oscillating between excitement and terror before finding his footing. A twelve-year-old who asked her mother, "What am I for?" This evidence was vivid, specific, and true in a way that no metric could capture. It traveled on social media, in private conversations, in the quiet hours of early morning when parents lay awake. It shaped nothing — no budgets, no policies, no regulatory frameworks. It was treated, in every institutional context, as anecdote.
Jasanoff has spent her career studying this asymmetry. In every technology debate she has examined — biotechnology, nuclear power, environmental regulation, pharmaceutical safety — the same pattern recurs: the evidentiary standards of governance institutions are calibrated to admit one kind of knowledge and exclude another. The kind they admit is quantitative, technical, expert-produced. The kind they exclude is qualitative, experiential, community-generated. And the exclusion is not random. It is systematic, and it produces systematic consequences.
The consequences are not that governance gets the numbers wrong. The numbers are usually right. Jasanoff's point is more subtle and more damaging: governance that admits only quantitative evidence produces decisions that are technically competent and humanly inadequate. Decisions that optimize the measurable while ignoring the meaningful. Decisions that look rational on a spreadsheet and feel irrational to the people who live with them.
The UC Berkeley study that *The Orange Pill* examines in detail is a perfect illustration. Xingqi Maggie Ye and Aruna Ranganathan embedded themselves in a two-hundred-person technology company for eight months and produced findings that met every standard of quantitative rigor: AI intensified work, colonized pauses, fractured attention, correlated with measurable burnout indicators. These findings entered the governance conversation. They were cited in policy discussions, referenced in corporate AI practice frameworks, presented at conferences.
But Segal's analysis of the study revealed something Jasanoff's framework would predict: the study measured behavior without capturing meaning. It documented that workers worked more but could not determine whether the additional work was trivial or genuinely new. It measured hours but not satisfaction. It recorded boundary erosion but could not distinguish between the erosion of boundaries that protected rest and the dissolution of boundaries that had confined ambition. A person can be exhausted by work that gives their life meaning. The Berkeley data could not tell the difference, because the difference is qualitative, and the study's evidentiary framework was quantitative.
This is not a criticism of the Berkeley researchers, who produced the most rigorous empirical study of AI's workplace effects available. It is a diagnosis of the evidentiary architecture within which all such studies operate. The architecture admits measurements. It does not admit meanings. And the governance frameworks built on this architecture inherit its blind spots.
Jasanoff's concept of "regulatory science" — the hybrid form of knowledge produced at the intersection of scientific inquiry and institutional decision-making — illuminates why the evidentiary asymmetry persists. Regulatory science must be actionable. It must produce findings that can be translated into rules, standards, and enforceable requirements. Quantitative evidence is actionable in ways that qualitative evidence is not. A finding that "AI increases task volume by twenty-seven percent" can be translated into a workload standard. A finding that "AI erodes the felt sense of professional identity in ways that accumulate below the threshold of conscious awareness" cannot. It is true, but it is not actionable within the existing institutional architecture. And because it is not actionable, it is not admitted.
The exclusion compounds over time. When qualitative evidence is systematically excluded from governance, the governance frameworks develop around the quantitative evidence alone. The frameworks become optimized for what they can measure and blind to what they cannot. The institutions that produce governance knowledge — the policy research centers, the regulatory agencies, the commissioned studies — learn to produce the kind of evidence that the governance architecture can process. They study adoption rates, not identity erosion. They measure productivity, not meaning. They track economic displacement, not the quiet grief of a person who has lost the thing that made their work feel like theirs.
Jasanoff observed in her Harvard podcast appearance that the asymmetry extends to how threats and promises are discussed. The promises of AI are described with exquisite quantitative specificity: productivity gains, cost reductions, capability expansions, all measured and projected with the confidence of an earnings forecast. The threats are described in the language of science fiction: extinction, superintelligence, existential risk — vast, cinematic, and almost entirely devoid of the causal specificity that would make them governable. "There is this coupling of the idea of extinction together with AI," she noted, "but very little specificity about the pathways by which the extinction is going to happen."
The asymmetry serves a function. Specific promises and vague threats produce a governance environment in which the promises can be pursued with precision while the threats can be acknowledged in principle and deferred in practice. The specificity of the promise demands action — investment, deployment, adoption. The vagueness of the threat permits delay — further study, continued monitoring, eventual regulation. The governance conversation is structured, before it begins, by the evidentiary standards that determine what kind of knowledge is taken seriously.
Jasanoff's framework does not argue that quantitative evidence should be excluded or that qualitative evidence should be privileged. It argues that governance adequate to the AI moment requires institutions that can hold both — that can take a productivity metric and a Substack confession seriously, simultaneously, without reducing either to the other's terms. This is a harder institutional design problem than either purely technocratic or purely democratic governance would present. It requires what Jasanoff calls "technologies of humility" — institutional practices designed to incorporate uncertainty, acknowledge the limits of expert knowledge, and create legitimate channels for the knowledge of affected communities to enter the governance conversation.
The practical implications are specific. When a government commissions a study on AI's impact on employment, the study should be designed to capture not just job losses and job creation but the transformation of professional identity — how workers understand their own expertise, whether they feel their skills are valued, what it means to them that a machine can now do what they spent years learning to do. When a company designs an AI practice framework, the framework should include mechanisms for surfacing qualitative evidence from the people who use the tools daily — not satisfaction surveys, which capture opinions, but narrative accounts of experience, which capture meanings. When a school district evaluates AI in education, the evaluation should include not just test scores but the voices of teachers describing what happens to their students' relationship with learning when AI does the homework.
These are not soft supplements to rigorous governance. They are the missing components of governance that has been rigorous in one dimension and catastrophically incomplete in another. The precision of the productivity metric is real. The truth of the parent's midnight anxiety is equally real. A governance framework that can hold both — that can make decisions informed by what the technology does and by what the technology does to people — is the framework the AI moment demands and the framework that does not yet exist.
The silent middle that Segal identifies — the people who feel both the exhilaration and the loss but cannot find a place in a discourse that rewards clean narratives — is, in Jasanoff's terms, an epistemic community whose evidence has no port of entry. The silent middle possesses the most accurate knowledge of the AI transition: what it actually feels like, day by day, to live inside a transformation that no one fully understands. That knowledge is the most important input to governance. It is also the input that every existing governance institution is designed to exclude.
The evidentiary architecture must be rebuilt. Not because the quantitative evidence is wrong, but because it is half.
---
In 2024, the European Union finalized the AI Act — the most comprehensive regulatory framework for artificial intelligence yet produced by any democratic government. It classified AI systems by risk level, imposed transparency requirements, prohibited certain applications outright, and established enforcement mechanisms with real penalties. It was, by any measure, a serious attempt to govern a technology whose consequences most legislators could not fully predict and whose mechanisms most regulators could not fully understand.
In the same year, the United States produced a series of executive orders and voluntary commitments from leading AI companies. The commitments were real but non-binding. The executive orders established principles but created few enforceable standards. The approach was calibrated to avoid constraining innovation while signaling awareness that consequences deserved attention.
China, meanwhile, had already implemented regulations on algorithmic recommendation, deepfakes, and generative AI content — specific, targeted rules focused on information control and social stability, deployed with a speed that reflected both centralized authority and a strategic vision of AI as an instrument of state capacity.
Singapore produced a governance framework notable for its flexibility: a principles-based approach designed to adapt as the technology evolved, light enough to attract investment and substantive enough to demonstrate that governance was not an afterthought.
These four approaches are not merely different policies applied to the same problem. They are, in the analytical vocabulary Jasanoff developed in *Designs on Nature*, expressions of different civic epistemologies — different culturally embedded ways of knowing, of producing and validating claims about the world, of deciding what counts as sufficient grounds for action. The AI governance divergence across nations is not primarily a story about different political preferences. It is a story about different knowledge cultures, and the failure to recognize this has produced a global governance conversation in which participants talk past each other with reliable precision.
Jasanoff introduced the concept of civic epistemology through her comparative study of how the United States, the United Kingdom, Germany, and the European Commission governed biotechnology. The four jurisdictions faced the same scientific questions — Is this genetically modified organism safe? What risks does it pose? How should it be regulated? — and arrived at dramatically different answers. Not because they had different science. Because they had different ways of determining what the science meant and how it should translate into governance.
The American civic epistemology, as Jasanoff characterized it, is adversarial and empiricist. Knowledge is produced through contestation — competing experts, cross-examination, the marketplace of ideas. The standard of proof is high for restricting action and low for permitting it. The default is innovation, and the burden falls on those who would constrain it to demonstrate, with specificity, why constraint is necessary. The German civic epistemology is consensual and precautionary. Knowledge is produced through deliberation among recognized authorities. The standard of proof is high for permitting action that carries uncertain consequences. The default is caution, and the burden falls on those who would deploy a new technology to demonstrate that it is safe.
These are not merely different regulatory postures. They are different ways of knowing — different assumptions about how certainty is achieved, how expertise should be organized, what role the public should play in decisions about science and technology, and what it means for a governance decision to be legitimate. They produce different institutional architectures, different evidentiary standards, and different relationships between experts and citizens.
The AI governance debate makes these differences vivid and consequential.
The Silicon Valley civic epistemology — which is the dominant epistemology not just of American technology companies but of the global AI development ecosystem — privileges speed, iteration, and empirical demonstration above all else. If you can build it and it works, that is evidence. If it scales, that is more evidence. If users adopt it rapidly, that is the strongest evidence of all, because adoption is the market's verdict and the market is the ultimate arbiter of value. Within this epistemology, regulation is epistemically suspect — it represents the judgment of people who do not build, imposed on people who do, based on evidence that is speculative (what might happen) rather than empirical (what has happened). The builder's conviction, visible throughout *The Orange Pill*, that the way to understand AI is to use it, to build with it, to feel it in your hands — this conviction is an expression of a civic epistemology, not merely a personal preference. It reflects a knowledge culture in which doing is knowing and showing is proving.
Segal operates explicitly within this epistemology. His account of the transformation is grounded in demonstration: the twenty-fold multiplier, the thirty-day sprint to CES, the engineer who built a complete user-facing feature in two days. The evidence is in the artifact. The proof is in the product. The legitimacy of the argument rests on the fact that it was lived, not theorized. This epistemology has extraordinary strengths. It produces rapid innovation, it rewards creative risk-taking, and it generates the kind of visceral, embodied knowledge that comes only from building something under pressure and watching it succeed or fail. But Segal's own narrative reveals its limitations with equal clarity. His honest account of the costs — the inability to stop, the erosion of the boundary between work and everything else, the productive addiction that his wife documented, the compulsive 3 a.m. sessions — suggests that the builder's civic epistemology, powerful as it is, lacks the internal resources to govern what it builds. It can tell you what works. It cannot tell you what it costs, because the costs manifest in registers — experiential, relational, existential — that the epistemology does not recognize as evidence.
The European civic epistemology, expressed in the AI Act, operates from different premises. Knowledge about a technology's consequences is not established solely through deployment and observation. It is established through assessment — structured, precautionary, conducted before deployment rather than after. The burden of proof runs in the opposite direction: the builder must demonstrate that the technology is acceptable before deploying it, rather than deploying it and waiting for evidence of harm to accumulate. This epistemology has its own strengths and its own blind spots. It excels at anticipating harms that can be specified in advance, but it struggles with harms that emerge unpredictably from the interaction between a technology and the social order it co-produces. The EU AI Act classifies AI systems by risk level, but the classification depends on predicting which applications will produce which consequences — and the most important consequences of AI, as Jasanoff has emphasized, are uncertain in the technical sense. They cannot be predicted with the confidence that a risk classification system requires.
The Chinese civic epistemology treats AI governance as an expression of state capacity. The relevant knowledge is strategic — how does AI serve national objectives? — and the governance framework is designed to maximize the state's ability to direct AI development toward those objectives while managing social consequences through centralized control. This epistemology produces fast, targeted regulation, but it admits only one kind of consequence — the kind that the state recognizes as relevant — and excludes the experiential knowledge of citizens by design rather than by oversight.
The Singaporean civic epistemology privileges adaptability. Governance frameworks are designed to evolve with the technology, to avoid locking in assumptions that will be obsolete within a year. This epistemology has real advantages in a fast-moving domain, but its flexibility comes at a cost: principles-based governance provides less certainty than rules-based governance, and the people living with the consequences of AI may find that adaptive governance feels less like responsiveness and more like an inability to commit.
Each of these civic epistemologies captures something real about the AI governance challenge and misses something equally real. The builder's epistemology understands the technology from the inside but cannot see its social consequences from the outside. The precautionary epistemology anticipates harms but struggles with uncertainty. The strategic epistemology produces coherent policy but excludes democratic participation. The adaptive epistemology responds to change but may not protect the vulnerable quickly enough.
Jasanoff's comparative method does not rank these epistemologies. It maps them. It shows what each reveals and what each conceals. And it asks a question that none of them can answer from within their own walls: What would a governance framework look like that could hold multiple civic epistemologies simultaneously? That could incorporate the builder's embodied knowledge, the precautionary impulse's demand for prior assessment, the strategic thinker's concern for coherence, and the adaptive framework's responsiveness — without collapsing into any single one of them?
This is not an academic question. It is the central governance challenge of the AI moment. AI is global — built in one jurisdiction, deployed in another, affecting communities everywhere. A governance framework adequate to the AI moment must be able to navigate between civic epistemologies that produce different knowledge, demand different evidence, and define legitimacy in different terms.
Jasanoff's own proposal is not a meta-epistemology that resolves these differences. It is an institutional posture — technologies of humility — that can operate within and across different civic epistemologies. Humility, in her formulation, is the recognition that no single way of knowing is adequate to the consequences of a technology this powerful. The builder's epistemology is valuable. It is not sufficient. The precautionary epistemology is valuable. It is not sufficient. What is sufficient, if anything is, is an institutional architecture that can hold multiple ways of knowing in productive tension — that can take the builder's demonstration and the citizen's experience and the regulator's assessment and the comparativist's insight, and forge from them governance decisions that are both technically informed and democratically legitimate.
No such architecture exists yet. Building it may be the most important governance challenge of the century, and the challenge is as much epistemic as institutional. Before societies can build new governance structures, they must first recognize that the current structures admit only certain kinds of knowledge, and that the knowledge they exclude is precisely the knowledge they most need.
The conventional telling of the governance gap goes like this: technology moves fast, institutions move slowly, and the distance between them is where the damage occurs. This narrative is so familiar that it has acquired the force of natural law — a thing that simply is, lamentable perhaps but as inevitable as gravity. AI companies deploy capabilities monthly. Legislatures deliberate annually. The gap widens. The people inside the gap absorb the consequences.
Jasanoff has spent decades arguing that this narrative is wrong — not in its observation, which is accurate, but in its explanation, which is misleading in ways that serve specific interests. The law-lag narrative, as she has called it, treats the speed differential between technology and governance as a fact of nature rather than a product of choices. It assumes that technology develops autonomously, according to its own internal logic, and that governance can only react — scrambling to catch up, always arriving after the damage is done, always one crisis behind the frontier. This framing makes regulation feel futile and deference to technologists feel rational. If the river cannot be governed in real time, then perhaps the builders should be trusted to govern themselves.
The framing is, in Jasanoff's analysis, a mechanism for delegating power. When a society accepts that governance inevitably lags technology, it has already conceded that the period between deployment and regulation — the gap itself — will be governed by the people who built the technology, according to whatever principles they happen to hold, with whatever accountability structures they choose to impose on themselves. The law-lag narrative does not describe a governance failure. It produces one, by making the failure appear natural and therefore ungovernable.
The history does not support the narrative's premises. Law and social institutions do not simply react to technology. They co-produce it. The patent system shaped the direction of industrial innovation for centuries, not by catching up to invention but by creating the institutional framework within which invention occurred. Environmental regulation did not merely respond to pollution; it reshaped industrial chemistry by making certain processes legally impossible and others economically necessary. Securities regulation did not lag financial innovation; it constituted the market within which financial innovation took place, defining what instruments were legal, what disclosures were required, what behaviors were fraudulent. In each case, law was not a lagging variable. It was a constitutive force — present at the creation, shaping the technology's trajectory from the inside.
The AI moment appears to confirm the law-lag narrative only if the analysis begins in 2022, with the release of ChatGPT, and treats everything that followed as governance playing catch-up to a technological fait accompli. But the story begins much earlier. The computational infrastructure on which large language models run was built inside a regulatory environment — telecommunications law, data protection regulation, intellectual property frameworks, export controls — that shaped every aspect of its development. The training data that gives language models their capabilities was produced inside a legal and institutional context that determined what data was available, what could be collected, what privacy expectations attached to it. The venture capital structures that funded AI development were themselves products of regulatory choices about tax policy, securities law, and the legal treatment of intellectual property. The technology did not develop outside the governance framework and then arrive, fully formed, for governance to address. It was produced within a governance framework that made specific choices — about data, about intellectual property, about market structure — whose consequences are now manifesting at scale.
Jasanoff's point is not that existing governance was adequate. It was not. Her point is that the inadequacy was not a lag. It was a series of choices — choices about what to regulate and what to leave unregulated, what evidence to require and what to accept on faith, whose voices to include in governance conversations and whose to exclude. The governance gap is not a speed problem. It is a design problem. The institutions were designed for a world in which the co-production of technology and social order moved slowly enough that sequential governance — first the technology, then the assessment, then the regulation — could work tolerably well. The AI moment broke that sequential model, not because the technology is faster (though it is), but because the co-production is denser. The technical and the social are being produced simultaneously, at every level, in every domain, and the governance architecture that processes them sequentially cannot keep pace with a process that is not sequential.
The SaaS death cross that *The Orange Pill* documents — the trillion-dollar market-value collapse in early 2026 — illustrates the design problem with painful clarity. The market repriced an entire industry segment in weeks. The repricing was not caused by regulation or the absence of regulation. It was caused by the co-production of a new technological capability (AI-generated software) and a new economic reality (the commoditization of code). No governance framework was designed to address the consequences of this repricing — the workers inside the repriced companies, the communities dependent on those companies' economic activity, the downstream effects on education and career planning and professional identity. These consequences were real, immediate, and ungoverned. Not because governance was slow, but because no governance institution was designed to address the specific kind of co-produced disruption that the AI moment generates.
Segal acknowledged this gap and proposed organizational responses: AI Practice frameworks, attentional ecology, the structured pauses that the Berkeley researchers recommended. These proposals are real and valuable. They are also, in Jasanoff's framework, radically insufficient — not because they are wrong but because they operate at the wrong scale. An organization can build dams within its own boundaries. It cannot govern the river. The decisions about how AI reshapes labor markets, educational institutions, creative industries, and democratic culture are not organizational decisions. They are societal decisions, and societal decisions require democratic institutions with the authority, the knowledge, and the legitimacy to make them.
The governance gap, reframed through Jasanoff's analysis, is not a gap between speed of technology and speed of regulation. It is a gap between the scale of the decisions that must be made and the scale of the institutions available to make them. The EU AI Act operates at the scale of a continental market. The American executive orders operate at the scale of federal authority. Corporate AI governance operates at the scale of individual firms. None of these scales matches the scale of the phenomenon, which is global, instantaneous, and operating simultaneously in every domain of human activity. The governance gap is a scalar mismatch — and scalar mismatches are not closed by moving faster. They are closed by building institutions at the right scale.
What would such institutions look like? Jasanoff's work suggests several features. They would be epistemically plural — designed to hold technical knowledge, regulatory knowledge, and experiential knowledge simultaneously, without reducing any of them to the others' terms. They would be constitutionally self-aware — explicit about the values they embody, the tradeoffs they accept, and the imaginaries they serve. They would be participatory — not in the thin sense of public comment periods and stakeholder consultations, but in the thick sense of including affected communities in the governance process with genuine authority to shape decisions. They would be adaptive — capable of revising their own assumptions as the technology and its consequences evolve. And they would be humble — designed around the recognition that the consequences of AI cannot be fully predicted, that every governance choice is provisional, and that the posture of certainty that characterizes both the technology industry and most regulatory frameworks is itself a form of governance failure.
None of these features is present in the current governance architecture at the scale required. The EU AI Act is impressive but rigid — a classification system designed for a technology that refuses to stay classified. The American approach is flexible but toothless — principles without enforcement, commitments without accountability. The corporate governance frameworks are genuine but captured — designed by the same institutions whose products they govern, accountable to the same shareholders whose returns they protect. Each governance instrument addresses one dimension of the problem while leaving the others unaddressed, and the dimensions interact in ways that no single instrument can capture.
Jasanoff observed in her 2023 keynote at the University of Chicago that recent developments in AI "have unsettled expectations about the firmness of the line between human and nonhuman, emotion and intellect, and person and machine." This unsettlement is not a problem that governance can solve. It is a condition that governance must inhabit. The governance gap will not close because institutions speed up. It will narrow only when institutions are redesigned to operate inside uncertainty rather than against it — to make decisions that are explicitly provisional, explicitly humble, and explicitly open to revision by the communities whose lives those decisions shape.
The law does not lag technology. The law shapes technology, and the technology reshapes the law, and both are being remade simultaneously. The question is not how to make governance faster. It is how to make governance adequate to a process of co-production that is faster, denser, and more consequential than anything the current institutional architecture was designed to address.
---
The word "humility" has been domesticated. In common usage it denotes modesty, self-effacement, the polite reluctance to claim more than one's due. Jasanoff uses it differently. In her formulation, humility is not a personality trait. It is an institutional capacity — a set of practices designed to produce governance that acknowledges what it does not know, incorporates the knowledge of people it does not employ, and creates mechanisms for detecting the consequences it cannot predict.
Technologies of humility, as Jasanoff introduced the concept in a 2003 essay that has become one of the most cited works in science and technology studies, are the institutional counterpart to technologies of hubris — the quantitative risk assessments, cost-benefit analyses, and expert-dominated decision processes that characterize most technology governance. Technologies of hubris are not useless. They produce genuine knowledge. But they produce it within a framework that systematically overestimates what can be known and systematically underestimates what cannot. A risk assessment that assigns probabilities to outcomes is a technology of hubris: it transforms uncertainty, which is unmeasurable, into risk, which is measurable, and in doing so it creates the illusion that the consequences of a technology can be calculated, ranked, and managed through technical means.
The AI moment has produced an extraordinary proliferation of technologies of hubris. Safety benchmarks that quantify a model's tendency to produce harmful outputs. Alignment protocols that measure the distance between a model's behavior and its designers' intentions. Responsible AI frameworks that enumerate principles and track compliance. Each of these is real, each is valuable, and each operates within the same epistemic limitation: it measures what it can measure and treats the unmeasurable as though it does not exist.
What cannot be measured is often what matters most. The slow erosion of professional identity as expertise is commoditized. The gradual atrophy of cognitive capacities that are exercised only through friction. The quiet displacement of human relationships by machine interactions that are more convenient, more available, more responsive, and less real. The cumulative effect of a million small decisions — to prompt rather than think, to generate rather than create, to accept the smooth output rather than wrestle with the rough idea — on the texture of a life, a career, a culture. These consequences are real. They are consequential. They are, in the precise sense Jasanoff uses the term, uncertain — not in the sense that they might not happen, but in the sense that they cannot be assigned probabilities, cannot be captured in a risk matrix, cannot be anticipated with the specificity that a governance framework requires to act.
Technologies of humility are designed to govern under these conditions. Jasanoff's framework identifies four components, each corresponding to a question that technologies of hubris systematically fail to ask.
The first is framing. How a problem is defined determines what solutions are imaginable. The dominant framing of AI governance treats it as a safety problem: How do we prevent AI from producing harmful outputs? This framing is not wrong — harmful outputs are real and preventing them matters. But the framing excludes from consideration the harms that do not arise from the technology's outputs but from its integration into human life. The harm is not that Claude produces incorrect information, though it does. The harm is that the relationship between a human being and the work that gives their life meaning is being restructured by a tool whose designers did not intend that restructuring and whose governance frameworks do not address it. A technology of humility would reframe the governance question: not "Is AI safe?" but "What kind of society are we building with AI, and is that the society we want?"
The second is vulnerability. Who is most exposed to harm, and how do they differ from the populations that the technology's designers had in mind? AI tools are designed, tested, and refined by people with specific demographic, economic, and educational profiles — predominantly English-speaking, predominantly educated in Western institutions, predominantly employed in knowledge-economy occupations. The governance frameworks designed for these tools inherit the same profile. The vulnerability analysis asks what happens when the tools reach populations that do not share that profile. The developer in Lagos, who appears in *The Orange Pill* as a beneficiary of democratization, is also, from a vulnerability perspective, the person most exposed to disruption when access changes — when pricing models shift, when bandwidth fails, when terms of service are rewritten, when the platform that democratized her capability is acquired, pivoted, or shut down. The democratization narrative captures the expansion. The vulnerability analysis captures the precarity.
The third is distribution. Who benefits from the technology and who bears the cost? The productivity gains from AI are real and measurable. The distribution of those gains is less visible and rarely addressed in governance frameworks. Segal's honest account of the boardroom arithmetic — the twenty-fold productivity multiplier that could be converted directly into headcount reduction, the quarterly pressure to capture the gain as margin rather than reinvest it in human capability — reveals the distributional question with unusual clarity. The choice to keep and grow the team was a distributional decision, made by an individual leader with specific values, against the structural incentives of the market. A governance framework that relies on individual leaders making the right distributional choice in the face of market pressure is not a governance framework. It is a hope.
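Stated as bare arithmetic, the distributional fork looks like this. The sketch below is illustrative only; it uses figures that appear elsewhere in this account (twenty engineers, a twenty-fold multiplier), and it is not a calculation Segal himself lays out in this form.

```latex
% Illustrative sketch only: the twenty engineers and the twenty-fold multiplier
% are the figures used elsewhere in this account; the two branches are the
% distributional fork described above, not a forecast.
\[
  20~\text{engineers} \times 20 \;=\; 400~\text{engineer-equivalents of output}
\]
\[
  \text{capture as margin: the same output from } 1/20 \text{ of the headcount}
  \quad \text{vs.} \quad
  \text{reinvest: the same 20 people producing } 20\times \text{ the output}
\]
```

Both branches contain the same four hundred; the governance question is who decides which branch is taken, and on whose behalf.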
The fourth is learning. How do institutions detect their own errors and revise course? The AI governance landscape is characterized by an almost total absence of learning mechanisms. The EU AI Act was drafted before the generative AI explosion and revised under enormous time pressure. The American executive orders reflect a specific political moment and may not survive a change in administration. Corporate AI governance frameworks are designed by the companies they govern and audited, when they are audited at all, by firms those companies hire. None of these structures is designed for the kind of continuous, humble, self-critical learning that the pace of AI development demands. None of them has a mechanism for incorporating the experiential knowledge of affected communities into the revision process. None of them treats its own assumptions as provisional in the way that the uncertainty of the consequences requires.
Jasanoff's framework is demanding. It requires institutions that can ask hard questions about their own assumptions, that can hold multiple kinds of knowledge in productive tension, that can include the voices of people who do not speak the language of technical governance, and that can revise their own decisions in light of evidence that accumulates slowly, below the threshold of crisis, in the daily experience of people whose lives the technology reshapes.
This demand is "profoundly at odds with the culture of the technology industry," to use language consistent with Jasanoff's own assessment. The technology industry's culture privileges confidence over humility, speed over deliberation, demonstration over assessment, and the knowledge of builders over the knowledge of the communities in which the products are deployed. The industry's instinct, when confronted with uncertainty, is to build through it — to deploy, observe, iterate, and treat the consequences as data for the next version. This instinct is not irrational. It has produced extraordinary innovation. But it is not governance. It is engineering applied to a governance problem, and the category error is consequential.
Governance under uncertainty requires a different instinct: the instinct to pause, not because the technology is dangerous but because the consequences are unknown. To include voices that slow the process but improve the decisions. To treat the absence of evidence of harm not as evidence of absence but as evidence that the detection mechanisms are inadequate. To build institutions that are epistemically humble enough to say "we do not know" and institutionally robust enough to act wisely in spite of not knowing.
This is what Jasanoff means by humility as democratic practice. Not modesty. Not timidity. Not the paralysis that technologists fear when they hear the word "precaution." Democratic practice — the hard, slow, contentious, imperfect work of building institutions that can govern what they cannot fully understand, in ways that the people who live with the consequences recognize as legitimate.
The AI moment is the most consequential test of that practice in human history. Whether the institutions being built now can meet the test depends on whether the people building them understand that the test is not technical. It is democratic. And democratic governance of a technology this powerful, this fast, this opaque requires a kind of institutional humility that the current governance landscape has barely begun to develop.
---
Every powerful technology arrives wrapped in a story about the future. Not a prediction — predictions are falsifiable, bounded, subject to revision. A story — a collectively held vision of the world the technology will create, carrying within it assumptions about human nature, social organization, and the purpose of capability that are rarely made explicit and almost never subjected to democratic deliberation. Jasanoff and Sang-Hyun Kim called these visions sociotechnical imaginaries, and the concept has become one of the most widely applied frameworks in the scholarly analysis of artificial intelligence.
A sociotechnical imaginary is not marketing. Marketing is deliberate and instrumental — designed to sell a product to a specific audience. An imaginary is deeper and more structural. It is the shared dream that organizes a community's relationship to its own technological future, that determines which developments seem natural and which seem aberrant, which applications seem obvious and which seem perverse, which consequences seem acceptable and which seem intolerable. The imaginary does not describe the future. It shapes it, by determining what counts as progress and who counts as progressive.
The AI moment has produced several competing imaginaries, each coherent, each supported by evidence, each carrying political consequences that its adherents rarely acknowledge.
The productivity imaginary is the dominant narrative of the technology industry. In this vision, AI is an amplifier of human capability — a tool that collapses the distance between intention and artifact, that democratizes the capacity to build, that frees human beings from mechanical drudgery to concentrate on what machines cannot do: judgment, creativity, the irreducibly human work of deciding what is worth making. *The Orange Pill* is, among other things, a sustained and eloquent articulation of this imaginary. The imagination-to-artifact ratio approaches zero. The developer in Lagos gains access to the same creative leverage as the engineer in San Francisco. The twelve-year-old who asks "What am I for?" discovers that she is for the questions — for the wondering, the caring, the human direction of inhuman power.
The productivity imaginary is compelling because it is partly true. The capability expansion is real. The democratization, partial and precarious as it is, represents a genuine expansion of who gets to participate in the building of technological artifacts. The imaginary captures something that matters, and its adherents hold it with the conviction of people who have felt its truth in their own experience.
But Jasanoff's framework asks questions that the imaginary's adherents are structurally unable to ask from inside it. Whose vision of productivity is embedded in the tools? When Claude Code removes the friction of implementation, it removes a specific kind of friction — the kind that the tool's designers identified as an obstacle — and leaves other frictions untouched or invisible. The friction of securing funding, of navigating institutions, of overcoming the social barriers that determine whose ideas get taken seriously — these frictions are not addressed by a tool that makes coding faster. The imaginary promises democratization while delivering a specific, bounded version of it: democratization of implementation capability, within a system whose larger structure of power — who funds, who deploys, who captures value — remains undisturbed.
Researchers applying Jasanoff's framework to AI have documented the imaginary's operation across domains. In a study of AI recruitment technology companies in Italy, scholars identified three sociotechnical imaginaries — "the third eye, the river, and the car bonnet" — and found that each exhibited what the researchers described as "an overriding desire for productivity and talent capture" accompanied by "a consequential de-prioritisation of addressing social inequality." The imaginary shaped not just how the technology was marketed but how it was designed — what the system optimized for, what it measured, what it treated as noise. The social inequality was not an unintended consequence of the technology. It was a structural feature of the imaginary within which the technology was built.
The existential risk imaginary stands in dramatic opposition to the productivity narrative. In this vision, AI is an existential threat — a technology whose development, if not controlled, could lead to outcomes ranging from mass unemployment to civilizational collapse to human extinction. This imaginary dominates a specific community — AI safety researchers, some philosophers, certain public intellectuals — and it operates with a rhetorical structure that Jasanoff has identified as politically consequential. The threats are described in language that is vast, cinematic, and almost entirely devoid of causal specificity. "There is this coupling of the idea of extinction together with AI," Jasanoff observed, "but very little specificity about the pathways by which the extinction is going to happen."
The vagueness is not incidental. It is functional. A threat that cannot be specified cannot be governed through normal institutional means. It can only be addressed through extraordinary measures — moratoriums, international treaties, the concentration of governance authority in the hands of the experts who claim to understand the threat. The existential risk imaginary, whatever the sincerity of its adherents, produces a political outcome: the delegation of governance authority to a small community of technical experts who position themselves as the only people capable of understanding and managing the risk. Jasanoff would note that this is precisely the epistemic exclusion her framework is designed to surface — the replacement of democratic governance with expert governance, justified by the claim that the stakes are too high for democratic deliberation.
The democratic imaginary exists but is underdeveloped. In this vision, AI is governed not as a product to be deployed or a threat to be managed but as a constitutional question — a question about the kind of society that wants to exist on the other side of the transition. This imaginary does not begin with the technology's capabilities or its risks. It begins with the values a society holds and asks how AI should be designed, deployed, and governed to serve those values. It treats the choice between productivity optimization and democratic participation not as a technical decision but as a political one — a decision that belongs to the public, not to the engineers or the executives or the safety researchers.
This imaginary is Jasanoff's own, though she would resist claiming it as a personal vision rather than an analytical finding. Her comparative work has shown that different societies embed different values in their technological projects, and that the values shape the technology as much as the technology shapes the society. The AI moment demands a democratic imaginary not because democracy is inherently superior to technocracy — though Jasanoff would argue that it is, on grounds of legitimacy — but because the decisions being made about AI are too consequential, too pervasive, too deeply embedded in the fabric of social life to be delegated to any single community, however knowledgeable.
Scholars across multiple disciplines have adopted Jasanoff's framework to study what they term AI imaginaries. A study of public perceptions argued that a sociotechnical perspective for AI "allows for a deeper investigation of the social, economic and political roots" of competing visions, revealing that public attitudes toward AI are not simply reactions to the technology but expressions of deeper assumptions about human nature, social organization, and the purpose of technological capability. An analysis of AI risk governance narratives found that "discussions around the risks of artificial intelligence are shaped by narratives that define how we envision the role of technology in society" — narratives that "actively shape institutional agendas, funding priorities, and regulatory pathways." The concept has been extended into specialized applications: algorithmic imaginaries, platform imaginaries, each tracking how collectively held visions of technological futures shape the institutions that govern them.
The analytical power of the sociotechnical imaginary concept lies in its refusal to evaluate imaginaries as true or false. An imaginary is not a prediction that can be verified or falsified. It is a vision that organizes action — that determines what gets built, what gets funded, what gets regulated, and what gets ignored. The productivity imaginary is not wrong about capability expansion. The existential risk imaginary is not wrong about uncertainty. The democratic imaginary is not wrong about the need for public participation. Each captures a real dimension of the AI moment and suppresses others.
The governance challenge is not to determine which imaginary is correct. It is to build institutions that can hold competing imaginaries in productive tension — that can take the productivity vision's genuine insights about capability expansion, the existential risk vision's genuine insights about uncertainty, and the democratic vision's genuine insights about legitimacy, and forge from them governance decisions that are informed by all three without being captured by any one.
This is, to put it mildly, difficult. It requires institutions that are epistemically plural, politically legitimate, and structurally humble. It requires the recognition that the story a society tells about its technological future is not a description of what will happen but a blueprint for what will be built — and that the choice of blueprint is the most consequential political decision of the AI moment.
Jasanoff concluded her contribution to the STS program's twentieth anniversary by stating the imperative plainly: "We are designing these futures. We must learn how to govern them." The verb is precise. Not predict. Not manage. Govern — with all the democratic weight and democratic difficulty that the word implies.
---
In the summer of 2025, a twelve-year-old asked her mother: "What am I for?"
The question was not rhetorical. It was not philosophical in the way that philosophy is usually practiced — detached, contemplative, conducted at a safe distance from the conditions it examines. The question was existential in the precise sense: it concerned the child's existence, her place in a world that had rearranged itself around a technology whose capabilities seemed to leave no space for the things she could do. She had watched a machine compose music better than she could, write stories more fluently than she could, answer questions faster and more accurately than she could. The question was not abstract. It was urgent, personal, and fundamentally unanswerable by the institutions that were supposed to help her navigate the world.
No AI safety benchmark addresses this question. No risk classification system captures it. No cost-benefit analysis can assign it a value and weigh it against a productivity gain. The question exists in a space that the entire apparatus of AI governance — the frameworks, the assessments, the regulatory instruments — was not designed to reach.
Jasanoff's distinction between risk and uncertainty explains why.
Risk, in the technical sense that governs both insurance mathematics and regulatory frameworks, refers to outcomes that can be specified in advance and assigned probabilities. The risk of a bridge collapsing can be calculated from the properties of its materials, the loads it bears, and the environmental forces it endures. The risk of a pharmaceutical producing a specific side effect can be estimated from clinical trial data. The risk of an AI system generating toxic content can be measured through benchmark testing. Risk is the domain of prediction, and prediction is the domain of expertise. Experts can assess risks, rank them, and design mitigation strategies because the outcomes are knowable even if they are not yet known.
Uncertainty, in the equally technical sense that Jasanoff uses, refers to outcomes that cannot be specified in advance because they depend on interactions between systems whose behavior is emergent — arising from the interaction itself rather than from the properties of any individual component. Uncertainty is not risk with wider error bars. It is a categorically different epistemic condition. Under risk, you know what might happen and can estimate how likely it is. Under uncertainty, you do not know what might happen, because the relevant outcomes have not yet been imagined.
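The distinction can be put schematically; the notation below is illustrative and is not Jasanoff's own. Under risk, the possible outcomes can be listed and assigned probabilities, so an expected loss can be computed and compared against a threshold. Under uncertainty, the list itself is unavailable, so the computation has nothing to range over.

```latex
% Schematic contrast, not Jasanoff's notation; the outcomes O_i, probabilities p_i,
% and loss function L are illustrative placeholders.
% Risk: the outcome set is enumerable and the probabilities are estimable,
% so an expected loss exists and can be ranked against thresholds.
\[
  \text{Risk:}\qquad
  \mathbb{E}[\text{loss}] \;=\; \sum_{i=1}^{n} p_i \, L(O_i),
  \qquad \{O_1,\dots,O_n\}\ \text{known},\quad p_i\ \text{estimable}.
\]
% Uncertainty: the outcome set itself cannot be specified in advance,
% so there is no well-defined sum to compute.
\[
  \text{Uncertainty:}\qquad
  \{O_i\}\ \text{not specifiable in advance}
  \;\Longrightarrow\;
  \mathbb{E}[\text{loss}]\ \text{undefined}.
\]
```

The instruments discussed below, from risk classifications to safety benchmarks, all operate on the first form; Jasanoff's argument is that the consequences that matter most arrive in the second.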
The most important consequences of AI are uncertain in this precise sense. They arise from the interaction between the technology and the social order it co-produces, and that interaction generates outcomes that no participant in the interaction — not the designer, not the user, not the regulator — can specify in advance.
What happens to professional identity when expertise is commoditized? Not job loss — that is a risk, and it can be estimated, however roughly. Identity erosion — the slow dissolution of the felt sense that your skills matter, that your years of practice produced something durable, that the work you do is yours in a way that cannot be replicated by a machine and a prompt. This consequence cannot be specified in advance because it depends on how individuals, organizations, and cultures respond to the commoditization, and those responses are themselves uncertain. The senior engineer in Trivandrum who oscillated between excitement and terror for two days before finding his footing experienced one version of the identity renegotiation. Another engineer, in another context, with different institutional support and different personal resources, might experience something very different. The outcome is not predictable from the properties of the technology. It emerges from the interaction.
What happens to children's cognitive development when AI does their homework? Not cheating — that is a behavioral problem with behavioral solutions. The question concerns something deeper and less tractable: what happens to the capacity for sustained intellectual effort when the friction of that effort can be bypassed by a conversation with a machine? What happens to the relationship between struggle and understanding — the specific, neurologically grounded process by which difficulty deposits layers of comprehension that ease cannot produce? The developmental consequences of removing that friction from a child's education are uncertain in Jasanoff's sense. They cannot be specified in advance because they depend on interactions between cognitive development, pedagogical context, family environment, and cultural expectations that have never existed in this configuration before. There is no historical precedent. There is no clinical trial. There is no data set from which to extrapolate, because the condition is genuinely new.
What happens to democratic culture when AI generates persuasive content at scale? Not misinformation — that is a known threat with known, if imperfect, countermeasures. The deeper question concerns what happens to the epistemological foundations of democratic deliberation when the cost of producing persuasive text, persuasive images, persuasive arguments drops to zero and the volume of machine-generated content exceeds human-generated content by orders of magnitude. How do citizens evaluate claims when the production of claims has been decoupled from the experience, expertise, or conviction of any human claimant? This question is uncertain because the answer depends on how democratic institutions, media ecosystems, and individual citizens adapt to a condition that has no precedent and no model.
AI governance has been conducted almost entirely in the language of risk. The EU AI Act classifies systems by risk level. Safety benchmarks quantify the probability of specific harmful outputs. Alignment research measures the distance between a model's behavior and its designers' intentions. These are genuine and valuable governance instruments. They address the portion of AI's consequences that can be specified in advance and assigned probabilities.
But they do not address the portion that cannot. And Jasanoff's career-long argument is that the portion that cannot be specified — the uncertain portion, the emergent portion, the portion that arises from interactions no one has modeled — is typically where the most consequential outcomes live. The known risks of nuclear energy were not what destroyed public trust. What reshaped the trajectory of that technology were the uncertain consequences: Three Mile Island, Chernobyl, Fukushima, events that fell outside the risk models, that emerged from interactions the models did not capture, that produced social and political consequences the risk assessors had not imagined.
The distinction has immediate institutional implications. Risk can be managed through technical instruments: standards, benchmarks, audits, enforcement mechanisms. Uncertainty cannot be managed through technical instruments because the outcomes are not yet specified. Uncertainty requires a different institutional posture: continuous monitoring rather than point-in-time assessment, adaptive governance rather than fixed regulation, democratic deliberation rather than expert determination, and humility rather than confidence.
Humility, here, means designing governance institutions around the assumption that the most important consequences of AI have not yet been imagined. Not because the technology is mysterious, but because the interaction between the technology and the social order it co-produces generates emergent outcomes that no single perspective can predict. The builder cannot predict them from inside the building process. The regulator cannot predict them from inside the regulatory process. The citizen cannot predict them from inside the experience of living with the technology. Prediction itself is the wrong paradigm. Navigation is the right one — and navigation requires instruments designed for detecting what is actually happening, not instruments designed for predicting what might happen based on models of what has happened before.
The twelve-year-old's question — "What am I for?" — is the signature uncertainty of the AI moment. It is not a risk that can be mitigated. It is a consequence that emerges from the interaction between a child's developing sense of self and a technological environment that seems to render her capacities redundant. No benchmark captures it. No risk matrix addresses it. No cost-benefit analysis can weigh it against a productivity gain, because the units are incommensurable — one is measured in dollars and the other in meaning.
Jasanoff's insistence on the distinction between risk and uncertainty is not an argument for paralysis. It is an argument for a different kind of institutional intelligence — the kind that monitors rather than predicts, that adapts rather than plans, that listens to the people inside the transition rather than modeling the transition from outside it. The institutions that govern AI must be designed for uncertainty, which means they must be designed to learn, to revise, to incorporate knowledge that arrives slowly and in qualitative registers, to treat the absence of measurable harm not as evidence of safety but as evidence that the measurement instruments may be looking in the wrong place.
The child's question will be answered — not by a governance framework but by the accumulated choices of a civilization deciding, in real time, what it means to be human in the presence of machines that perform. Whether the answer preserves what Segal calls the candle in the darkness — the consciousness that wonders, that cares, that asks why — depends on whether the institutions we build now can govern under conditions of genuine uncertainty, with the humility to acknowledge what they cannot know and the democratic legitimacy to act wisely in spite of not knowing it.
In the late 1980s, the Danish Board of Technology convened a group of fifteen ordinary citizens — a nurse, a postal worker, a farmer, several retirees, a shopkeeper — and asked them to deliberate on the governance of a technology they did not build and could not fully understand. The technology was genetic engineering. The citizens were given briefing materials, access to expert witnesses whom they could question at length, and several weekends of structured deliberation. At the end of the process, they produced a consensus report that was presented to the Danish parliament.
The report was not technically sophisticated. It did not engage with the molecular biology at the level that the researchers would have preferred. It did, however, identify consequences that the expert community had not anticipated — consequences related to agricultural labor, to the relationship between Danish farmers and the seed companies that would control genetically modified organisms, to the felt experience of eating food whose provenance was no longer legible to the people who consumed it. The parliament incorporated several of the citizens' recommendations into subsequent legislation.
The Danish consensus conference became the founding model of what scholars of technology governance call participatory technology assessment — the practice of including affected communities in the evaluation and governance of technologies whose consequences extend beyond the domain of technical expertise. Jasanoff has studied these practices across decades and across political cultures, and her assessment is both encouraging and demanding. Participatory assessment works. It produces governance that is both better informed and more legitimate than expert-only governance. But it works only under specific institutional conditions, and those conditions are extraordinarily difficult to maintain — especially when the technology in question moves faster than any deliberative process ever designed.
The AI moment makes the case for participatory assessment with unusual force. The decisions being made about how AI enters workplaces, classrooms, creative industries, and democratic institutions are decisions whose consequences fall on communities that have had no voice in shaping them. The engineers in Trivandrum whose job descriptions changed in a week were not consulted about whether the change was desirable. The students whose educational experience is being restructured by AI tools were not asked what they need from the restructuring. The parents whose children encounter AI-generated content and AI-assisted homework have no institutional channel through which to contribute their knowledge of what the encounter costs. The citizens whose democratic culture is being reshaped by machine-generated persuasion have no forum in which to deliberate about what kind of information environment they want to inhabit.
This absence of participation is not the result of malice or indifference on the part of the people who build and deploy AI. It is the result of institutional design. The institutions that govern AI — corporate boards, regulatory agencies, legislative committees, standards bodies — are designed to process expert input. They have channels for technical testimony, economic analysis, and legal assessment. They do not have channels for the experiential knowledge that Jasanoff's framework identifies as essential: what it feels like to work alongside a machine that performs your expertise better than you do, what it means for a child's developing sense of self to encounter a tool that answers every question before the question is fully formed, what it costs a community when the economic foundation on which it depends is repriced overnight by a technological transition it did not choose and could not influence.
The participatory models that exist — citizens' assemblies, deliberative polls, public comment periods, stakeholder consultations — each capture some portion of the needed knowledge but fall short of the standard Jasanoff's framework sets.
Public comment periods, the most common participatory mechanism in American regulatory practice, are the thinnest form of participation. They invite input but impose no obligation to incorporate it. The comments are received, acknowledged, and filed. The decisions are made by the same experts who would have made them without the comments, influenced occasionally by well-organized interest groups whose submissions resemble the expert input the institution already knows how to process. The experiential knowledge of ordinary citizens — expressed in language the institution does not recognize as evidence, addressing consequences the institution's framework does not categorize as relevant — has no effective path from submission to decision.
Stakeholder consultations, favored by international organizations and some national regulators, are somewhat more substantive. They bring representatives of affected communities into structured dialogue with technical experts and policymakers. But the representatives are usually selected by the convening institution, which means the selection reflects the institution's existing understanding of who is affected — an understanding that may not include the populations whose vulnerability is least visible. The developer in Lagos appears in the productivity imaginary as a beneficiary of democratization. In a stakeholder consultation convened by a Western governance institution, she would likely not appear at all, because the institution's map of affected stakeholders reflects its own geography of attention.
Citizens' assemblies represent the most robust model. Ireland's Citizens' Assembly on abortion (2016-2017) and France's Citizens' Convention on Climate (2019-2020) demonstrated that randomly selected citizens, given adequate information, time, and institutional support, can deliberate on complex and divisive issues and produce recommendations that carry democratic legitimacy precisely because they emerge from a process that does not privilege any particular expertise or interest. The Irish assembly's recommendations led to a constitutional referendum that passed with sixty-six percent support — an outcome that the political system had been unable to produce through conventional legislative processes for decades.
Could a citizens' assembly deliberate on AI governance? The question is not hypothetical. Several jurisdictions have begun experimenting with deliberative processes for digital governance — including a citizens' panel on algorithms convened by the city of Amsterdam and a national dialogue on AI conducted by the Finnish government. These experiments are instructive, and their limitations are as revealing as their successes.
The primary limitation is temporal. A citizens' assembly requires months of preparation, weeks of deliberation, and additional months for the recommendations to enter the governance process. The AI capabilities that the assembly deliberates about will have changed significantly by the time the recommendations are implemented. The French climate convention deliberated for nine months. In nine months of AI development, the landscape can shift so fundamentally that recommendations drafted in January may address a world that no longer exists in October.
Jasanoff acknowledges this temporal mismatch but resists the conclusion that speed makes participation impossible. Her counter-argument has two dimensions.
The first is that the most important governance questions about AI are not about specific capabilities but about values and priorities — questions that do not change as rapidly as the technology. The question of whether productivity gains should be captured as corporate profit or distributed as shared prosperity is not a question that becomes obsolete when a new model is released. The question of what kind of educational experience children deserve in an AI-saturated environment does not change when the tool version is updated. The question of who bears the cost of technological displacement does not depend on which specific technology is doing the displacing. These are constitutional questions — questions about the kind of society that is being built — and constitutional questions are the questions for which democratic deliberation is most essential and most effective.
The second dimension is institutional design. Jasanoff's framework does not propose that every governance decision about AI should be subjected to a citizens' assembly. It proposes that governance institutions should be designed to incorporate participatory knowledge continuously — not as a one-time event but as an ongoing institutional practice. This requires standing mechanisms, not ad hoc convocations. Advisory bodies that include affected community representatives with genuine authority, not merely consultative status. Monitoring systems that detect emergent consequences and surface them for democratic deliberation before they compound into crises. Learning mechanisms that treat governance decisions as provisional and subject to revision in light of experience — including the experience of the people whose lives those decisions shape.
The Nordic model of co-determination in workplace governance offers a relevant precedent. In Sweden, Denmark, and Norway, workers have institutional representation in corporate decision-making — not just through collective bargaining over wages and conditions, but through board membership, works councils, and statutory requirements for consultation before major organizational changes. When a Swedish company deploys AI tools that restructure work processes, the workers affected by the restructuring have a legal right to participate in the decision — not merely to be informed of it after the fact.
This model does not solve every problem. It operates at the organizational level, not the societal level. It relies on labor institutions — unions, works councils — that may not exist in the gig economies and contractor relationships that characterize much of the AI-affected workforce. It assumes a stable employment relationship that AI itself is disrupting. But it embodies a principle that Jasanoff's framework identifies as essential: the people whose work is being transformed by a technology deserve institutional standing in the governance of that transformation. Not as beneficiaries of a benevolent builder's choices. As participants in decisions that concern their own lives.
Community benefit agreements, common in infrastructure development, offer another model. When a major development project — a highway, a stadium, an industrial facility — transforms a community, the community can negotiate an agreement that specifies how the project's benefits will be shared and its costs mitigated. The agreement is legally binding and enforceable. It transforms the community from a passive recipient of consequences into an active participant in shaping them.
Could a similar model be applied to AI deployment? When a company deploys AI tools that restructure a local labor market, could the affected community negotiate an agreement specifying retraining commitments, economic transition support, and ongoing monitoring of consequences? The institutional design is non-trivial, but the principle is sound: the people who bear the costs of a technological transition deserve a voice in how that transition is managed, and that voice should have institutional force, not merely advisory status.
Jasanoff's insistence on participation is not naive about the difficulties. Democratic deliberation is slow, contentious, and imperfect. It produces compromises that satisfy no one fully. It requires institutional infrastructure that is expensive to build and maintain. It asks citizens to engage with complex and unfamiliar subjects, which demands educational resources that are themselves scarce.
But the alternative — governance by builders alone, however well-intentioned — produces dams that serve the builder's vision. The Trivandrum training produced extraordinary capability expansion. It also produced a restructuring of professional identity, a redistribution of authority, and a redefinition of expertise that twenty people absorbed without institutional recourse. Their adaptation was impressive. Their consent was assumed.
Jasanoff's framework asks: What would governance look like if their consent were required rather than assumed? If the decision about how AI restructures work were made with the workers rather than for them? If the dam were built not by the beaver alone but by the ecosystem it claims to serve?
The answer is harder, slower, messier, and more legitimate than the alternative. And the legitimacy matters — not as an abstraction but as the condition under which people accept the restructuring of their lives as just rather than imposed, as chosen rather than inflicted.
---
The most dangerous consequences of a powerful technology are usually the ones that no one measured because no one thought to look. Jasanoff's career has been a sustained argument that governance institutions must be designed for exactly this condition — not for the risks that can be predicted and quantified, but for the consequences that emerge slowly, invisibly, below the threshold of institutional attention, and become visible only after they have compounded into damage that is difficult or impossible to reverse.
The slow violence of environmental degradation provided the paradigm case. For decades, the consequences of industrial pollution accumulated in bodies, communities, and ecosystems without triggering any governance response, not because the harm was negligible but because it was distributed across space and time in patterns that existing monitoring systems could not detect. The people who lived downstream from the factory knew something was wrong — their children were sick more often, the fish had disappeared from the river, the water tasted different — but their knowledge was qualitative, experiential, and inadmissible in governance frameworks calibrated for quantitative evidence. By the time the quantitative evidence caught up with the experiential knowledge, the damage had been done.
Jasanoff has observed the same pattern across technologies. The consequences that governance frameworks are designed to detect — the acute, the measurable, the attributable — are rarely the consequences that matter most. The consequences that matter most are chronic, distributed, and causally complex. They resist attribution to a single source. They accumulate below the threshold of crisis. They manifest in registers — identity, meaning, relationship, cognitive capacity — that existing measurement systems do not capture.
The AI moment is producing consequences of exactly this kind. They are not the consequences that dominate the governance conversation — not the toxic outputs, the deepfakes, the privacy violations, the market concentration. Those consequences are real and important, and they are the consequences that existing governance instruments are designed, however imperfectly, to address.
The consequences that governance cannot see are different. They are the slow restructuring of what it means to know something. The gradual atrophy of cognitive capacities that are exercised only through friction. The quiet displacement of human relationships by machine interactions that are more convenient, more available, and less demanding. The erosion of the distinction between what a person thinks and what a machine generates — an erosion that does not announce itself as a crisis but accumulates in the daily experience of a hundred million users who increasingly cannot tell whether the thought they are having is one they arrived at through their own cognitive effort or one that was suggested by a system optimized for plausibility.
Segal described this erosion with unusual honesty in *The Orange Pill*. The Deleuze passage that Claude produced — elegant, well-crafted, philosophically wrong in a way that was invisible until subjected to careful examination — exemplified a consequence that no safety benchmark captures. The consequence is not that the AI produced incorrect information. Incorrect information is a known risk with known countermeasures. The consequence is that the quality of the prose made the author unable to distinguish between genuine insight and sophisticated pattern-matching — and that this inability, repeated across millions of interactions by millions of users, produces a cumulative erosion of the epistemic standards that underlie every form of knowledge production, from journalism to scholarship to democratic deliberation.
How does a governance institution detect this erosion? It does not appear in any metric currently collected. No company reports on the rate at which its users confuse AI-generated plausibility with genuine understanding. No regulator has a framework for assessing the long-term cognitive effects of outsourcing intellectual effort to machines. No educational institution has instruments for measuring the difference between a student who understands a subject and a student who has learned to produce AI-assisted artifacts that look like understanding.
The Berkeley study that *The Orange Pill* analyzed came closest. By embedding researchers in a workplace for eight months, observing behavior directly rather than relying on surveys or metrics, the study detected consequences — task seepage, boundary erosion, attention fragmentation — that would have been invisible to any remotely administered assessment. But eight months is too short to detect the slow-accumulation consequences that matter most. The erosion of embodied expertise, the atrophy of debugging intuition, the gradual hollowing of professional identity — these are consequences that unfold over years, not months, and they require monitoring instruments designed for detection at that timescale.
Jasanoff's framework suggests that governing what you cannot see requires three institutional capacities that the current AI governance landscape almost entirely lacks.
The first is longitudinal monitoring. Not point-in-time assessments that capture a snapshot of AI's impact but ongoing, systematic observation of how the interaction between AI and human communities evolves over time. This monitoring must be epistemically plural — designed to capture quantitative data (productivity, employment, skill profiles) alongside qualitative data (experiential accounts, narrative evidence, the knowledge of people living inside the transition). It must be funded independently of the companies whose products it monitors, because the companies have structural incentives to measure what makes their products look beneficial and to ignore what does not. And it must be designed for slow detection — for identifying consequences that emerge gradually, below the threshold of crisis, in registers that existing monitoring systems do not capture.
The second is institutional reflexivity — the capacity of governance institutions to examine and revise their own assumptions. The AI governance frameworks currently under construction embed assumptions about what consequences matter, what evidence is relevant, and whose knowledge counts. These assumptions were formed in a pre-AI governance context, and they may be inadequate to the consequences that AI actually produces. A reflexive institution would treat its own assumptions as provisional, subject to revision in light of evidence that challenges them. It would ask, regularly and systematically, whether the consequences it is monitoring are the consequences that matter most, or whether the most important consequences are the ones it has not yet learned to see.
The third is what Jasanoff calls responsive governance — governance that treats its own decisions as experiments rather than settlements. Every governance decision about AI is, in a meaningful sense, provisional. The technology changes. The social context changes. The consequences evolve. A governance framework designed for a static technology in a stable social context will produce increasingly inappropriate decisions as both the technology and the context change. Responsive governance builds revision into its own structure — not as an admission of failure but as a recognition that governing under uncertainty requires the capacity to learn from consequences that were not anticipated and to adjust course in light of what that learning reveals.
None of these capacities exists at the scale the AI moment demands. Longitudinal monitoring of AI's social consequences is in its infancy. Institutional reflexivity is rare in any governance domain and almost entirely absent in AI governance. Responsive governance is advocated in principle and resisted in practice, because revision feels like instability and governance institutions are culturally oriented toward permanence.
Jasanoff's observation that "we are designing these futures" carries a corollary that is less often drawn: the design must include the capacity to redesign. A future designed once and governed thereafter by the assumptions embedded at the moment of design will become progressively less adequate to the reality it governs. A future that is continuously redesigned in light of experience — including the experience of the people who live inside it — has a chance of remaining adequate, not because the designers are wiser but because the design process is humbler.
The sunrise at the end of *The Orange Pill* is seen from the top of the tower. The view is real, and it is genuinely beautiful — the expanded capability, the democratized access, the creative leverage that AI makes possible. Jasanoff's contribution is not to deny the beauty but to ask who else should be looking at it, and from where. The worker whose expertise was commoditized before the retraining arrived. The parent whose child's school has no AI governance framework. The community whose economic foundation was repriced by a technological transition it did not choose.
The view from below is different from the view from above. The consequences that are invisible from the roof of the tower are the lived reality of the people standing on the ground. Governance that cannot see those consequences — that monitors only what is visible from the positions of builders and regulators — will produce decisions that look rational from above and feel unjust from below.
Jasanoff's concept of technologies of humility is, at its core, a concept of technologies of attention — institutional structures designed not to predict the future but to notice the present, in all its complexity and contradiction, including the parts that no dashboard captures and no benchmark measures. The most important question of the AI moment is not a technical question or an economic question or even a philosophical question. It is a democratic question: Who decides?
And that question can only be answered well if the institutions that answer it are designed to hear every voice that has knowledge to contribute — not just the voices that speak the language of technical governance, but the voices that speak the language of experience, of vulnerability, of lived consequence.
The governance of artificial intelligence is too important to be left to the people who build it. It is too important to be left to the people who regulate it. It is too important to be left to any single community, however knowledgeable, however well-intentioned.
It belongs to all of us. And building the institutions that can hold that collective ownership — institutions that are technically competent, democratically legitimate, epistemically humble, and designed for the slow detection of consequences that no one yet knows how to measure — is the work of this generation.
Whether we do it will determine what kind of society we become. Not what technologies we possess. What kind of people we choose to be, and whether the institutions we build are worthy of the choices that lie ahead.
---
The consent I never thought to ask for was my own team's.
Jasanoff made me see it — not in her academic prose, but in the space between her concepts and my memory of a room in Trivandrum. Twenty engineers. A hundred dollars a month per seat. By Friday, each one operating with the leverage of a full team. I wrote about that week in *The Orange Pill* as a moment of transformation, and it was. But Jasanoff's framework forced a question I had managed to avoid: transformation for whom, decided by whom, on whose terms?
I chose to keep and grow the team. I wrote about that choice as an act of stewardship. I still believe it was. But Jasanoff's co-production framework showed me something I had missed: the twenty-fold productivity gain did not just change what those engineers could produce. It changed what an engineer was. The backend specialist became a generalist overnight. The senior architect watched twenty-five years of accumulated intuition get repriced in real time. These were not side effects of the technical change. They were the change itself — technical and social, produced simultaneously, in the same room, during the same week.
And I was the one who decided it would happen. Not maliciously. Not carelessly. But unilaterally.
The word that Jasanoff introduced into my vocabulary was not "humility," though that is the concept she is most known for. The word was "legitimacy." The question is not whether the dam is well-built. The question is whether the people downstream — the ones whose world the dam reshapes — had any say in where it was placed. I built a good dam. I am less certain I built a legitimate one.
This is uncomfortable. I do not write it to perform contrition. I write it because the discomfort is the point. The builder's fishbowl — my fishbowl — contains an assumption so pervasive it functions as oxygen: that expanding capability is self-evidently good, that the people whose work is transformed will recognize the transformation as liberation, and that the builder's vision is sufficient grounds for action. Jasanoff cracks that glass. Not to shatter it — she is too precise for destruction — but to show that the glass is there. That what I see as the natural shape of the world is actually the curvature of a container I did not choose and have rarely examined.
Her distinction between risk and uncertainty haunts me most. The risks of AI — toxic outputs, misinformation, market disruption — those I can measure, monitor, mitigate. The uncertainties — what happens to a twelve-year-old's sense of purpose when a machine can do everything she was learning to do, what happens to the texture of a life lived inside productive addiction, what happens to democratic culture when persuasive content costs nothing to produce — those I cannot measure. They do not appear on any dashboard I own. They accumulate in the lived experience of people whose names I will never know, in registers my instruments cannot detect.
I am still building. I will always be building — that is the beaver's nature, and I have made my peace with it. But Jasanoff taught me that building well is not enough. Building must also be building with — with the people whose lives the building reshapes, whose knowledge the builder cannot access alone, whose consent transforms an act of construction into an act of governance.
The institutions do not yet exist. But the understanding does. And understanding confers obligation.
— Edo Segal
AI governance is being built right now — in boardrooms, in regulatory agencies, in the design choices of engineers who move faster than any legislature can follow. The decisions being made will reshape work, education, identity, and democratic culture for a generation. The people most affected by those decisions have almost no voice in making them.
Sheila Jasanoff has spent four decades studying what happens when powerful technologies are governed by the people who build them, without meaningful participation from the people who live with the consequences. Her framework — co-production, civic epistemology, technologies of humility — reveals the invisible architecture of authority that determines whose knowledge counts, whose experience matters, and whose future gets built. Applied to the AI moment described in Edo Segal's *The Orange Pill*, her work exposes the democratic deficit at the heart of the most consequential technological transition in human history.
This is not a book about slowing down. It is a book about who gets to steer.
— Sheila Jasanoff

A reading-companion catalog of the 24 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Sheila Jasanoff — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →