The evidentiary asymmetry is the systematic difference in how governance institutions treat two kinds of evidence about technology's consequences. Quantitative evidence — adoption rates, productivity metrics, benchmark scores, economic indicators — is admitted, weighted heavily, and translated directly into policy. Qualitative evidence — experiential accounts, narrative testimony, the felt sense of transformation, the meanings people construct around their changing relationship to work — is acknowledged in principle and ignored in practice, treated as anecdote rather than evidence. This asymmetry is not accidental but structural: governance institutions are designed to process quantitative inputs because they can be compared, aggregated, and translated into rules. Qualitative inputs resist these operations and are therefore institutionally inadmissible, regardless of their truth or relevance. The asymmetry produces governance that optimizes the measurable while ignoring the meaningful — decisions that look rational on spreadsheets and feel unjust to the people who live with them.
Jasanoff identified the evidentiary asymmetry through her study of how regulatory agencies assess scientific claims. In Science at the Bar (1995), she examined courtroom disputes over toxic torts, forensic evidence, and environmental harm, showing that legal institutions privilege certain forms of evidence (peer-reviewed studies, quantitative data, expert testimony) while excluding others (community knowledge, lay observation, qualitative patterns). The exclusion was not because the excluded evidence was false but because it could not be processed by the evidentiary architecture of the legal system — an architecture designed for adversarial testing of discrete factual claims, not for the integration of distributed, qualitative, experientially grounded knowledge.
Applied to AI governance, the asymmetry is pervasive. The Berkeley study that The Orange Pill analyzes produced quantitative findings: AI increased task volume by 27%, colonized pauses, fractured attention. These findings entered governance conversations, were cited in policy discussions, and shaped corporate AI practice frameworks. But the study could not determine whether the additional work was trivial or genuinely new, whether workers found it more or less satisfying, whether the capability gain was worth the boundary erosion. These questions are qualitative, and the study's evidentiary framework was quantitative. The asymmetry meant that half the phenomenon was measured and the other half was treated as unmeasurable and therefore ungovernable.
The asymmetry operates at every scale. Individual workers report productive addiction — the inability to stop building, the collapse of work-life boundaries, the exhilaration that curdles into depletion. These reports circulate on Substack, on X, and in private conversations. They describe a real phenomenon with real consequences. But they do not enter governance frameworks because they are not quantitative. No dashboard measures the rate at which flow becomes compulsion. No benchmark tracks the erosion of the capacity for presence. The phenomenon remains ungoverned not because it is unimportant but because it is inadmissible.
Jasanoff's 2024 observation about the AI discourse captures the asymmetry with precision: 'There's a disconnect between the kind of talk we hear about threat and the kind of specificity we hear about the promises.' The promises are quantified with exquisite precision — productivity gains, cost reductions, capability expansions. The threats are described in vague, cinematic language — extinction, superintelligence, civilizational collapse. The asymmetry is not random. Specific promises and vague threats produce a governance environment in which promises can be pursued immediately (because they are specific and measurable) while threats can be acknowledged in principle and deferred in practice (because they are vague and unactionable). The governance conversation is structured, before it begins, by evidentiary standards that determine what can be taken seriously.
The concept is implicit in Jasanoff's earliest work but was formalized in her 2007 essay 'Technologies of Humility Revisited,' where she argued that the dominance of quantitative evidence in regulatory science reflects not its epistemological superiority but a consequence of institutional design. Regulatory frameworks are built to process numbers because numbers can be compared, ranked, and translated into enforceable standards. The institutional architecture determines what counts as evidence, and the determination is political — a choice about whose knowledge matters — disguised as a technical requirement.
Quantitative bias is institutional, not epistemological. Governance frameworks privilege quantitative evidence not because it is inherently superior but because institutional processes are designed to handle it — and the design choice excludes forms of knowledge that matter enormously.
The excluded evidence is often the most important. The consequences governance cannot measure — identity erosion, meaning displacement, cognitive atrophy, relational costs — are frequently more consequential than the consequences it can measure.
Asymmetry produces illegitimacy. When governance systematically excludes the evidence that affected communities can provide, those communities experience the resulting decisions as imposed rather than legitimate — technically competent but democratically inadequate.
Redesign is required, not data collection. Closing the evidentiary gap does not mean collecting more quantitative data about qualitative phenomena; it means redesigning governance institutions to process qualitative evidence as evidence rather than treating it as anecdote.