Vulnerability analysis is the second of Jasanoff's four technologies of humility. It asks: Who is most exposed to consequences, and how do they differ from the populations the technology's designers had in mind? The question recognizes that technologies are designed, tested, and refined for specific user populations — and that when they reach populations with different resources, different infrastructures, different vulnerabilities, the consequences diverge from what designers predicted. AI tools are designed predominantly by English-speaking knowledge workers in wealthy countries. Vulnerability analysis asks what happens when the tools reach populations that do not share that demographic, economic, and educational profile: the developer in Lagos whose access depends on infrastructure she does not control, the displaced expert whose identity is organized around expertise AI has commoditized, the child whose developmental needs were not considered in the tool's design. Vulnerability is not uniform, and governance that treats it as uniform will protect those who need protection least while overlooking those who need it most.
Jasanoff developed vulnerability analysis through her study of environmental justice conflicts, where pollution burdens fell disproportionately on communities that had the least political power to resist them. The pattern was structural: facilities requiring hazardous waste disposal were sited in communities with low property values, weak political organization, and populations whose complaints carried less weight in regulatory proceedings. The consequence was a distribution of environmental harm that tracked distributions of race and class — not because regulators intended discrimination but because the vulnerability of affected communities was not incorporated into siting decisions. By the time the harm became undeniable, the facilities had been operating for years.
Applied to AI, vulnerability analysis reveals distributional asymmetries the productivity imaginary conceals. The Orange Pill presents the developer in Lagos as a beneficiary of democratization: she gains access to the same coding leverage as the engineer at Google. This is true and important. But vulnerability analysis asks what happens when her access depends on infrastructure she does not control — connectivity that is intermittent, pricing that can change without notice, terms of service written in a foreign legal system, platform decisions made by executives who have never visited her country. Her capability is real. Her precarity is equally real. The democratization narrative captures the first and suppresses the second.
The displaced expert is vulnerable in a different register. The senior software architect whose expertise has been commoditized by AI possesses economic resources the Lagos developer lacks. His vulnerability is not material but existential — the dissolution of the professional identity around which his life was organized. When Segal describes the senior engineer in Trivandrum who oscillated between excitement and terror, he is documenting a vulnerability that no economic analysis captures: the vertigo of watching the ground of your professional self dissolve in real time. This vulnerability cannot be addressed through retraining programs or unemployment insurance because it concerns identity, not capability.
Children represent the population whose vulnerability is most systematically ignored in AI governance. The twelve-year-old who asks 'What am I for?' is experiencing a developmental crisis produced by encountering machines that perform the capabilities she is in the process of building. Her vulnerability is neither economic nor professional but developmental — and it is nearly invisible to governance frameworks designed by and for adults, calibrated to adult concerns (employment, productivity, economic growth), and operating on timescales (quarterly reviews, annual budgets) that bear no relationship to the timescales of child development. By the time longitudinal studies document the developmental consequences of AI saturation, an entire cohort will have passed through childhood in an ungoverned experimental condition.
Vulnerability analysis as a governance practice emerged from environmental justice scholarship and feminist ethics of care. Jasanoff synthesized these traditions with her own comparative research on how different societies identify and protect vulnerable populations, showing that vulnerability is not a natural category but a socially constructed one — different governance frameworks recognize different vulnerabilities and protect them with different intensity.
Designers imagine specific users. Every technology is designed for someone — a population with assumed resources, capabilities, and contexts — and when the technology reaches populations who do not match those assumptions, consequences diverge unpredictably.
Vulnerability is multidimensional. Economic vulnerability, infrastructural precarity, identity dissolution, developmental exposure — these are distinct and require different protective responses that no single governance instrument can provide.
The most vulnerable are least visible. Populations most exposed to harm are often the populations least represented in governance conversations, creating a systematic bias toward protecting those who need protection least.
Protection requires specificity. Governance adequate to vulnerability must identify specific populations facing specific risks and design specific protections — not generic safeguards that assume uniform exposure.