Responsive governance is Jasanoff's institutional model for decision-making under conditions of genuine uncertainty. It treats every governance choice as provisional rather than final — as a hypothesis to be tested through deployment, monitored for consequences, and revised in light of what monitoring reveals. The model requires four institutional capacities: mechanisms for detecting emergent consequences (including consequences no one predicted), processes for incorporating new evidence into governance revisions, authority to revise decisions without waiting for crisis, and cultural acceptance that revision is not failure but appropriate response to learning. Responsive governance stands in contrast to the stability paradigm that dominates most regulatory frameworks, which treat rules as permanent settlements requiring extraordinary justification to change. For AI, where capabilities and consequences evolve faster than those of any previous technology, the stability paradigm guarantees obsolescence. Responsive governance offers an alternative: institutions designed to learn at the pace reality demands.
Jasanoff introduced responsive governance in dialogue with adaptive management frameworks from ecology (C.S. Holling) and learning-organization theory from management (Peter Senge). Her distinctive contribution was to apply the learning paradigm to democratic institutions governing science and technology — showing that the same principles that make organizations and ecosystems adaptive can be embedded in regulatory frameworks, if the political culture can accept that governance is an ongoing practice rather than a completed design.
The EU AI Act exemplifies the stability paradigm. It classifies AI systems into risk categories, imposes requirements on each category, and establishes enforcement mechanisms. The framework is comprehensive and legally binding, yet it was nearly obsolete before implementation: drafted before the generative AI explosion, it has limited capacity to revise its risk classifications as capabilities evolve. The Act's designers anticipated this problem and included provisions for updating the annexes listing high-risk applications. But the updating process operates on a timeline of years, while AI capabilities evolve on a timeline of months. The mismatch is structural.
Responsive governance would approach the problem differently. Rather than attempting to classify all AI applications in advance, it would establish monitoring systems designed to detect consequences as they emerge — both the quantifiable risks the Act already addresses and the emergent, qualitative consequences (professional identity disruption, cognitive atrophy, developmental effects) that classification systems cannot anticipate. The monitoring would be continuous, epistemically plural (incorporating quantitative and qualitative evidence), and institutionally empowered (able to trigger governance revisions without waiting for the next legislative cycle).
The practical challenge is cultural and institutional. Democratic institutions are designed for stability — rules that persist, precedents that bind, decisions that are not lightly reversed. This design reflects legitimate values: predictability, equal treatment, protection against arbitrary power. But the design assumes that the world being governed changes slowly enough that stable rules remain appropriate. AI violates that assumption. A rule appropriate in January may be obsolete in October, not because the rule was poorly designed but because the reality it governs has transformed. Responsive governance requires democratic institutions to accept that revision is not instability but appropriate adaptation — and that the risk of premature commitment (locking in rules that become inappropriate) may exceed the risk of continued flexibility (delaying rules until consequences are better understood).
Jasanoff's framework identifies longitudinal monitoring as the essential infrastructure for responsive governance. The Berkeley study operated for eight months — long enough to detect task seepage and attention fragmentation but too short to detect the consequences that matter most: skill atrophy, identity erosion, the gradual transformation of what it feels like to be a professional in an AI-saturated field. Monitoring adequate to these consequences requires years of sustained observation, epistemically plural methods (ethnography alongside metrics), and institutional commitment to funding research whose findings may challenge the interests of the institutions being studied. No such monitoring system exists at the scale the AI moment requires, and building it is among the most urgent governance tasks.
Beyond adaptive management and learning-organization theory, the concept also draws on reflexive modernization from sociology (Giddens, Beck): the claim that modern institutions can, and must, turn their analytical tools on themselves. In Jasanoff's synthesis, governing under uncertainty means treating every decision as a hypothesis subject to empirical revision.
Governance decisions are hypotheses. Every rule, standard, and framework is a prediction about what will produce good consequences — and should be treated as provisional, subject to revision when evidence contradicts the prediction.
Monitoring must be continuous and plural. Detecting emergent consequences requires ongoing observation using both quantitative metrics and qualitative evidence, funded independently of the entities being monitored.
Revision is appropriate, not failure. Democratic institutions must accept that changing course in light of new evidence is the sign of a learning system, not a failed system.
The stability paradigm guarantees obsolescence. For technologies developing as rapidly as AI, governance frameworks designed for permanence will be inappropriate before they are implemented — making responsiveness a requirement, not an option.