Structural vigilance is the modification that incrementalism must adopt to address the specific failure mode that gradual disempowerment identifies. Standard incrementalism evaluates each intervention against its marginal consequences. Structural vigilance adds a second evaluative criterion: does this step preserve or erode the capacity for future democratic choice? That question transforms incrementalism from a method of policy-making into a method of institutional self-preservation. It is also a harder question, because it requires looking past immediate consequences to the cumulative effect of many steps on the conditions under which future steps will be chosen.
The concept responds to a genuine challenge to the incrementalist framework. The January 2025 paper on gradual disempowerment by Jan Kulveit and colleagues argued that the most dangerous pathway to catastrophic AI outcomes is the slow, incremental one: step by step, each individually reasonable, humanity cedes decision-making authority to AI systems until the cumulative transfer becomes effectively irreversible. The paper described a catastrophe that arrives through the very mechanism incrementalism prescribes.
Standard incrementalism evaluates each step individually. The systemic risk of gradual disempowerment is a property of the sequence, not of any element within it. Each step passes the standard test; the sequence produces catastrophe. The failure mode is not addressed by evaluating steps more carefully — the evaluation is correct on its own terms. It is addressed by adding a second evaluative dimension: the step's effect on the system's future choice capacity.
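The sequence-versus-step distinction can be made concrete with a toy numerical sketch. The code below is purely illustrative and not drawn from the source: the thresholds, the marginal_test and vigilance_test functions, and the idea of tracking a single "human capacity" number are invented assumptions, chosen only to show how every step can pass a marginal test while the sequence erodes the capacity for reversal unless a second, optionality-preserving check is applied.

```python
# Toy model (illustrative only; all numbers and names are invented).
# Each delegation step passes a standard marginal cost-benefit test,
# yet the sequence steadily erodes human decision-making capacity.
# Structural vigilance adds a second check on remaining reversal capacity.

def marginal_test(benefit: float, cost: float) -> bool:
    """Standard incrementalism: accept a step if its own benefit exceeds its cost."""
    return benefit > cost

def vigilance_test(capacity_after_step: float, reversal_floor: float = 0.3) -> bool:
    """Structural vigilance: also require that enough capacity remains to reverse course."""
    return capacity_after_step >= reversal_floor

human_capacity = 1.0  # share of decision-making authority still exercised by humans

for step in range(1, 21):
    benefit, cost = 1.0, 0.4   # each step looks clearly worthwhile on its own terms
    ceded = 0.05               # and cedes only a small slice of authority

    if not marginal_test(benefit, cost):
        break  # never triggers: every individual step passes the standard test

    if not vigilance_test(human_capacity - ceded):
        print(f"step {step}: halted, reversal capacity would fall below the floor")
        break

    human_capacity -= ceded
    print(f"step {step}: accepted, remaining human capacity {human_capacity:.2f}")

# Without the vigilance check, all 20 steps are accepted and capacity falls to 0.0;
# with it, the sequence halts while reversal is still a live option.
```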
Applied to AI governance, structural vigilance asks: Does this step preserve human competence in the domain being automated? Does it maintain institutional knowledge? Does it preserve the democratic capacity to redirect the process — political institutions, regulatory expertise, public understanding? These are not comprehensive questions. They do not require modeling the full trajectory of AI development. They require asking, at each step, whether reversal would still be an available option five years from now.
The safeguards that structural vigilance implies are themselves incremental. Maintaining parallel human capacity in critical domains during the transition — not because human capacity is more efficient but because its preservation keeps reversal available. Requiring that AI systems in governance contexts be designed for interpretability — not because transparency is an abstract good but because opacity is the mechanism through which democratic oversight atrophies. Investing in public technical literacy — not because every citizen needs to understand transformer architectures but because a citizenry that cannot evaluate expert claims cannot exercise democratic agency over AI governance.
The concept is developed in the Lindblom volume as a response to the Kulveit et al. paper. It extends Lindblom's framework with a criterion the original formulation did not emphasize, because in every previous technological transition the necessity of human participation was never in question. The AI transition is the first in which it is.
Reflexive criterion. Incrementalism must apply its own principles to itself — evaluating whether the process of incremental adjustment is being preserved or eroded.
Optionality evaluation. Each step is evaluated not only on consequences but on its effect on future optionality.
Protecting the method. Democratic muddling through is one of the systems whose alignment depends on human participation. Structural vigilance protects the method from being disempowered by the transition it is supposed to navigate.
Specific safeguards. Parallel human capacity, interpretability requirements, and public technical literacy are specific incremental interventions that preserve future optionality.