'Human in the loop' is the governance vocabulary that purports to preserve human oversight of automated systems by requiring a human reviewer at critical decision points. The phrase has become ubiquitous in AI policy, corporate ethics frameworks, and military doctrine. Suchman's framework insists on a question the phrase conceals: what does the human in the loop actually know, and can she exercise judgment in the time available? If the loop runs at machine speed and the human has not accumulated the situated knowledge that evaluation requires, her presence is procedural rather than substantive — a compliance checkpoint rather than a meaningful intervention. The phrase becomes an alibi for treating plans as actions while maintaining the appearance of human oversight.
The phrase emerged from cybernetics and control theory, where 'in the loop' originally referred to closing a feedback circuit. It acquired governance valence through military doctrine on autonomous weapons, where 'human in the loop,' 'human on the loop,' and 'human out of the loop' mark successively thinner forms of oversight. In contemporary AI policy the phrase is often used interchangeably with 'human oversight' or 'meaningful human control,' though the substance of the oversight is rarely specified.
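For readers without a control-theory background, a minimal sketch of what 'closing the loop' originally meant: a controller measures the system's output, compares it to a target, and feeds a correction back into the input. The sketch below is illustrative only; its names and constants are invented, not drawn from any cited source.

```python
# Minimal closed-loop (feedback) control sketch: a proportional
# controller drives a noisy process toward a setpoint. "Closing the
# loop" means the output is measured and fed back into the input.
# All names and constants are illustrative.

import random

setpoint = 20.0   # desired value (e.g., a temperature)
state = 12.0      # current process output
gain = 0.4        # proportional gain

for step in range(25):
    error = setpoint - state                  # compare output to target
    correction = gain * error                 # feedback term
    state += correction + random.gauss(0, 0.1)  # apply it, plus disturbance
    # An open-loop system would apply a fixed schedule here instead,
    # never measuring 'state' at all.

print(f"final state: {state:.2f} (setpoint {setpoint})")
```

In this original engineering sense, what matters is only that the circuit is closed; the governance usage borrows the circuit metaphor while leaving open everything Suchman's critique targets, namely what the human node in the circuit can actually know and do.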
Suchman's critique is structural. Presence is not oversight. A human present at a decision point may or may not possess the situated knowledge, the time, and the institutional support necessary to exercise judgment over the automated system's output. Her recent analyses of algorithmic targeting document the characteristic failure mode: operators whose role is nominally to review target nominations in fact approve them at rates and speeds that preclude substantive evaluation. The loop is closed procedurally; the oversight is theater.
The problem is intensified by the interpretive asymmetry of human-machine interaction. The human in the loop is doing all the interpretive work; the machine is producing outputs sophisticated enough to sustain the illusion of collaboration. When the human accepts the machine's output — for reasons of time pressure, institutional deference, or simple cognitive load — she is not endorsing the output so much as defaulting to it. The loop becomes a mechanism for distributing responsibility without ensuring judgment.
Genuine oversight would require institutional conditions the phrase does not specify: humans with situated knowledge adequate to the domain; time adequate to exercise judgment; institutional support for substantive disagreement with automated outputs; metrics that reward override when appropriate rather than penalize it as friction. These conditions are expensive, slow, and inefficient by output metrics. They are also what the phrase 'human in the loop' was supposed to guarantee. Suchman's framework demands that the guarantee be made substantive rather than rhetorical, and that institutions deploying AI be held accountable for whether the humans in their loops can actually do the work the phrase implies.
The concept has roots in mid-twentieth-century cybernetics and control engineering and was elaborated in military doctrine during the 1990s and 2000s. It entered mainstream AI governance discourse in the 2010s as a response to concerns about autonomous systems and has become a central organizing principle in frameworks like the EU AI Act.
Suchman's critique has been sustained across her work on automation, military AI, and algorithmic governance. Her 2019 open letter on autonomous weapons and her more recent essays on algorithmic targeting have sharpened the critique into specific demands for substantive rather than procedural human oversight.
Presence is not oversight. A human stationed at a decision point does not automatically constitute meaningful human control; the substance of that control depends on knowledge, time, and institutional support.
Speed defeats oversight. When outputs accumulate faster than humans can evaluate them, oversight degrades into approval (the arithmetic is sketched after these points). The loop is closed; the judgment is not exercised.
Situated knowledge is the prerequisite. The human in the loop can evaluate outputs only to the extent that she possesses the domain knowledge the AI does not. When AI has automated the practices through which that knowledge develops, oversight becomes structurally impossible.
The phrase as alibi. 'Human in the loop' often functions to distribute responsibility — the AI proposed, the human approved — while obscuring whether either actually exercised substantive judgment.
Making oversight substantive. Genuine oversight requires specific institutional conditions that are expensive, slow, and inefficient by the output metrics institutions typically reward. Whether institutions will invest in these conditions is the decisive governance question.
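To make the speed point concrete, a back-of-envelope sketch with invented numbers: if automated outputs arrive faster than a reviewer can evaluate them, the backlog grows without bound, and the time actually spent per item collapses toward rubber-stamping. The rates below are hypothetical, not measurements of any documented system.

```python
# Back-of-envelope sketch of "speed defeats oversight": compare the
# arrival rate of automated outputs with one reviewer's evaluation
# capacity. All rates are hypothetical.

arrivals_per_hour = 60       # machine nominations per hour (assumed)
minutes_per_review = 5       # time substantive evaluation needs (assumed)
shift_minutes = 60

capacity_per_hour = shift_minutes / minutes_per_review   # 12 reviews/hour
utilization = arrivals_per_hour / capacity_per_hour      # 5.0: overloaded

if utilization > 1:
    # The queue grows without bound; the only way to "keep up" is to
    # cut review time, i.e., to stop evaluating and start approving.
    forced_minutes_per_item = shift_minutes / arrivals_per_hour
    print(f"utilization {utilization:.1f}: backlog grows; keeping pace "
          f"leaves {forced_minutes_per_item:.1f} min/item instead of the "
          f"{minutes_per_review} min that judgment requires")
else:
    print(f"utilization {utilization:.1f}: sustained review is feasible")
```

The point is structural rather than motivational: no amount of reviewer diligence changes the arithmetic. Only staffing, time budgets, or limits on the system's throughput do, which is why the conditions listed above are institutional rather than individual.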