Commitment visibility is the first principle of Winograd's design philosophy for AI collaboration: when a human user accepts an AI-generated contribution (a passage, a code block, an architectural decision), the acceptance should be an explicit act—a moment of conscious endorsement rather than passive absorption into the work. The design creates friction at the point of incorporation, not at the point of generation. Let the machine generate freely, let output flow, but build into practice a moment where the human evaluates against their own understanding and makes a deliberate choice to incorporate, modify, or reject. This is not interface enforcement but cultural practice—the organizational and personal structures that preserve the human's role as directing intelligence even when the tool achieves transparency.
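To make the mechanism concrete, here is a minimal sketch of such an incorporation gate, assuming nothing beyond the paragraph above; the names Contribution, Endorsement, Disposition, and incorporate are illustrative inventions, not any existing system's API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Disposition(Enum):
    """The three deliberate choices available at the point of incorporation."""
    INCORPORATE = auto()
    MODIFY = auto()
    REJECT = auto()

@dataclass(frozen=True)
class Contribution:
    """An AI-generated passage, code block, or decision awaiting evaluation."""
    text: str

@dataclass(frozen=True)
class Endorsement:
    """A record of the human's explicit act of acceptance or refusal."""
    contribution: Contribution
    disposition: Disposition
    rationale: str                   # the human's reasoning, in their own words
    revision: str | None = None      # required when the disposition is MODIFY

def incorporate(draft: list[str], contribution: Contribution,
                endorsement: Endorsement | None = None) -> list[str]:
    """Generation is unconstrained; incorporation is gated on an explicit decision."""
    if endorsement is None or endorsement.contribution != contribution:
        raise ValueError("nothing enters the work without a conscious decision")
    if endorsement.disposition is Disposition.REJECT:
        return draft                 # the proposal is set aside; the work is unchanged
    if endorsement.disposition is Disposition.MODIFY:
        if endorsement.revision is None:
            raise ValueError("a MODIFY decision must carry the human's revision")
        return draft + [endorsement.revision]
    return draft + [contribution.text]   # INCORPORATE: an explicit, recorded endorsement
```

The telling design choice is the default: the endorsement is None unless supplied, so the path of least resistance refuses incorporation rather than silently absorbing the machine's text.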
The principle addresses the specific risk of readiness-to-hand in conversational AI: when the tool disappears into the user's natural language, when the interface achieves transparency so complete that the machine's contributions feel like the user's own thinking, the tool's influence becomes invisible. The user does not experience external constraint; the experience is one of collaboration, partnership, co-creation. But the collaboration is asymmetric. The machine's contributions are shaped by statistical patterns drawn from millions of texts, encoding not just factual knowledge but rhetorical conventions, argument structures, and implicit judgments about which connections between ideas count as insightful. When the user accepts such contributions without explicit evaluation, their thinking has been shaped by the aggregated judgment of the training data without that shaping ever registering as external influence.
Commitment visibility does not require technological intervention; it requires building pauses into practice. The Berkeley researchers whose AI workplace study Edo Segal discusses in The Orange Pill proposed an 'AI Practice' that includes structured pauses, sequenced rather than parallel work, and protected time for human-only reflection. These are organizational dams that redirect human-AI collaboration toward conditions preserving the human's directorial role. At individual scale, commitment visibility is the discipline of reading AI output with the specific question 'Is this true, or does it merely sound true?', treating the generated passage not as a draft to be lightly edited but as a proposal to be evaluated before endorsement. The practice is effortful, and the effort is the point: it is the mechanism that maintains the boundary between the human's judgment and the machine's processing.
The principle emerged from Winograd and Flores's work on 'The Coordinator,' a workflow management system designed to make organizational commitments visible. The Coordinator forced participants to specify whether an utterance was a request, a promise, a report, or an assessment, making the speech-act structure explicit so that participants could engage consciously with the obligations they were creating. Applied to AI collaboration, the same principle holds: the act of accepting machine output should be made explicit, converting passive reception into conscious commitment. The human thereby maintains responsibility for the work's intellectual integrity.
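A speculative sketch of how that typing might carry over to AI collaboration; the four Act values follow the paragraph above, while PROPOSAL, Utterance, and endorse are hypothetical extensions for illustration, not The Coordinator's actual design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto

class Act(Enum):
    """Utterance types made explicit, after The Coordinator's speech-act structure."""
    REQUEST = auto()
    PROMISE = auto()
    REPORT = auto()
    ASSESSMENT = auto()
    PROPOSAL = auto()    # hypothetical: the status of machine output until endorsed

@dataclass(frozen=True)
class Utterance:
    speaker: str
    act: Act             # the speaker must name the act; the system never infers it
    content: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def endorse(proposal: Utterance, human: str) -> list[Utterance]:
    """Convert passive reception into conscious commitment: the machine's PROPOSAL
    is answered by the human's own ASSESSMENT, and both remain in the record."""
    if proposal.act is not Act.PROPOSAL:
        raise ValueError("only machine proposals are subject to endorsement")
    return [proposal,
            Utterance(speaker=human, act=Act.ASSESSMENT, content=proposal.content)]
```

Keeping both utterances in the record preserves the audit trail: what the machine proposed and what the human consciously took responsibility for remain separate, visible acts.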
Friction at acceptance, not generation. Let AI generate freely and abundantly; build the pause into the moment of incorporation, where the human explicitly chooses to endorse, modify, or reject the contribution.
Cultural practice, not interface feature. Cannot be enforced through software design alone; requires organizational norms that treat verification as rigor rather than distrust and reward error discovery as a contribution.
Preserves directing intelligence. Even when the tool achieves transparency and disappears into the work, explicit acceptance maintains the human's role as the party responsible for intellectual and ethical quality.
Breakdowns as conscience. The moments when the human catches errors—the Deleuze incident, the Winograd Schema failures—are the mechanisms through which commitment visibility reveals its necessity.