Instrumental trust names the form of trust required when a human must act on information, recommendations, or analyses produced by AI systems whose reasoning she cannot directly observe. The situation has no precise precedent. The professional asked to rely on AI-generated output is practicing a form of trust distinct from the relational trust Brown's BRAVING research has primarily addressed. She cannot assess the system's boundaries, reliability, or integrity the way she assesses a colleague's, because the system lacks intentions, motivations, and the capacity for relational reciprocity. The functional demand is the same — act on information you cannot independently verify — but the relational ground is absent. The development of practices, norms, and institutional supports for instrumental trust is among the most urgent and least recognized tasks of the AI transition.
There is a parallel reading that begins not with the novelty of instrumental trust but with its ubiquity. The professional extending trust to an AI system whose reasoning she cannot observe is practicing the same form of trust she already extends to the compiler that transforms her code, the algorithm that routes her network traffic, the cryptographic protocol that secures her transactions, and the statistical model embedded in every medical device she relies on. The difference is not in kind but in visibility and cultural salience. We have been extending instrumental trust to opaque technical systems for decades; the AI case simply makes the practice harder to ignore.
The real shift is not that instrumental trust is novel but that it now operates at the semantic layer—producing sentences, diagnoses, arguments—rather than at the substrate layer where we have been comfortable not looking. The discomfort reflects not the absence of reciprocity (we never expected reciprocity from TCP/IP) but the violation of a guild boundary: the expectation that interpretation, judgment, and recommendation would remain human work. The focus on trust as the problem may therefore be a category error. The actual question is not whether we can trust systems whose reasoning we cannot observe—we already do, constantly—but whether we are prepared to relinquish interpretive authority at the semantic layer the way we have already relinquished it at the computational substrate. The concern about hollowing may be misplaced. The risk is not that we will trust AI too easily but that we will preserve an untenable distinction between layers that were never as separate as professional self-conception required.
The absence of reciprocity is the defining feature and the core difficulty. In human relational trust, the other party can be held accountable, apologize, demonstrate reliability over time, and participate in the repair of broken trust. The AI system can do none of these things. When it produces output that proves wrong, there is no meaningful sense in which it can be held accountable for the error. When it produces output that proves useful, there is no meaningful sense in which it earned the user's subsequent confidence. The trust is therefore always one-directional — the user extends it or withdraws it without any reciprocal movement from the other side.
Brown's framework suggests that instrumental trust nonetheless requires behavioral supports analogous to BRAVING's relational components: boundaries about acceptable AI use and its limits; reliability assessments based on empirical track record rather than felt confidence; accountability practices that assign human responsibility for AI-mediated outcomes; vault-equivalent practices for data handling; integrity norms about attribution and honest representation; non-judgment environments in which users can report AI failures without stigma; and generous interpretation of colleagues' AI use patterns. The translation is not mechanical — each component requires rethinking — but the underlying framework holds.
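What a reliability assessment grounded in empirical track record rather than felt confidence might look like operationally can be sketched in a few lines of code. The sketch below is purely illustrative: the `ReliabilityLedger` and its record fields are hypothetical names invented here, not part of Brown's framework or any existing tool. The point is only that track record and accountability assignment can be made explicit artifacts rather than impressions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIOutputRecord:
    """One AI-mediated output, logged at the moment a human acts on it.
    (Hypothetical structure for illustration only.)"""
    task: str                                # e.g. "contract clause review"
    accepted_as_given: bool                  # acted on without modification?
    reviewer: str                            # the human accountable for the outcome
    verified_correct: Optional[bool] = None  # filled in once independently checked
    reviewed_on: Optional[date] = None

@dataclass
class ReliabilityLedger:
    """Empirical track record for one AI system in one task domain."""
    records: list[AIOutputRecord] = field(default_factory=list)

    def log(self, record: AIOutputRecord) -> None:
        self.records.append(record)

    def verified_accuracy(self) -> Optional[float]:
        """Share of independently checked outputs that proved correct."""
        checked = [r for r in self.records if r.verified_correct is not None]
        if not checked:
            return None
        return sum(r.verified_correct for r in checked) / len(checked)

    def unverified_acceptance_rate(self) -> float:
        """Share of outputs acted on as given but never independently checked,
        a rough proxy for trust extended on felt confidence alone."""
        if not self.records:
            return 0.0
        unchecked = [r for r in self.records
                     if r.accepted_as_given and r.verified_correct is None]
        return len(unchecked) / len(self.records)
```

In this sketch the accountability component appears as the named reviewer on each record, and the two rates are the kind of numbers a team could examine in a non-judgment retrospective. The design choice worth noting is that the ledger records human decisions about AI output, since the system itself cannot participate in the accounting.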
The larger concern is that instrumental trust may be easier to extend than relational trust, precisely because it requires no reciprocity. The AI system does not disappoint in the specific way humans disappoint. It does not demand emotional investment in return. It does not judge the user's vulnerability. This asymmetric ease is part of what Brown described at the Aspen Ideas Festival as the seductive alternative for tapping out of human vulnerability. The extension of instrumental trust to AI systems, combined with the withdrawal of relational trust from human colleagues, produces the hollowing Brown has warned about — not because the tools force the withdrawal but because they make the withdrawal less costly in the short term.
The concept is an extension of Brown's BRAVING framework to the specific case of human-AI interaction. It has been developed in organizational practice and emerging academic literature on human-AI collaboration rather than in Brown's direct writing, but the framework it extends and the questions it asks are consistent with her research trajectory.
Absence of reciprocity. The AI system cannot participate in the mutual accountability relational trust requires.
One-directional extension. Trust flows from user to system without any movement from the other side.
BRAVING translation. Each relational-trust component requires rethinking rather than mechanical application to the AI case.
Seductive asymmetry. Instrumental trust is easier to extend than relational trust because it requires no vulnerability.
Hollowing risk. The extension of instrumental trust combined with withdrawal of relational trust produces the hollowing Brown has warned about.
The substrate argument is 80% right about the mechanics and 20% right about the stakes. We do extend trust constantly to opaque technical systems, and the AI case is continuous with that practice in its formal structure. But the weighting shifts sharply when you ask not "Is this trust structurally novel?" but "What work does this trust now encompass?" At the substrate layer, the trust extended to compilers and protocols operates on inputs and outputs we can independently verify—the code runs or it doesn't, the packet arrives or it doesn't. At the semantic layer, verification requires the same expertise the AI is meant to augment or replace. The professional cannot easily check the analysis without doing the analysis. That difference makes the lack of reciprocity newly consequential, even if it is not structurally new.
The hollowing concern is best reframed not as trust extended too easily but as accountability diffused too thoroughly. Instrumental trust without reciprocity works tolerably well when failure modes are legible and contained—a compiler error points to a line of code. It works poorly when failures are subtle, compounding, and distributed across decision chains the user cannot reconstruct. The BRAVING translation Edo identifies is 100% right as an institutional necessity: we need explicit practices for boundaries, reliability tracking, and accountability assignment precisely because the system cannot participate in those practices relationally. The risk is not that we will mistake AI for human but that we will build systems of consequential action around agents that cannot be held accountable in any meaningful sense, then discover too late that the accountability gap has hollowed out the institutional capacity to notice and correct compounding errors.
The synthesis frame: instrumental trust is trust in process legibility, not agent reciprocity. The question is not whether to extend it but how to build institutions that make failure modes visible and correctable when the agent cannot participate in correction.