The fiduciary framework is Polanyi's term for the constellation of commitments—trust in one's senses, instruments, teachers, methods, and community—without which no knowledge is possible. "Fiduciary" carries its legal sense: accepting responsibility, entering relationships of trust, staking something on reliability that cannot be guaranteed in advance. The scientist trusts her instruments before the experiment can yield results. The student trusts her teachers before learning can occur. The citizen trusts some framework of shared reality before democratic deliberation becomes possible. This trust is not blind faith but responsible commitment—made with awareness of risk, revisable in light of evidence, yet necessarily prior to the evidence itself. AI disrupts this framework at multiple levels simultaneously: practitioners trust tools whose processes they cannot inspect, clients trust professionals whose work was delegated to machines, communities evaluate outputs assuming personal engagement that did not occur. The chain of trust connecting end-users to knowledge quality has been extended by links lacking the fiduciary character the framework requires.
Polanyi developed the fiduciary framework to answer the problem of epistemic foundations: if all knowledge rests on assumptions that cannot be fully justified, how does knowledge differ from arbitrary belief? His answer was that the assumptions are not arbitrary because they are accepted responsibly—with commitment to revising them if evidence demands, with openness to the community's criticism, with awareness that the commitment is a wager rather than a certainty. This responsible commitment transforms mere assumption into genuine epistemic ground. The framework is fiduciary because it involves accepting obligations one cannot fully discharge: the scientist cannot verify every assumption her work rests on, but she commits to them anyway, accepting responsibility for their adequacy.
The AI transition corrupts the fiduciary framework by inserting non-fiduciary links into chains of trust. The lawyer who submits an AI-generated brief to a court performs a fiduciary act—she represents that the brief embodies her professional judgment, that she has evaluated the law with care, that the arguments are sound. But if the brief was produced by a tool she cannot fully inspect and evaluated only as deeply as her time permitted, her representation rests on thinner ground than the court's trust assumes. The court trusts the lawyer. The lawyer trusts the tool. But the lawyer's trust in the tool is not the same kind of trust as the court's trust in the lawyer—it is trust in computational outputs rather than personal judgment, in statistical patterns rather than committed evaluation. The fiduciary chain has been broken at a link that appears intact because the products remain competent.
The educational fiduciary framework may face the deepest corruption. Teaching rests on reciprocal trust: students trust that assignments cultivate genuine learning, teachers trust that student work represents personal engagement. When students use AI to generate essays, both directions of trust become hollow. The student's trust in the assignment is betrayed because the assignment no longer measures what it claims—her developing competence—but the tool's capacity to produce competent surfaces. The teacher's trust in student work becomes unreliable because the work no longer represents the developmental process (thinking, struggling, articulating understanding) the assignment was designed to produce. The educational transaction continues—essays submitted, grades assigned—but the fiduciary substance (mutual commitment to genuine learning) has drained away, leaving a shell that looks functional while serving neither party's actual interests.
Polanyi introduced the fiduciary framework in Personal Knowledge (1958), particularly in Part Three ("The Justification of Personal Knowledge"), where the chapter "The Critique of Doubt" sets out what he called the fiduciary programme. The term was chosen to carry its full legal weight—fiduciary relationships in law involve duties of loyalty, care, and good faith that the fiduciary owes to those who trust her. Polanyi argued that the relationship between knower and knowledge has precisely this structure: the knower accepts responsibility for claims she makes, owes loyalty to truth rather than convenience, exercises care in evaluation. The framework makes explicit what objectivist epistemology concealed: that knowledge is a social relationship sustained by trust and structured by obligation.
Trust precedes verification. All inquiry begins with commitments that cannot be fully justified in advance—trust in methods, instruments, teachers, frameworks—that must be accepted before evidence can be gathered.
Responsible, not blind. Fiduciary commitment is aware of its risk, open to revision, and accountable to community evaluation—distinct from dogmatic belief that refuses correction.
AI inserts non-fiduciary links. Machine outputs lack the personal commitment, professional responsibility, and stake in truth that make human knowledge trustworthy—practitioners who delegate to AI without adequate evaluation perform fiduciary gestures while the substance erodes.
Asymmetry creates fragility. The client trusts the professional; the professional trusts the tool. But these are categorically different forms of trust, and the mismatch weakens the entire chain.
Repair requires transparency. Restoring fiduciary integrity demands honest acknowledgment of when and how AI is used, revision of evaluative standards to account for tool-mediation, and preservation of occasions for direct personal engagement.