Commitment, in Polanyi's epistemology, is not mere belief or psychological conviction but the structural act that converts information into knowledge. When a scientist publishes a finding, she commits herself to its truth—staking her professional reputation, accepting responsibility for having evaluated the evidence with due care, asserting that the claim deserves the community's trust. This commitment is what gives the finding its epistemic weight. A claim that no one commits to, that arrives without a personal knower standing behind it, lacks authority regardless of its surface quality. Commitment is risky—it involves the possibility of error, the acceptance of responsibility for being wrong. But the risk is constitutive of knowledge: only someone who can be wrong can know that she is right.

AI outputs are produced without commitment. The machine has no stake in their truth, no reputation to risk, no responsibility to bear. This absence is not a contingent limitation but a structural feature of systems that process information without possessing the personal dimension that makes information meaningful.
Polanyi developed commitment as the answer to the skeptical challenge: if all knowledge rests on assumptions that cannot be fully justified, how can knowledge be anything more than arbitrary belief? His answer: commitment is not arbitrary because it is responsible—the knower commits with awareness of the risk, with openness to revision in light of new evidence, with acceptance of the community's evaluation. The commitment is fiduciary in the legal sense: it involves entering a relationship of trust, accepting obligations, staking something of value. This fiduciary character is what separates committed knowing from mere opinion or comfortable prejudice.
The concept exposes the hollowness at the heart of AI-mediated professional practice. The lawyer who signs a brief produced by AI performs the external gesture of commitment—her name on the document, her representation to the court—without the internal reality. She has not performed the personal evaluation that commitment presupposes: the sustained engagement with cases, the wrestling with legal reasoning, the exercise of judgment about which arguments are sound. She commits to a product she did not produce and cannot fully evaluate. The commitment is a performance, not a reality, and the gap between performance and reality is where the fiduciary framework of professional practice fractures.
Educational assessment faces the commitment problem in its starkest form. The assignment that asks students to write an essay is designed to cultivate committed engagement with ideas—to force students to formulate positions, evaluate evidence, exercise judgment about which arguments are strong. When students submit AI-generated essays, they perform the ritual of submission without the substance of commitment. They have not personally evaluated whether the argument is sound, whether the evidence supports the conclusion, whether the position is one they would defend under scrutiny. The essay exists. The commitment does not. And without commitment, the essay represents not the student's developing knowledge but the tool's pattern-matching—information that has passed through the student without transforming into personal understanding.
Commitment is the central organizing concept of Personal Knowledge (1958), where Polanyi argued that all knowledge involves "a passionate contribution of the person knowing what is being known"—an inversion of the positivist ideal that sought to eliminate the personal element entirely. Polanyi's commitment is not passion in the ordinary sense but the acceptance of intellectual responsibility: the recognition that knowing is an act that the knower performs and for which she bears responsibility, rather than a state that simply occurs.
Commitment is constitutive. Information becomes knowledge when someone commits to it—accepts responsibility, stakes judgment, exercises personal evaluation rather than mechanical processing.
Risk is essential. Genuine commitment involves the possibility of error; only someone who can be wrong can know that she is right—automated systems cannot commit because they cannot meaningfully err.
Authority derives from it. A claim's epistemic weight comes not from its logical structure but from the personal commitment it embodies—the community's trust that someone exercised due diligence.
AI cannot commit. Machine-generated outputs arrive without stake, without responsibility, without the personal dimension that transforms information, however competent, into reliable knowledge.
Hollow commitment corrupts. Performing commitment gestures (signing, submitting, presenting) without the substance (personal evaluation, engaged judgment) produces a fiduciary framework resting on false premises.