The distinction matters for AI governance because it clarifies what kind of analysis policy-makers should seek. Comprehensive theoretical analyses of AI's social consequences are academically valuable but politically unusable. By the time the analysis is completed, the technology has moved on. By the time the analysis has been translated into decision-relevant terms, the decision window has closed. By the time the translation has been debated and refined, the next technological development has changed the terms of the original question.
Usable knowledge for AI governance is different in character. It is specific rather than general: this organization's AI use policy, not a theory of AI and work. It is timely rather than comprehensive: partial information available when the decision must be made, not complete information available too late. It is practical rather than theoretical: what happens if we do this, not what the general principles of AI-human interaction would predict should happen. And it is revisable: designed with the expectation that subsequent decisions will modify it as new information arrives.
The Berkeley study's AI Practice framework is usable knowledge at its best. Not a comprehensive theory of human-AI cognitive interaction but specific, testable interventions grounded in eight months of observation in a functioning organization. The interventions are modest. They are revisable. They are useful to organizations that must make decisions now about AI deployment. And they offer better guidance than any comprehensive theoretical framework could, because their modesty is calibrated to the actual constraints of practical decision-making.
The concept has implications for research funding, academic training, and the relationship between social science and policy. Research designed to produce usable knowledge looks different from research designed to produce comprehensive theory. The research questions are more specific. The research timelines are shorter. The research products are more directly relevant to particular decisions. The epistemic standards are different — not lower, but calibrated to different purposes.
Lindblom and Cohen developed the concept in Usable Knowledge: Social Science and Social Problem Solving (1979). The book was written during a period of increasing concern about the practical relevance of academic social science to pressing policy problems — a concern that has persisted and intensified in the AI era.
Practical versus theoretical. Usable knowledge is calibrated to decision constraints, not to theoretical comprehensiveness.
Specific versus general. The most useful knowledge for practical decisions is specific to the context of those decisions.
Timely versus complete. Partial information available on time is more useful than complete information available too late.
Revisable versus definitive. Usable knowledge is designed to be superseded by better knowledge as new information arrives.