AI IS A TOOL is the conceptual metaphor that dominates corporate strategy, mainstream journalism, and most policy documents about artificial intelligence. The source domain is the handheld implement: a hammer, a telescope, a spreadsheet. A tool is an object; it possesses no agency; it does not act but is acted upon. The human is the subject; the tool is the instrument; the relationship is one of mastery. A tool extends human capability — the hammer extends the arm's striking force — but the extension does not alter the fundamental nature of the agent. These entailments transfer to the target domain (AI) whether the speaker intends them or not. The metaphor determines which questions feel natural (how do we use it well? how do we ensure it remains under control? how do we prevent misuse?) and which are rendered invisible (what happens when the tool participates in the user's cognitive process? how does the instrument reshape the person holding it?).
The TOOL frame is powerful because the source domain is universally familiar. Every human being has used tools. The conceptual structure is immediately available, immediately understood, immediately applicable. When someone calls AI a tool, the listener does not need explanation — the frame snaps into place before the conversation begins. This is the ordinary efficiency of conceptual metaphor. It is also its concealed politics: the frame determines the landscape of conceivable policies, and the determination is invisible to participants who believe they are simply using a word.
The entailments are precise. A tool does not participate in cognition. A hammer does not suggest designs. A telescope does not propose hypotheses. The TOOL frame has no structure for accommodating the specific phenomenon that distinguishes large language models from previous tools: they make connections the user did not see, propose structures the user had not considered, and produce outputs that belong to the collaboration rather than to either participant. Edo Segal describes this directly in The Orange Pill, particularly in the moments when Claude linked laparoscopic surgery to ascending friction — a connection Segal had not seen and Claude had not set out to find. Neither partner owns that insight. The collaboration does. The TOOL frame renders this invisible because a tool, by definition, has no cognitive contribution to make. Whatever does not fit the frame does not get seen.
The policy consequences are immediate. Regulatory frameworks built within the TOOL frame treat AI as a hazardous instrument whose deployment must be managed: the EU AI Act classifies systems by risk level, imposes disclosure requirements, restricts certain applications. These regulations address real risks. They also background the capability expansion, treating AI primarily as something to be managed rather than something to be cultivated. Liability frameworks within the TOOL frame locate responsibility in the user (who operates the tool) or the manufacturer (who produces the tool), leaving no conceptual space for the joint responsibility that a genuine collaboration would require. Educational frameworks within the TOOL frame teach students to use AI tools competently, reproducing the frame in the next generation.
The TOOL frame is not wrong. AI systems are, in many respects, used as tools: they are deployed by humans to accomplish human purposes. But the frame is partial. It captures the dimensions of AI that resemble previous tools and obscures the dimensions that do not. The alternative frames — AI IS A MIND, AI IS A COLLABORATOR — capture other dimensions while generating blind spots of their own. The work of the present moment is not to select the single correct frame but to recognize that no single frame is adequate and to hold multiple frames in productive tension.
The TOOL frame for AI has deep roots in the computing industry's self-understanding. From the earliest mainframes through personal computers to contemporary AI systems, the dominant metaphor has positioned the machine as an instrument extending human capability. The frame was reinforced by the command-line interface, the graphical user interface, and the mobile touchscreen — each generation of human-computer interaction designed to make the user feel in control of a passive instrument.
Source domain: handheld implement. Hammer, telescope, spreadsheet — objects acted upon by agents, possessing no agency of their own.
Entailment: passivity. Tools do not participate in cognition; they execute operations specified by the user.
Entailment: control. The user is the master; the tool is the instrument; the relationship is hierarchical and asymmetric.
Policy implications: regulation of use, liability frameworks locating responsibility in operators, and educational programs teaching competent tool use.
Concealed blind spot: the frame cannot accommodate cognitive participation by the instrument — the specific phenomenon that distinguishes AI from every previous tool.