The myth of the neutral tool treats technology as a finished object presented to users who freely choose among its possible applications. A hammer can build or destroy; the technology is innocent, and only its use carries moral weight. Smith's institutional framework exposes this myth as ideological cover for design choices that have already constrained use before the tool reaches users. The precision manufacturing tools at Springfield were not neutral instruments craftsmen could employ to enhance existing practice; they were designed to replace craft practice with standardized procedure, eliminating the autonomy and holistic knowledge on which craft depended. The design was the decision. By the time the tool reached the workshop floor, the consequential choice had already been made by institutions serving interests the craftsmen did not share.
The myth serves specific institutional interests by directing attention away from design processes and toward moments of use. If tools are neutral, then negative consequences of their deployment are users' responsibility—failures of judgment, discipline, or adaptation rather than structural features of the tool's design. This framing absolves designers and institutions of accountability for embedded values while placing the entire burden of responsible use on individuals least equipped to understand the institutional forces shaping the tools they employ. The pattern repeats across emotional labor platforms, engagement-optimized social media, and now AI systems whose helpfulness, fluency, and comprehensiveness reflect institutional priorities.
Large language models embody non-neutrality with particular clarity. The optimization criteria guiding their development—fluency, helpfulness, breadth of competence, user engagement—reflect the priorities of commercial institutions seeking profitable applications. These criteria sound unobjectionable in isolation, but they produce systems with specific tendencies: providing answers rather than questions, smoothing difficulty rather than preserving productive friction, confirming user framings rather than challenging them. A system optimized for different values—intellectual rigor, productive uncertainty, explicit refusal of tasks beyond competence—would behave differently. Its absence from the market is not a technological limitation but an institutional choice, one that reflects commercial incentives rewarding engagement over cognitive development.
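To make the contrast concrete, the toy sketch below ranks the same three candidate responses under two hypothetical scoring functions. Everything in it is invented for illustration (the candidate texts, the attributes, the weights); it stands in for no actual model's training objective, but it shows how the choice of optimization target, rather than the candidate pool, determines which behavior the system exhibits.

```python
# Toy illustration only: the same candidates ranked under two invented
# optimization targets. No function or weight here corresponds to any
# production system's objective.
from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    directness: float        # gives an immediate answer (0-1)
    friction: float          # preserves difficulty, pushes back (0-1)
    confirms_framing: float  # accepts the user's premise as given (0-1)


CANDIDATES = [
    Candidate("Here is the answer you asked for ...", 0.9, 0.1, 0.9),
    Candidate("Before answering, two assumptions in your question need examining ...", 0.3, 0.8, 0.2),
    Candidate("This falls outside what I can assess reliably; here is what I would need ...", 0.1, 0.9, 0.1),
]


def engagement_score(c: Candidate) -> float:
    """Hypothetical 'helpfulness/engagement' target: reward immediate,
    framing-confirming answers; penalize friction."""
    return 0.5 * c.directness + 0.4 * c.confirms_framing - 0.3 * c.friction


def rigor_score(c: Candidate) -> float:
    """Hypothetical 'intellectual rigor' target: reward productive friction
    and challenges to the user's framing."""
    return 0.6 * c.friction + 0.3 * (1 - c.confirms_framing) + 0.1 * c.directness


for name, score in [("engagement-optimized", engagement_score),
                    ("rigor-optimized", rigor_score)]:
    best = max(CANDIDATES, key=score)
    print(f"{name}: {best.text}")
```

Under the first target the direct, framing-confirming answer wins; under the second, the same candidate pool yields a response that names the limits of what the system can assess. The values sit in the scoring function, not in the candidates.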
The legal brief-drafting example from the Smith volume illustrates the consequences. An AI system trained to draft briefs is not a neutral writing instrument but a system optimized for comprehensiveness, citation accuracy, and structural coherence—qualities valuable in legal writing but achieved at the expense of strategic ambiguity, rhetorical choices signaling respect for particular judges, and judgment about what to exclude. A lawyer who relies on AI-drafted briefs without recognizing what the tool's optimization selected for and against allows embedded values to shape her practice in ways she might not choose if the choice were made explicit. The tool's design has already determined a substantial portion of the output before the lawyer contributes her judgment.
The critique of technological neutrality emerged across multiple intellectual traditions in the 1970s and 1980s. Langdon Winner's 1980 essay 'Do Artifacts Have Politics?' argued that technologies embody political arrangements in their physical structure. David Noble's Forces of Production (1984) demonstrated that automation technologies were selected for their capacity to transfer control from workers to management, not for technical superiority. Smith synthesized these insights with his armory research, providing an empirical demonstration that design choices embed values before use begins.
Design embeds values before use. The consequential choices about what a technology will do easily and what it will resist have been made during development, reflecting the institutional priorities of its designers and funders.
Optimization criteria are never neutral. Every system is optimized for something—fluency, engagement, efficiency, control—and the choice of optimization target reflects institutional values that may not align with users' needs.
The myth serves concentrations of power. Framing technology as neutral absolves designers of responsibility while placing accountability burdens on individual users least equipped to understand the institutional forces shaping their tools.
AI's helpfulness is a designed tendency. Large language models' inclination to provide answers rather than challenge framings reflects optimization for helpfulness, a commercial value rather than a neutral technical property.
Different institutional priorities would produce different tools. Systems optimized for intellectual development rather than user satisfaction would refuse inadequate prompts, demand clarification, and preserve productive uncertainty—capabilities absent from commercial models.
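As a minimal sketch of what such a design would look like, assuming an entirely hypothetical clarification gate (the task names, required-context fields, and wording below are invented, not features of any existing system), the refusal can be expressed as a pre-response check that declines to draft until missing context is supplied:

```python
# Hypothetical clarification gate: refuse underspecified prompts and ask
# for what is missing. Heuristics, field names, and wording are invented
# for illustration only.

REQUIRED_CONTEXT = {
    "legal_brief": ["jurisdiction", "audience", "desired outcome"],
}


def respond(task: str, prompt: str, provided: set[str]) -> str:
    missing = [field for field in REQUIRED_CONTEXT.get(task, []) if field not in provided]
    if missing:
        # A system optimized for user satisfaction would answer anyway;
        # this one demands clarification before producing a draft.
        return "I can't draft this responsibly yet. Please specify: " + ", ".join(missing)
    return f"Draft based on: {prompt}"


print(respond("legal_brief", "Draft a motion to dismiss.", {"jurisdiction"}))
# Asks for audience and desired outcome instead of producing a draft.
```

The point is not the particular heuristic but where the decision sits: the refusal is a design-time commitment, made by whoever builds the system, before any user touches the tool.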