The from-to structure is Polanyi's phenomenological description of how all awareness operates. We never attend to things neutrally—we attend from some elements to others. The pianist attends from her fingers to the music. The reader attends from the words to the meaning. The diagnostician attends from a constellation of symptoms to a focal judgment. The subsidiary elements (fingers, words, symptoms) are not absent from awareness—they must be present for the focal awareness to emerge—but they function subsidiarily, supporting focal attention without becoming its object. When the structure inverts—when the pianist notices her fingers, the reader notices the typography—the understanding collapses. This architecture explains both why AI tools enable extraordinary productivity when successfully indwelt and why they pose distinctive risks: the tool's mediation must be trusted (made subsidiary) for flow to occur, but AI tools can fail in ways that trust conceals.
Polanyi illustrated the from-to structure with examples drawn from perception, skill, and scientific practice. In perception, we attend from retinal stimulations, contextual expectations, and learned schemas to the focal object we perceive—a face, a landscape, a diagram. We do not first perceive features and then assemble them into wholes; we perceive wholes by attending from features that remain subsidiary. In skilled performance, we attend from our bodily movements to the goal they accomplish—the pianist from fingers to music, the carpenter from hammer-blows to the nail being driven. The movements are controlled subsidiarily; focal attention to them disrupts control. In scientific understanding, we attend from experimental details, theoretical commitments, and background assumptions to the focal meaning of a result. The details support the meaning without themselves becoming the focus of inquiry.
The AI interface revolution of 2024-2025 created an unprecedented case of rapidly established from-to structure. Within weeks of Claude Code's release, developers who had spent decades attending from syntax and debugging to architectural goals found they could attend from conversational prompts to the same goals. The implementation layer—historically the substrate of subsidiary awareness—had been compressed into natural-language specification. The from-to structure was preserved in form: the developer still attended from subsidiary elements to focal products. But the content of the subsidiary layer had changed. Where previously she attended from hand-written code (whose production built tacit understanding), she now attended from AI-generated code (whose production built prompt-crafting skill). Whether the new subsidiary layer supports the same depth of focal judgment is the question Polanyi's framework forces.
The structure also explains the specific phenomenology of AI-collaboration failures. The Deleuze fabrication Segal describes—where Claude produced an elegant passage connecting Csikszentmihalyi to a concept falsely attributed to Deleuze—is a paradigmatic case of failed subsidiarity. The passage functioned subsidiarily: Segal attended from it to the argument it supported, and the passage's surface quality (eloquent prose, apparent scholarly depth) reinforced its subsidiary status. The error was detected only when Segal inverted the structure—shifted from attending from the passage to attending to it, making it focal and subjecting it to the critical scrutiny that subsidiary elements normally escape. The lesson: AI outputs optimized for subsidiary smoothness actively resist the focal scrutiny that would detect their failures.
Polanyi first articulated the from-to structure in Personal Knowledge (1958) and developed it most fully in The Tacit Dimension (1966). The structure built on phenomenological analyses by Edmund Husserl and Maurice Merleau-Ponty while adding Polanyi's distinctive emphasis on the functional asymmetry between subsidiary and focal awareness. The from-to movement is not merely descriptive of consciousness—it is constitutive of what consciousness does. Understanding emerges only when elements are held subsidiarily and integrated into focal meaning. Make the subsidiaries focal, and the meaning dissolves.
All knowing has this structure. From perception to skill to scientific understanding, consciousness always attends from subsidiary clues to focal meanings—there is no neutral, structureless awareness.
Subsidiaries must stay subsidiary. When attention shifts to the elements that should remain subsidiary—the fingers, the grammar, the tool's mediation—the focal meaning collapses and performance degrades.
Trust enables subsidiarity. The from-to structure requires trust in the subsidiary elements—they must be reliable enough to support focal attention without demanding conscious scrutiny.
AI risks exploiting trust. Tools capable of confident error exploit the from-to structure by appearing reliable enough to remain subsidiary while actually requiring the focal evaluation that their smoothness suppresses.
Oscillation is the discipline. Safe AI use requires deliberate inversion—periodically making the tool focal to evaluate its outputs before returning it to subsidiary status for productive flow.
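The oscillation discipline can be caricatured in code. The sketch below is a toy model, not a real tool: the `OscillationGate` class, its `cadence` parameter, and the fixed review rhythm are all hypothetical illustrations of the principle that some fraction of AI outputs must be deliberately made focal rather than trusted subsidiarily.

```python
from dataclasses import dataclass, field

@dataclass
class OscillationGate:
    """Toy model of the oscillation discipline: every `cadence`-th
    AI output is forced into focal review instead of being accepted
    into subsidiary flow. All names here are illustrative."""
    cadence: int = 5                                  # hypothetical review rhythm
    accepted: list = field(default_factory=list)      # outputs trusted subsidiarily
    flagged: list = field(default_factory=list)       # outputs made focal for scrutiny
    _count: int = 0

    def receive(self, output: str) -> str:
        self._count += 1
        if self._count % self.cadence == 0:
            # Invert the from-to structure: make this output focal.
            self.flagged.append(output)
            return "focal-review"
        # Trust the output subsidiarily and stay in flow.
        self.accepted.append(output)
        return "subsidiary"

gate = OscillationGate(cadence=3)
statuses = [gate.receive(f"output-{i}") for i in range(1, 7)]
print(statuses)            # every third output is routed to focal review
print(len(gate.flagged))   # 2
```

A real discipline would trigger inversion on risk signals (citations, numbers, novel claims) rather than on a fixed count; the fixed cadence here only makes the alternation between subsidiary trust and focal scrutiny visible.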