Scaffolding and prosthesis are not distinguished by the quality of support. Both can be exquisitely designed, precisely calibrated, genuinely helpful. The distinction is directional: scaffolding moves toward independence; prosthesis maintains dependence. Scaffolding succeeds by becoming unnecessary; prosthesis succeeds by remaining indispensable. The scaffold withdraws as the learner develops the capability it was providing; the prosthesis permanently replaces a function the user cannot perform independently. The distinction sounds simple, and it is devastating in application. The Bruner volume argues that AI support, as currently designed and economically incentivized, follows the trajectory of prosthesis rather than scaffolding: it produces capability that is real, but capability situated in the partnership between human and machine rather than in the human alone.
The distinction rests on the principle that scaffolding exists to be withdrawn. Every function the scaffold performs is temporary. The scaffold succeeds when the child builds the next pyramid alone, when the student organizes an argument without the teacher's outline, when the junior developer diagnoses the bug at 2 a.m. without the senior colleague. The measure of effective support is not the quality of the supported performance but the quality of the independent performance that follows the support's withdrawal.
Prosthesis has a different logic. The person with a prosthetic limb does not expect, through use of the prosthesis, to develop the biological limb it replaces. The prosthesis is permanent support, and its value lies in its permanence — in the fact that it will be there tomorrow and next year and for the rest of the user's life. There is nothing wrong with prosthesis. It is not a failure of scaffolding. It is a different kind of support, designed for different ends.
The confusion between the two is the problem. When a tool designed to permanently extend capability is mistaken for a tool that builds capability, the user experiences the supported performance as their own — and experiences the tool's removal as self-diminishment rather than as a test. Segal's admission that turning off Claude 'felt like voluntarily diminishing yourself' is the diagnostic marker: scaffolded capability, when withdrawn, reveals internalized understanding; prosthetic capability, when withdrawn, reveals its absence.
Three forces push AI toward the prosthetic trajectory. Commercial incentive: usage-based revenue rewards continued use, not graduated withdrawal. User expectation: users want immediate help, not pedagogical restraint. Architectural limitation: current AI systems maintain conversational context but not developmental trajectory — they cannot distinguish a question asked for the tenth time from a question that is genuinely more sophisticated than the previous nine. The convergence produces scaffolds that do not withdraw.
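To make the architectural point concrete: a scaffolding-oriented system would need persistent per-learner state that outlives any one conversation. The sketch below is purely illustrative and corresponds to no real product's design; `TrajectoryTracker`, `observe`, and the caller-supplied sophistication level are all invented for this example, and a real system would have to estimate topic and level from the question rather than take them as inputs.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class TopicHistory:
    """Everything a learner has asked about one topic, across all sessions."""
    questions: list = field(default_factory=list)  # question texts, oldest first
    levels: list = field(default_factory=list)     # estimated sophistication per question

class TrajectoryTracker:
    """Hypothetical persistent per-learner state that a purely conversational
    system lacks. The caller supplies topic and sophistication level here;
    a real system would have to classify both from the question itself."""

    def __init__(self):
        self.history = defaultdict(TopicHistory)

    def observe(self, topic: str, question: str, level: int) -> str:
        h = self.history[topic]
        if not h.levels:
            verdict = "first contact"
        elif level > max(h.levels):
            verdict = "progressing"  # more sophisticated than anything asked before
        elif question in h.questions:
            verdict = "repeat"       # same question again: nothing is transferring
        else:
            verdict = "plateau"      # new wording, same level of sophistication
        h.questions.append(question)
        h.levels.append(level)
        return verdict

tracker = TrajectoryTracker()
print(tracker.observe("sql-joins", "What is a JOIN?", level=1))                        # first contact
print(tracker.observe("sql-joins", "What is a JOIN?", level=1))                        # repeat
print(tracker.observe("sql-joins", "Why does my LEFT JOIN duplicate rows?", level=3))  # progressing
```

The point of the sketch is the state, not the classifier: a system that retains only conversational context has no history to consult, so the tenth repetition of a question is indistinguishable from a first contact.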
The distinction is implicit throughout Bruner's six decades of educational research but is crystallized in the Bruner — On AI volume as the central diagnostic for the AI moment. Related concepts appear in Shannon Vallor's work on moral deskilling, Hubert Dreyfus's account of embodied expertise, and the literature on the ironies of automation.
Directional difference. Scaffolding moves toward independence; prosthesis maintains dependence. The distinction is in the trajectory, not the quality of support.
Purpose of obsolescence. Scaffolding succeeds when no longer needed; prosthesis succeeds by remaining indispensable.
Diagnostic marker. When support is withdrawn, scaffolding reveals internalized capability; prosthesis reveals dependency. The felt sensation of self-diminishment on removal is the signal.
Three structural forces. Commercial incentive, user expectation, and architectural limitation converge to push AI toward prosthesis even when no designer chose that trajectory.
Invisible from outside. While the support is in place, scaffolded and prosthetic performance look identical; only the withdrawal test distinguishes them.
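The withdrawal test can be stated operationally. The toy sketch below, with invented names and an arbitrary retention threshold, shows how identical supported performance can yield opposite diagnoses once the support is removed:

```python
def withdrawal_test(supported_scores, unsupported_scores, retention_threshold=0.8):
    """Compare task performance with the tool against performance after the
    tool is removed. High retention suggests scaffolding (the capability was
    internalized); low retention suggests prosthesis. Scores are task-success
    rates in [0, 1]; the threshold is illustrative, not empirically grounded."""
    supported = sum(supported_scores) / len(supported_scores)
    unsupported = sum(unsupported_scores) / len(unsupported_scores)
    retention = unsupported / supported if supported else 0.0
    label = "scaffold-like" if retention >= retention_threshold else "prosthesis-like"
    return round(retention, 2), label

# The same supported performance, opposite diagnoses after withdrawal:
print(withdrawal_test([0.9, 0.95, 0.9], [0.85, 0.8, 0.9]))  # (0.93, 'scaffold-like')
print(withdrawal_test([0.9, 0.95, 0.9], [0.3, 0.25, 0.4]))  # (0.35, 'prosthesis-like')
```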
The framework raises the question of whether AI can be redesigned for graduated withdrawal given its commercial structure. Optimists point to educational AI systems (Abel, Khan Academy's tutor) that deliberately withhold answers. Skeptics argue that commercial AI tools — the ones millions of workers actually use — are structurally committed to the prosthetic trajectory because their business model depends on continued use.
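At the level of mechanism, graduated withdrawal is not mysterious; the dispute is over incentives, not feasibility. A toy policy sketch, with invented thresholds and no claim that the educational systems named above work this way: the help level drops as the learner's recent unaided success rate rises, reaching silence once independence is demonstrated.

```python
def support_level(recent_successes: int, recent_attempts: int) -> int:
    """Hypothetical graduated-withdrawal policy: the help level (3 = full
    solution, 2 = worked hint, 1 = nudge, 0 = silence) falls as the learner's
    recent unaided success rate rises. Thresholds are invented for illustration."""
    if recent_attempts == 0:
        return 3                 # no evidence yet: full support
    rate = recent_successes / recent_attempts
    if rate < 0.25:
        return 3
    if rate < 0.5:
        return 2
    if rate < 0.8:
        return 1
    return 0                     # independence demonstrated: withdraw

for successes, attempts in [(0, 0), (1, 4), (3, 6), (9, 10)]:
    print(f"{successes}/{attempts} -> level {support_level(successes, attempts)}")
```

The skeptics' point survives the sketch: nothing in the mechanism is hard. What is hard is shipping a product whose success metric is its own declining usage.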