AI can instruct with extraordinary capability. Current large language models explain concepts with clarity, demonstrate techniques with breadth, correct errors with specificity, and provide domain knowledge with a scope no individual teacher can match. What AI cannot do is design challenges that target the individual practitioner's specific developmental needs with the precision that effective deliberate practice requires. The limitation is not computational but evaluative. Designing practice activities requires understanding not just what the practitioner got wrong but why — what specific representational gap produced the error, and what specific kind of practice would close that gap most efficiently.
The teacher maintains an independent model of the student's understanding that differs from the student's self-assessment. The most consequential representational gaps are precisely the ones the student cannot perceive, because perceiving them requires the expertise the practice is supposed to develop. This independent perspective allows the teacher to design activities addressing needs the student does not recognize, to provide feedback that challenges the student's self-assessment, and to maintain developmental trajectory through phases of apparent stagnation.
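One way to make this independence concrete is Bayesian Knowledge Tracing, a standard technique from intelligent-tutoring research that estimates mastery purely from observed performance, never from the learner's self-report. The sketch below is minimal and the parameter values are illustrative, not drawn from any particular system:

```python
# Bayesian Knowledge Tracing: maintain an estimate of skill mastery
# from observed correctness alone, independent of the learner's own
# self-assessment. Parameter values below are illustrative only.

P_INIT = 0.2    # prior probability the skill is already mastered
P_LEARN = 0.15  # probability of learning the skill during one attempt
P_SLIP = 0.1    # probability of answering wrong despite mastery
P_GUESS = 0.25  # probability of answering right without mastery

def bkt_update(p_mastery: float, correct: bool) -> float:
    """Posterior mastery estimate after one observed attempt."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        total = evidence + (1 - p_mastery) * P_GUESS
    else:
        evidence = p_mastery * P_SLIP
        total = evidence + (1 - p_mastery) * (1 - P_GUESS)
    posterior = evidence / total
    # Account for learning that may occur during the attempt itself.
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for outcome in [True, False, True, True]:  # observed answer correctness
    p = bkt_update(p, outcome)
print(f"estimated mastery: {p:.2f}")
```

The point of the sketch is structural: the tracker's estimate can diverge sharply from what the learner believes about herself, which is exactly the independent perspective the paragraph above describes — though a scalar per skill captures far less than a teacher's model of *why* an error occurred.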
Claude and other current AI systems model user requests, not user understanding. These are different things. A user can request a solution to a problem she does not understand, and the system cannot currently distinguish this request from one made by a user who understands deeply and uses the tool to accelerate implementation. If the user's self-assessment is inaccurate — believing she understands what she does not, or needing help in area A when her actual developmental need is in area B — the tool responds to the inaccurate self-assessment with the same helpfulness it brings to accurate ones. No mechanism detects the discrepancy.
A growing body of work is attempting to bridge this gap. A 2025 paper in Education Sciences describes a generative AI platform for teacher training that creates situations of calibrated difficulty, provides immediate feedback, and allows repeated goal-directed practice. Practica Learning has integrated deliberate practice methodology with AI avatars simulating difficult professional conversations. These systems represent genuine attempts to move AI from production-assistance toward developmental-coaching, but they remain limited to specific bounded domains where skills are identifiable and feedback loops are relatively tight.
The distinction between instruction and design emerged from Ericsson's studies of master teachers in music, chess, and surgery across multiple decades. The finding was consistent: expert teachers did not produce better instructions than competent teachers; they designed better developmental trajectories for their students.
Design, not instruction. The teacher's primary expertise is designing challenges that make learning a condition of success.
Independent model required. Effective teaching requires a model of the student's understanding separate from the student's self-assessment.
Four functions. Developmental initiative, invisible-weakness identification, strategic-difficulty introduction, error-pattern diagnosis.
AI limitation is evaluative. Current systems model requests, not understanding — and cannot diagnose gaps the user cannot see.
Bridging attempts exist. Domain-specific systems can simulate aspects of the teaching function but do not yet address open-ended, judgment-intensive domains where expertise matters most.