Nakamura conducted extensive research on mentoring alongside her work on flow and vital engagement, producing a body of evidence that mentoring's core function is not the transmission of knowledge or technique but the transmission of standards — the implicit, often inarticulate criteria for what counts as excellent work within a domain. Standards cannot be written in manuals. They are transmitted socially, through the specific mechanism of sustained proximity between a master and an apprentice: the apprentice watches the master reject what looks acceptable and accept what looks ordinary, gradually absorbing the distinction between competence and mastery. The AI moment threatens this transmission mechanism by making mentoring functionally optional — the junior builder can solve any implementation problem by consulting Claude — even as the developmental function of the relationship remains essential.
Nakamura's research on 'good mentoring' emerged from the Good Work Project, a large study she conducted with Csikszentmihalyi and Howard Gardner. The project examined how excellent practitioners in various domains developed and sustained their work across careers. A consistent finding: the practitioners who had experienced strong mentoring relationships — not just instruction, but sustained proximity to practitioners who embodied high standards — developed deeper and more durable vital engagement than those who had not.
The mechanism Nakamura identified is specifically relational. The mentor does not explain why she rejects a component that looks perfect. She says something like, 'It will work, but it will not last.' The standard is not a rule. It is a sensibility — a way of caring about the work that can only be transmitted through shared practice. The apprentice absorbs the standard gradually, through the specific intimacy of watching the master evaluate hundreds of instances over years.
AI provides feedback. It provides it instantly, consistently, and without the social friction that makes human feedback difficult. But the feedback AI provides is calibrated to a standard that is, by nature, aggregated — trained on the vast corpus of human output, optimized to produce responses that satisfy the statistical average of human preference. It is excellent at telling the builder whether her code works. It is less reliable at telling her whether her code is beautiful. It cannot transmit the watchmaker's standard because that standard is not a feature of the output. It is a property of the relationship between practitioner and domain, transmitted through the specific social mechanism of shared practice.
The structural threat to mentoring in the AI age is not that it becomes impossible but that it becomes optional. The junior builder who can solve any implementation problem by consulting Claude has no functional incentive to consult a senior colleague. The relational friction that previously maintained the mentoring relationship — the junior builder's need for help — has been eliminated. The developmental function that relied on that friction remains essential, but the relationship itself, once maintained by functional necessity, begins to thin.
Nakamura's mentoring research drew on both her longitudinal studies of creative professionals and her involvement in the Good Work Project at Claremont Graduate University. The project's empirical findings, published across multiple papers in the early 2000s, established the specifically relational character of standards transmission and laid the groundwork for her later framework of vital engagement.
Standards, not knowledge. The mentor's core function is transmitting implicit criteria for excellent work — criteria that cannot be articulated in rules.
Proximity as mechanism. Standards are absorbed through sustained shared practice, not through instruction. The apprentice learns what cannot be taught.
The watchmaker's standard. 'It will work, but it will not last' — the specific kind of evaluation that raises the bar beyond functionality toward meaning.
AI feedback as aggregation. The machine provides statistically average evaluation, not domain-specific standards. It can tell you if it works; it cannot tell you if it matters.
The optionality threat. AI makes the functional need for mentoring disappear. The developmental need remains, but the structural support for the relationship erodes.
The strongest counterargument is that AI can transmit standards through carefully designed training data and prompting — that a well-configured model can embody and communicate the standards of a specific community. Defenders of Nakamura's framework reply that standards transmission requires a sustained relationship with a practitioner who holds the standards and evaluates output against them over years. Aggregated feedback, even when calibrated to a community, remains structurally different from the specific, relational feedback of a mentor who knows the apprentice's work history.