The inflation operates through the interface's transparency. Where previous tools required translation — learning a programming language, mastering a framework, navigating technical constraints — the translation served as a continuous reminder that output was a joint product. The friction of translation was also the friction of recognition. Natural language interfaces eliminate this recognition. The worker describes her intention in her own words, the tool produces output matching that intention, and the entire sequence feels like personal accomplishment because the experience is indistinguishable from being individually competent at a dramatically higher level.
The inflation is experiential rather than cognitive. It is not a reasoning error correctable by reflection. It is embedded in the structure of the interaction itself. The worker cannot simply decide to feel less capable. The sense of amplified capability is generated by the same cognitive machinery that would generate accurate efficacy assessment under traditional tool conditions, operating on interactions whose structure no longer provides the information the machinery would need to calibrate.
The clinical consequence is a specific form of self-concept vulnerability. The worker integrates the inflated efficacy into her professional identity. She thinks of herself as a person who can build certain things, solve certain problems, contribute at a certain level. When the tool becomes unavailable — through pricing changes, connectivity failures, platform decisions, organizational restrictions — the inflated capability contracts to baseline, and the worker experiences not merely reduced output but reduced self. The professional identity built on system efficacy collapses when the system changes.
The 2024 Nature Humanities and Social Sciences Communications study found that self-efficacy in AI learning moderated the relationship between AI adoption and burnout: workers who felt capable with the tools experienced less burnout than those who did not. The finding is consistent with the efficacy inflation analysis but addresses only one direction of the vulnerability. The workers who felt most capable — who had most fully integrated tool-amplified capability into their professional self-concept — were precisely the workers most vulnerable to the identity disruption that tool unavailability would produce.
Measuring personal versus system efficacy is methodologically challenging because the inflation is experiential. Direct questions about tool contribution will produce inaccurate responses because the fusion obscures attribution even for reflective workers. Indirect assessment through questions about capability under changed tool conditions — "If my AI tools were unavailable for a week, would I still feel competent?" — offers a more promising path.
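The indirect approach can be made concrete as a paired-item gap score: each capability statement is rated once under current conditions and once under an imagined tool-absent condition, and the difference estimates how much of the felt efficacy rests on the system. A minimal sketch follows; the item pairing, the 1–7 scale, and the simple mean-difference formula are illustrative assumptions, not a validated instrument.

```python
# Hypothetical sketch of scoring an indirect personal-vs-system efficacy
# assessment. The scale and gap formula are assumptions for illustration,
# not a validated psychometric instrument.

def efficacy_gap(with_tools: list[int], without_tools: list[int]) -> float:
    """Mean difference between paired self-efficacy ratings (1-7 Likert):
    each capability item is rated as-is, then again under an imagined
    tool-absent condition (e.g. "if my AI tools were unavailable for a
    week"). A large positive gap suggests the felt capability is systemic
    rather than personal."""
    if len(with_tools) != len(without_tools):
        raise ValueError("item lists must be paired")
    mean = lambda xs: sum(xs) / len(xs)
    return mean(with_tools) - mean(without_tools)

# Three paired items, rated with tools present vs. imagined absent:
gap = efficacy_gap(with_tools=[6, 7, 6], without_tools=[3, 2, 4])
print(round(gap, 2))  # prints 3.33 — most of the felt efficacy is systemic
```

A near-zero gap would be consistent with efficacy that survives tool disruption; the design choice of asking counterfactual rather than attributional questions follows the passage's point that direct attribution is obscured by the fusion.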
The concept emerges from the collision between Maslach's reduced efficacy dimension and the specific experiential properties of natural-language AI interaction. The original dimension assumed that efficacy would be measured against relatively stable baselines — the worker's sense of what she could do compared to what she had previously done or what her profession expected. AI tools violate the stability assumption by expanding capability rapidly and continuously, making baseline comparison systematically inaccurate.
Segal's account in You On AI of the senior Trivandrum engineer oscillating between excitement and terror captures the phenomenology: the excitement was efficacy inflation in real time; the terror was the partial recognition that the capability was systemic rather than personal; and the question he arrived at — "what is the remaining twenty percent actually worth?" — was the question the inflation ordinarily prevents workers from asking.
Experiential, not cognitive. The inflation is embedded in the interaction structure, not a reasoning error correctable by reflection.
Interface transparency. Natural language removes the friction that would have flagged the tool's contribution in earlier eras.
Validated from every direction. Colleagues, metrics, and managers confirm the inflated self-assessment because none of them separates worker contribution from tool contribution.
Identity integration. Inflated efficacy enters professional self-concept, creating vulnerability to tool disruption.
Dependency masquerading as resilience. High efficacy scores in AI-augmented workers indicate tool dependence, not psychological strength.