
Generative AI at Work (2023)

Erik Brynjolfsson, Danielle Li, and Lindsey Raymond's landmark 2023 study of 5,179 customer-service agents using a generative AI assistant — the first rigorous empirical evidence that AI compresses skill gaps by helping the least-skilled workers the most.
The 2023 NBER working paper Generative AI at Work, by Erik Brynjolfsson, Danielle Li, and Lindsey Raymond, provided some of the first rigorous empirical evidence about how generative AI affects worker productivity. Studying the staggered rollout of a generative AI conversational assistant to 5,179 customer support agents at a large software company, the authors found that access to the tool increased productivity (measured in issues resolved per hour) by 14 percent on average. The gains were radically uneven across the skill distribution: novice and low-skilled workers improved by 34 percent, while experienced and highly skilled workers saw minimal impact. The finding suggested the AI was effectively capturing and disseminating the tacit knowledge of the best workers — helping newer employees move down the experience curve faster. If the pattern generalized, AI could compress skill inequality rather than widen it, though subsequent evidence on junior hiring declines complicated the optimistic interpretation.

The study's design enabled causal inference in a way most AI productivity research has not. Because the tool was rolled out in a staggered fashion across teams, the authors could compare productivity changes in teams that received the tool at different times, controlling for general trends and seasonal effects. This quasi-experimental design — closer to a randomized trial than the correlational analyses that dominate the literature — gave the findings unusual empirical weight.
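The staggered-rollout comparison is, in essence, a difference-in-differences estimate: the productivity change in teams that received the tool, minus the change over the same period in teams that had not yet received it. A minimal sketch of that logic on simulated data (all numbers, the 0.3 "true effect," and the variable names are illustrative assumptions, not figures from the paper):

```python
# Illustrative difference-in-differences (DiD) sketch of the staggered-
# rollout logic. All numbers are simulated assumptions; nothing here
# comes from the paper's actual data.
import random

random.seed(0)

def simulate_agent(treated: bool, post: bool) -> float:
    """Issues resolved per hour: baseline 2.0, a common time trend of
    +0.1 after rollout, and a +0.3 effect only for treated agents."""
    base = 2.0 + (0.1 if post else 0.0)
    effect = 0.3 if (treated and post) else 0.0
    return base + effect + random.gauss(0, 0.05)

def mean(xs):
    return sum(xs) / len(xs)

n = 500
treated_pre  = [simulate_agent(True,  False) for _ in range(n)]
treated_post = [simulate_agent(True,  True)  for _ in range(n)]
control_pre  = [simulate_agent(False, False) for _ in range(n)]
control_post = [simulate_agent(False, True)  for _ in range(n)]

# DiD: change in treated teams minus change in not-yet-treated teams.
# Subtracting the control change strips out the common time trend,
# isolating the effect of the tool itself.
did = (mean(treated_post) - mean(treated_pre)) - \
      (mean(control_post) - mean(control_pre))
print(f"estimated treatment effect: {did:.2f}")  # close to the true 0.3
```

The key point the sketch illustrates is why staggered timing matters: not-yet-treated teams serve as the control group, so any economy-wide trend (the +0.1 above) cancels out of the estimate.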

The mechanism the authors identified was pedagogical: the AI was surfacing the responses and approaches of the company's most effective agents, making this tacit knowledge available in real time to agents who had not yet developed it through years of experience. The AI functioned less like a replacement for human judgment and more like an automated mentor, compressing the training timeline for skill development. Novices gained most because they had the most to learn. Experts gained least because they had already developed the capabilities the AI was now propagating.


The finding had major implications for the Great Decoupling debate. If AI generally compressed skill distributions, the technology could narrow inequality rather than widen it — a dramatic reversal of the pattern established by previous digital technologies. But the generalization had limits. The customer service context provided tight feedback loops, clear quality metrics, and well-defined tasks — conditions that may not hold across other work domains. Moreover, separate data began showing entry-level hiring declining sharply in AI-exposed occupations by 2025, suggesting that AI's effect on the existing workforce (compression) diverged from its effect on the future workforce (pipeline collapse).

The paper became one of the most cited economics papers on AI and work by 2025, shaping both the academic debate and policy discussions. It functioned as empirical ballast for the augmentation-over-automation argument — evidence that AI could function as augmentation in practice, not merely in principle, producing broadly distributed gains rather than concentrating them.

Origin

Danielle Li is an associate professor at MIT Sloan; Lindsey Raymond completed her PhD at MIT and is now at Stanford. The study drew on data from an anonymized Fortune 500 software company, which gave the researchers unusually granular performance records linked to each agent's use of the AI tool.

The working paper was released through NBER in April 2023, revised through 2024, and published in the Quarterly Journal of Economics in 2025. It entered the public discourse at the moment when AI workforce effects were becoming the central policy question, giving it disproportionate influence on both academic and public debates.

Key Ideas

Quasi-experimental design. The staggered rollout across teams enabled causal inference in a way most AI productivity research has not.

14 percent average productivity gain. Access to the AI assistant increased issues resolved per hour across the full workforce.

34 percent gain for novices. Workers with less experience saw dramatically larger productivity improvements than experienced workers.

Minimal gain for experts. Experienced workers showed almost no productivity improvement from the tool.

Compression mechanism: tacit knowledge diffusion. The AI surfaced the approaches of top performers, making them available to novices in real time.

Optimistic generalization limited. The customer service context may not generalize, and the effect on existing workers diverges from the effect on pipeline hiring.

Further Reading

  1. Brynjolfsson, Erik, Danielle Li, and Lindsey Raymond. Generative AI at Work. NBER Working Paper, 2023.
  2. Noy, Shakked, and Whitney Zhang. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. Science, 2023.
  3. Peng, Sida, Eirini Kalliamvakou, Peter Cihon, and Mert Demirer. The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. arXiv preprint, 2023.