The economic literature identifies three mechanisms for partially resolving information asymmetry: signaling (the informed party takes a costly action to communicate quality), screening (the uninformed party designs a mechanism inducing revelation), and reputation (accumulated track record over repeated interactions). Each mechanism has worked in specific markets for specific goods. Each requires reconstruction for AI-augmented professional markets, where the lemons problem of polished output has disabled the traditional signals on which each mechanism depended.
Signaling requires a costly action that credibly communicates quality. Educational credentials function as signals because obtaining them requires effort correlated with the underlying capability the credential attests to. In AI-augmented markets, potential signals include process transparency (documented analytical trails), verification workflows (evidence of independent review), and third-party certification of judgment exercise. Each signal must be costly enough to be credible yet observable enough to be interpretable — a design challenge the markets have not yet solved.
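The costly-yet-interpretable requirement has a standard formalization in Spence's two-type model (a textbook sketch, not drawn from this text: $s$ is the signal level, $w$ the premium the market pays for it, and $c_H$, $c_L$ the cost of acquiring the signal for high- and low-quality producers):

```latex
% Separating equilibrium: the market premium for the signal covers the
% high type's cost of acquiring it but not the low type's, so only
% high types find it profitable to signal.
c_H(s^*) \;\le\; w(s^*) - w(0) \;<\; c_L(s^*)
```

AI assistance breaks this inequality for output-based signals by collapsing $c_L$ toward $c_H$: polished output is now nearly as cheap for the low type as for the high type. That is why the candidate signals above target process and judgment, where a cost gap between types survives.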
Screening requires the uninformed party to design mechanisms that induce the informed party to reveal quality. Segal provides an elegant example: the teacher who stopped grading essays and started grading questions. Shifting evaluation from output (where AI has eliminated the quality differential) to the meta-cognitive work of evaluating output (where human judgment remains scarce) reveals the underlying investment. The mechanism generalizes: law firms evaluating attorneys on issues the AI missed, consulting firms evaluating analysts on critiques of AI-generated analyses, software companies evaluating engineers on architectural risks the AI did not flag.
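Segal's grading shift can be written as a screening condition (an illustrative formalization; the notation is mine, not the source's):

```latex
% theta in {H, L}: the professional's underlying judgment quality.
% q_out(theta):  observable quality of the AI-assisted output.
% q_meta(theta): observable quality of the meta-cognitive work
%                (questions asked, errors caught, risks flagged).
q_{\mathrm{out}}(H) - q_{\mathrm{out}}(L) \approx 0,
\qquad
q_{\mathrm{meta}}(H) - q_{\mathrm{meta}}(L) \gg 0
```

Scoring $q_{\mathrm{meta}}$ instead of $q_{\mathrm{out}}$ therefore recovers a statistic that still separates the types, which is exactly what the law-firm, consulting, and software variants of the mechanism do.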
Reputation accumulates through repeated interactions with observable long-term consequences. The professional whose work consistently withstands scrutiny, generates value over time, and avoids the costly errors that confident wrongness eventually produces builds a reputation that signals genuine expertise. The mechanism works, but slowly. It requires time for downstream consequences to materialize, and during that interval the market cannot distinguish genuine expertise from polished surface. In AI-augmented markets moving at unprecedented speed, the interval may be long enough for professional standards to suffer significant damage before reputation feedback engages.
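The speed problem can be made precise with a standard repeated-game sketch (illustrative assumptions: $p$ is the per-period premium a good reputation earns, $g$ the one-shot gain from passing off surface work as expert work, $\delta$ the discount factor, and $k$ the number of periods before consequences become observable):

```latex
% Reputation sustains honest effort only when the premium forfeited
% from period k onward, once shirking becomes observable, outweighs
% the immediate gain from shirking:
g \;\le\; \sum_{t=k}^{\infty} \delta^{t}\, p \;=\; \frac{\delta^{k}}{1-\delta}\, p
```

The right-hand side shrinks geometrically in $k$, so lengthening the feedback lag weakens reputation exactly as described: markets where work moves fast but consequences surface slowly are where the mechanism fails first.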
The three mechanisms interact. Signaling works better when reputation markets reward the signals. Screening works better when signals are already reliable. Reputation works better when screening and signaling reduce the noise in the quality measurements it aggregates. The failure of any one mechanism cascades to the others, which is why the AI-augmented professional market requires coordinated institutional reconstruction across all three dimensions simultaneously.
The three mechanisms were developed as responses to Akerlof's 1970 lemons analysis: Michael Spence's 1973 signaling model, Michael Rothschild and Joseph Stiglitz's 1976 screening model, and a subsequent literature on reputation in repeated games. Akerlof, Spence, and Stiglitz shared the 2001 Nobel Prize for their work on markets with asymmetric information.
Signaling shifts cost to the informed party. The producer takes a costly action that low-quality producers cannot profitably mimic, allowing high-quality producers to distinguish themselves.
Screening shifts cost to the uninformed party. The evaluator designs mechanisms that induce self-revelation, transferring the cost of resolving the asymmetry from seller to buyer.
Reputation aggregates over time. Repeated interactions generate track records that reduce per-interaction asymmetry at the cost of delayed feedback.
The mechanisms require coordinated reconstruction. AI-augmented markets have disabled the surface signals on which each mechanism depended, requiring new institutional designs for professional quality verification.
The practical question is which mechanism best fits which AI-augmented professional market. Markets with quick feedback loops (software, where code either works or does not) can rely more heavily on reputation. Markets with slow feedback (legal strategy, medical diagnosis) require stronger signaling and screening. The institutional design challenge is to match mechanism to market without creating perverse incentives or raising barriers to entry that defeat the purpose.
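The feedback-speed argument can be illustrated with a toy simulation (every name and parameter here is hypothetical, chosen only to make the point concrete): two professionals with different hidden error rates produce one piece of work per period, outcomes become observable only after a lag, and the market separates them once the observed error-rate gap is large enough.

```python
import random

def periods_until_separated(p_expert=0.05, p_surface=0.30,
                            lag=0, min_gap=0.15, min_obs=10,
                            seed=42, max_periods=10_000):
    """Toy reputation model. Each period both professionals deliver one
    piece of work; whether it contains a costly error only becomes
    visible `lag` periods later. The market 'separates' the two once it
    has at least `min_obs` visible outcomes for each and the observed
    error-rate gap exceeds `min_gap`. Returns periods until separation."""
    rng = random.Random(seed)
    errs_expert = []   # outcomes in production order, revealed after `lag`
    errs_surface = []
    for t in range(1, max_periods + 1):
        errs_expert.append(rng.random() < p_expert)
        errs_surface.append(rng.random() < p_surface)
        visible = t - lag  # only outcomes at least `lag` periods old are public
        if visible >= min_obs:
            rate_e = sum(errs_expert[:visible]) / visible
            rate_s = sum(errs_surface[:visible]) / visible
            if rate_s - rate_e >= min_gap:
                return t
    return max_periods

fast = periods_until_separated(lag=0)    # quick feedback: software-like market
slow = periods_until_separated(lag=50)   # slow feedback: legal/medical-like market
print(fast, slow)  # slow feedback delays separation by exactly the lag here
```

In this toy, the delay to separation grows one-for-one with the feedback lag, which is the essay's point in miniature: where consequences surface slowly, reputation alone leaves a long interval in which the market cannot tell the two apart, and signaling and screening must carry more of the load.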