Expert-amplifying AI is the category of artificial intelligence systems designed to preserve the practitioner's expertise as the foundation of the system's capability: to extend the power and reach of expert knowledge rather than replace expertise with general-purpose capability available to anyone. Such systems are technically feasible and work well in narrow domains. They are the contemporary analog of record playback: the suppressed alternative whose underdevelopment reveals the political content of the choice of the dominant paradigm.
The architectural features of expert-amplifying AI are specifiable. The system is trained on the reasoning patterns of a specific clinical team, a specific law firm, a specific engineering practice — capturing not merely outputs but the chains of inference, the weighting of evidence, the contextual factors that shift probabilities. The training data is curated with consent and compensation. The model's reasoning is transparent to the expert user, who can evaluate it, challenge it, and override it. The competitive advantage accrues to the practitioner whose expertise the system amplifies, not to an employer who could substitute a cheaper operator.
Such systems exist. They exist in radiology, where models trained on the reasoning of senior radiologists augment junior practitioners' work. They exist in patent law, where tools trained on specific attorneys' argumentation support the attorneys' ongoing practice rather than replacing them. They exist in industrial engineering, where design systems learn from senior engineers' specific decision patterns. In each case, the system is most powerful in the hands of the most knowledgeable user.
The paradigm is systematically underdeveloped relative to general-purpose expertise-replacing models for the structural reason Noble identified in the record playback case: the smaller market. Expert-amplifying systems serve the population of existing experts. Expertise-replacing systems serve the vastly larger population of everyone who lacks expertise and wants the tool to supply it. Market logic rewards the larger market, and the smaller one attracts less investment even when its systems are technically superior for their specific purposes.
The suppression is not absolute. Expert-amplifying AI continues to develop at the margins, funded by specific professional communities willing to pay for tools that serve their interests rather than their employers'. But expert-amplifying AI receives a small fraction of the research investment, commercial deployment, and cultural attention directed at general-purpose models. The disparity is not a judgment about technical merit. It is a judgment about market size, and market size is determined by institutional structures that reward expertise-replacement over expertise-amplification.
The category is articulated as a contemporary parallel to record playback, but the underlying architectural ideas trace back to decades of expert systems research — the AI approach dominant from the 1960s through the 1980s that attempted to capture and operationalize expert knowledge. The failure of classical expert systems to scale was real, but the framework has been renewed by modern machine learning techniques that can handle the tacit dimensions classical approaches could not encode.
Expertise preserved, not replaced. The practitioner's knowledge remains the foundation; the system extends reach without eliminating the practitioner's role.
Smaller market, systematically underinvested. The paradigm serves existing experts rather than the larger market of those who lack expertise, which produces predictable underinvestment.
Technically feasible. The paradigm works; narrow-domain implementations demonstrate both capability and commercial viability.
Recoverable alternative. Unlike record playback, expert-amplifying AI is not yet lost; the political question is whether institutional conditions can be created to support its development.
Critics argue that expert-amplifying AI is less democratic than general-purpose models, entrenching existing expertise hierarchies rather than distributing capability broadly. The Noble framework concedes the tension but insists that distribution is an empirical question: general-purpose models distribute nominal capability broadly while concentrating actual control narrowly, and the democratic language of the dominant paradigm obscures what its architecture actually does.