Becker's 1957 doctoral dissertation, published as The Economics of Discrimination, introduced the concept that prejudice is not free. It is costly — and the cost is borne primarily by the discriminator. The discrimination coefficient is a measure of the premium an employer is willing to pay to indulge a preference for one type of worker over another. An employer with a coefficient of, say, twenty percent against a particular group will hire from that group only if the group's members accept wages at least twenty percent below the wages of the preferred group. The employer is, in effect, paying a tax — not to the government but to his own prejudice. The tax takes the form of higher labor costs, reduced access to the full talent pool, and a competitive disadvantage relative to employers who do not discriminate and can therefore hire the best workers at the market wage regardless of origin.
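The arithmetic of the coefficient can be sketched in a few lines. The snippet below is a hypothetical illustration, not drawn from Becker's text: it uses the standard gross-up convention, in which an employer with coefficient d perceives a disfavored worker's wage w as costing w(1 + d), and hires from that group only when the grossed-up wage still undercuts the preferred group's wage. (This corresponds roughly to the "twenty percent below" condition described above.) The function names and numbers are invented for the example.

```python
def hires_disfavored(w_preferred: float, w_disfavored: float, d: float) -> bool:
    """An employer with discrimination coefficient d perceives a disfavored
    worker's wage w as costing w * (1 + d), and hires from that group only
    when the perceived cost still undercuts the preferred group's wage."""
    return w_disfavored * (1 + d) <= w_preferred

def cost_of_prejudice(w_preferred: float, w_disfavored: float, d: float) -> float:
    """Extra labor cost per hire borne by a discriminating employer who
    passes over an equally productive but cheaper disfavored worker."""
    if hires_disfavored(w_preferred, w_disfavored, d):
        return 0.0  # the prejudice is not binding at these wages
    return w_preferred - w_disfavored  # the 'tax' paid to prejudice

# With d = 0.20, a disfavored worker asking the market wage of 100 is
# perceived to cost 120 and is passed over; one asking 83 is hired,
# since 83 * 1.2 = 99.6 <= 100.
print(hires_disfavored(100, 100, 0.20))  # False
print(hires_disfavored(100, 83, 0.20))   # True
print(cost_of_prejudice(100, 90, 0.20))  # 10 per hire paid to prejudice
```

The second function makes the essay's point concrete: the cost of the coefficient falls on the discriminator, as higher labor costs per hire, before any competitive effects are counted.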
The AI transition is applying unprecedented pressure to the discrimination coefficient across the global knowledge economy. The mechanism is not moral. It is structural. AI tools do not know where a user went to school. They do not hear an accent. They do not see a skin color, a gender, a disability. They do not care whether the person describing the problem is in San Francisco or in Lagos, whether she has a degree from MIT or learned to code from YouTube videos in a one-room apartment in Dhaka. The tool processes the description and produces the output. The quality of the output depends on the quality of the description — on the human's judgment, clarity, and specificity — not on any characteristic that has historically functioned as a proxy for competence.
The credential has always served a dual function. Its first function is informational: it signals competence. Its second function is discriminatory: it rations access to opportunity based on characteristics (family wealth, geographic location, social network) that are correlated with the credential but not with the competency it is supposed to certify. When a tool allows competency to be demonstrated directly — when the developer in Lagos can build a working product without the intermediation of a credential — the discriminatory function is exposed.
The claim requires qualification. AI does not eliminate inequality. It reduces one specific component — the barrier between imagination and execution, between competence and its market recognition — while leaving other components intact. Access to tools requires connectivity, hardware, and financial stability that billions lack. The tools are optimized for English-language users. The benefits accrue disproportionately to those who already have the general human capital to direct the tool effectively. The coefficient is declining. It has not reached zero. The distance to zero is measured not in technology but in the foundational human capital that makes the tool useful.
Becker's dissertation was completed at Chicago in 1955 and published in 1957, making an argument so counterintuitive that the profession took two decades to absorb it. The work earned him a reputation as an economist willing to apply rational-choice analysis to domains where the framework seemed inappropriate — a reputation that deepened through his subsequent work on crime, the family, and addiction, and that culminated in the 1992 Nobel Prize.
- Discrimination has measurable costs. The discriminating firm operates below its production possibility frontier, earning less than non-discriminating competitors and creating pressure toward inclusion.
- The credential as discriminatory filter. Beyond its informational function, the credential rations access based on characteristics correlated with the credential but not with the competency it certifies.
- Structural bypass through AI. When competency can be demonstrated directly through AI-augmented output, the discriminatory function of the credential is exposed and partially circumvented.
- The declining but nonzero coefficient. AI reduces direct and structural discrimination but does not eliminate the underlying inequalities in the human capital required to use the tools effectively.
Critics objected that Becker's framework understates the persistence of discrimination — markets do not automatically eliminate prejudice, as his model sometimes seemed to predict. Becker acknowledged the complication without abandoning the framework: the model does not claim that markets automatically eliminate discrimination, only that discrimination carries a cost, and that the cost creates pressure, however slow, toward inclusion. The AI case extends this logic into the twenty-first century, showing both the power of the mechanism and its limits.